<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
  <!-- Source: https://percona.community/blog/index.xml -->
  <channel>
    <title>Percona Community Blog - learn about MySQL, MariaDB, PostgreSQL, and MongoDB</title>
    <link>https://siftrss.com/f/dJlegZB6Nm</link>
    <description>Percona Community Blog is a place where you can learn from and share community knowledge about open source databases (MySQL, PostgreSQL, MariaDB, and MongoDB) and various tools. Check out some of the great free content, contribute, and share your experience with other community members.</description>
    <atom:link href="https://siftrss.com/f/dJlegZB6Nm" rel="self" type="application/rss+xml"/>
    <generator>Hugo</generator>
    <language>en-us</language>
    <copyright>© Percona Community. MySQL, InnoDB, MariaDB and MongoDB are trademarks of their respective owners.</copyright>
    <lastBuildDate>Tue, 07 Apr 2026 11:03:38 PDT</lastBuildDate>
    <item>
      <title>Percona Bug Report: March 2026</title>
      <link>https://percona.community/blog/2026/04/03/percona-bug-report-march-2026/</link>
      <guid>https://percona.community/blog/2026/04/03/percona-bug-report-march-2026/</guid>
      <pubDate>Fri, 03 Apr 2026 00:00:00 UTC</pubDate>
      <description>At Percona, we operate on the premise that full transparency makes a product better. We strive not only to build the best open-source database products but also to help you manage any issues that arise in any of the databases we support. And, in true open-source form, we report back on any issues or bugs you might encounter along the way.</description>
      <content:encoded>&lt;p&gt;At Percona, we operate on the premise that full transparency makes a product better. We strive not only to build the best open-source database products but also to help you manage any issues that arise in any of the databases we support. And, in true open-source form, we report back on any issues or bugs you might encounter along the way.&lt;/p&gt;
&lt;p&gt;We constantly update our &lt;a href="https://perconadev.atlassian.net/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; and monitor &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other boards&lt;/a&gt; to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This post is a central place to get information on the most noteworthy open and recently resolved bugs.&lt;/p&gt;
&lt;p&gt;This edition of our bug report covers the following bugs.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-10378" target="_blank" rel="noopener noreferrer"&gt;PS-10378&lt;/a&gt;: In the MeCab plugin, BOOLEAN MODE full-text queries with a LIMIT clause do not behave as expected. Although the optimizer indicates that ranking should be skipped (Ft_hints: no_ranking), the query still performs full ranking and sorting before applying LIMIT, preventing the intended optimization and impacting performance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.4.x&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46-37, 8.4.9-9, 9.7.0-0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-10448" target="_blank" rel="noopener noreferrer"&gt;PS-10448&lt;/a&gt;: Insert prepared statements fail on partitioned tables with timestamp-based partitions when the partition key uses a non-constant default (e.g., &lt;strong&gt;CURRENT_TIMESTAMP&lt;/strong&gt;). After initial execution, the statement remains bound to the original partition and fails with a partition mismatch error when data should go into a different partition.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.42-33, 8.0.43-34, 8.0.44-35, 8.4.7-7&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: &lt;a href="https://bugs.mysql.com/bug.php?id=119309" target="_blank" rel="noopener noreferrer"&gt;Bug #119309&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Modify statements to explicitly use &lt;strong&gt;NOW()&lt;/strong&gt; (requires updating procedures)&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46-37, 8.4.9-9, 9.7.0-0&lt;/p&gt;
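&lt;p&gt;As an illustrative sketch of the workaround (table and column names are hypothetical), passing the timestamp explicitly instead of relying on the non-constant default keeps each execution of the prepared statement from binding to a stale partition:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;-- Hypothetical table t1, partitioned by a TIMESTAMP column
-- whose default is CURRENT_TIMESTAMP.
-- Instead of letting the default supply the partition key:
PREPARE ins FROM 'INSERT INTO t1 (id) VALUES (?)';
-- ...supply it explicitly with NOW(), so the partition key
-- is re-evaluated on every execution:
PREPARE ins FROM 'INSERT INTO t1 (id, created_at) VALUES (?, NOW())';&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;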
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-10481" target="_blank" rel="noopener noreferrer"&gt;PS-10481&lt;/a&gt;: The range optimizer incorrectly falls back to a full table scan instead of using an index range scan for WHERE … IN() queries when values exceed column or prefix length on non-binary collations (e.g. utf8mb4_0900_ai_ci). A single truncated value in IN() can invalidate all valid ranges, forcing a full scan and degrading performance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.4.x&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: &lt;a href="https://bugs.mysql.com/bug.php?id=118009" target="_blank" rel="noopener noreferrer"&gt;Bug #118009&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not fixed yet&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-10593" target="_blank" rel="noopener noreferrer"&gt;PS-10593&lt;/a&gt;: The audit_log plugin can crash (segfault) during memcpy operations when configured with audit_log_strategy=PERFORMANCE, audit_log_policy=ALL, and buffering enabled. The issue can be reproduced under specific memory allocator setups (e.g., jemalloc) and also occurs with standard libc malloc, indicating instability in the plugin’s memory handling.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.34-26, 8.0.45-36, 8.4.7-7&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46-37, 8.4.9-9&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-10990" target="_blank" rel="noopener noreferrer"&gt;PS-10990&lt;/a&gt;: Server crashes (signal 11) in Item_cache::walk when executing queries that use JOIN with a subquery in an IN clause inside stored procedures. The issue occurs during query execution/privilege checking and is reproducible across MySQL and Percona Server 8.0.x versions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.45-36&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: &lt;a href="https://bugs.mysql.com/bug.php?id=115885" target="_blank" rel="noopener noreferrer"&gt;Bug #115885&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Execute the query outside the stored procedure&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not specified&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-10578" target="_blank" rel="noopener noreferrer"&gt;PS-10578&lt;/a&gt;: The legacy audit_log plugin does not populate the DB field in audit records unless the session is started with the –database option. Even when a database is selected later using USE or referenced explicitly in queries, the DB field may remain empty.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.43-34, 8.0.45-36&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Use Audit Log Filter component (8.4) or audit log filter (8.0), where this issue is not reproducible&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not planned to be fixed&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4844" target="_blank" rel="noopener noreferrer"&gt;PXC-4844&lt;/a&gt;: In PXC clusters under high load, inconsistency voting during DDL or DCL operations can trigger an internal deadlock, causing standby nodes to get stuck applying transactions and continuously request FC pause. Although voting completes successfully and no node is expelled, writes remain blocked in wsrep: replicating and certifying write set, effectively stalling the cluster until the affected node is restarted.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.42&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Restart the blocked standby node to restore cluster activity&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not fixed yet&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4799" target="_blank" rel="noopener noreferrer"&gt;PXC-4799&lt;/a&gt;: In PXC clusters, when a backup lock (&lt;strong&gt;LOCK INSTANCE FOR BACKUP&lt;/strong&gt;) is active and a replicated DDL is pending, executing &lt;strong&gt;FLUSH TABLES WITH READ LOCK&lt;/strong&gt; on the same node can trigger a deadlock. This results in an inconsistency vote and causes the node to leave the cluster, disrupting backup operations.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.42, 8.0.43, 8.4.6&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Avoid running DDL operations during backup or use a single backup instance instead of parallel runs&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46, 8.4.9, 9.7.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4814" target="_blank" rel="noopener noreferrer"&gt;PXC-4814&lt;/a&gt;: In PXC with &lt;strong&gt;wsrep_OSU_method=‘RSU’&lt;/strong&gt;, a failed DDL due to table name case mismatch (e.g., &lt;strong&gt;OPTIMIZE TABLE&lt;/strong&gt;) is incorrectly written to the binary log as a successful transaction (&lt;strong&gt;error_code=0&lt;/strong&gt;). This results in a GTID being generated for a failed operation, causing GTID inconsistencies across cluster nodes and in replication setups.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.33-25, 8.0.44, 8.4.6&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Validate table name case sensitivity before executing DDL in RSU mode&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.45, 8.4.8, 9.6.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4845" target="_blank" rel="noopener noreferrer"&gt;PXC-4845&lt;/a&gt;: After an IST failure (e.g., due to network issues), a PXC node may remain running in an inconsistent state instead of restarting, causing the donor and other nodes to become unresponsive. The joiner node gets stuck during state transfer instead of failing cleanly, impacting overall cluster availability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.42&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.45, 8.4.8, 9.6.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4849" target="_blank" rel="noopener noreferrer"&gt;PXC-4849&lt;/a&gt;: A PXC node fails to start after successful SST when &lt;strong&gt;read_only&lt;/strong&gt; or &lt;strong&gt;super_read_only&lt;/strong&gt; is enabled and event scheduler objects exist on the donor. During initialization, the event scheduler fails to load, causing the node to abort, making it impossible to run read-only nodes with events defined in the cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.44, 8.4.7&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Start the node without read_only, then enable it manually later, or remove events&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46, 8.4.9, 9.7.0&lt;/p&gt;
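&lt;p&gt;A sketch of the first workaround (assuming read_only/super_read_only are set in my.cnf): comment out the read-only options, start the node so the event scheduler can initialize after SST, then re-enable read-only mode at runtime:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;-- After the node has started without read_only/super_read_only:
SET GLOBAL super_read_only = ON;  -- enabling it also implies read_only = ON&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;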
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4965" target="_blank" rel="noopener noreferrer"&gt;PXC-4965&lt;/a&gt;: Passwords containing the &lt;code&gt;'&lt;/code&gt; character are incorrectly handled, causing syntax errors during replication (e.g., &lt;strong&gt;SET PASSWORD&lt;/strong&gt;) and triggering inconsistency voting that can force a node to leave the cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.45, 8.4.7&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Avoid using &lt;code&gt;'&lt;/code&gt; character in passwords&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46, 8.4.8, 9.6.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-5198" target="_blank" rel="noopener noreferrer"&gt;PXC-5198&lt;/a&gt;: Executing &lt;strong&gt;SELECT … FOR UPDATE SKIP LOCKED&lt;/strong&gt; can trigger InnoDB crashes with fatal errors (e.g., “Unknown error code 21: Skip locked records”) under concurrent transactional workloads. Instead of returning expected deadlock errors, the query causes mysqld to abort, impacting cluster stability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.33-25, 8.0.35-27, 8.0.36-28&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Avoid using &lt;strong&gt;SKIP LOCKED&lt;/strong&gt; in &lt;strong&gt;SELECT … FOR UPDATE&lt;/strong&gt; queries&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.46, 8.4.8, 9.6.0&lt;/p&gt;
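&lt;p&gt;For example (illustrative queue table), applying the workaround means dropping the SKIP LOCKED clause and tolerating lock waits instead:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;-- Instead of:
SELECT id FROM jobs WHERE state = 'new'
ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED;
-- use a plain locking read and retry on lock wait timeout:
SELECT id FROM jobs WHERE state = 'new'
ORDER BY id LIMIT 1 FOR UPDATE;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;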
&lt;hr&gt;
&lt;h2 id="percona-xtrabackup"&gt;Percona XtraBackup&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3543" target="_blank" rel="noopener noreferrer"&gt;PXB-3543&lt;/a&gt;: Incremental backups in XtraBackup can become significantly slower than full backups on instances with a very large number of small tables, due to excessive CPU usage in memset during incremental processing. This leads to severe performance degradation, with incremental backups taking hours compared to minutes for full backups.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.35-33, 8.0.35-34&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Use full backups instead of incremental backups&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.35-35, 8.4.0-6, 9.6.0-1&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3667" target="_blank" rel="noopener noreferrer"&gt;PXB-3667&lt;/a&gt;: Installation of XtraBackup 8.4 fails on RHEL 9–based systems due to dependency conflicts between percona-xtrabackup-84, perl(DBD::mysql), and incompatible libmysqlclient versions. Percona Server 8.4 provides libmysqlclient.so.24, while required dependencies expect libmysqlclient.so.21, resulting in unresolved package installation errors.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.4.0-5&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not specified&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-toolkit"&gt;Percona Toolkit&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2519" target="_blank" rel="noopener noreferrer"&gt;PT-2519&lt;/a&gt;: pt-query-digest fails when processing large, slow query logs, repeatedly throwing “Argument "" isn’t numeric” errors during the aggregate fingerprint stage. The tool retries multiple times but does not complete, resulting in stalled analysis and very slow progress.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.7.0, 3.7.1&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.3&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2511" target="_blank" rel="noopener noreferrer"&gt;PT-2511&lt;/a&gt;: pt-summary incorrectly reports that sshd is not running due to an invalid awk expression used to detect the process. The script checks the wrong field in ps output, causing false negatives even when sshd is active.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.7.1&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.3&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2516" target="_blank" rel="noopener noreferrer"&gt;PT-2516&lt;/a&gt;: pt-mongodb-index-check fails to detect duplicate indexes (e.g., &lt;code&gt;{a:1}&lt;/code&gt; and &lt;code&gt;{a:1, b:1}&lt;/code&gt;) and may produce no output, making it unclear whether the tool is functioning or connecting properly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.7.1&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not specified&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="pmm-percona-monitoring-and-management"&gt;PMM [Percona Monitoring and Management]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-14493" target="_blank" rel="noopener noreferrer"&gt;PMM-14493&lt;/a&gt;: PMM fails to start when using Podman with the &lt;strong&gt;–log-driver passthrough&lt;/strong&gt; option due to an error opening /dev/stderr during Nginx initialization. This causes the container to exit with configuration test failure, while other log drivers work as expected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.4.0, 3.4.1&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Use a different &lt;strong&gt;--log-driver&lt;/strong&gt; option such as none or journald&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.8.0&lt;/p&gt;
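&lt;p&gt;For example (container name, port mapping, and image tag are illustrative), starting PMM Server with the journald driver instead of passthrough avoids the Nginx /dev/stderr error:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;podman run -d --name pmm-server \
  --log-driver journald \
  -p 443:8443 \
  percona/pmm-server:3&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;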
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-14576" target="_blank" rel="noopener noreferrer"&gt;PMM-14576&lt;/a&gt;: PMM Client reports “failed to get backup status” errors during MongoDB backups, marking them as failed in the UI even though backups are successfully completed by PBM. This leads to incorrect backup status reporting and confusion for users.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.5.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Avoid using PMM Backup Management (not ideal)&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.9.0, 3.X&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-14594" target="_blank" rel="noopener noreferrer"&gt;PMM-14594&lt;/a&gt;: PMM incorrectly reports compatible XtraBackup versions as incompatible with supported MySQL versions during backup validation. This causes backups to be blocked in PMM even when the installed XtraBackup version is the latest available and should be accepted.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.5.0, 3.6.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Use the xtrabackup command-line tool to take backups&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.9.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-14852" target="_blank" rel="noopener noreferrer"&gt;PMM-14852&lt;/a&gt;: Some panels in the MongoDB InMemory dashboard show no data because they incorrectly use WiredTiger-specific metrics. As a result, dashboards for InMemory storage engine deployments can display empty or misleading panels instead of relevant metrics.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.2.0, 3.6.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.8.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-14906" target="_blank" rel="noopener noreferrer"&gt;PMM-14906&lt;/a&gt;: The postgres_exporter generates excessive &lt;strong&gt;SELECT version()&lt;/strong&gt; queries (~4500/hour) after upgrading to PMM 3.6.0, flooding PostgreSQL logs and increasing unnecessary query load, causing log spam and disk growth.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.6.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.8.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-14958" target="_blank" rel="noopener noreferrer"&gt;PMM-14958&lt;/a&gt;: mysqld_exporter continues to generate duplicate metric collection errors with GTID and parallel replication enabled, even in PMM 3.6.0. These repeated errors (e.g., &lt;strong&gt;mysql_perf_schema_replication_group_worker_transport_time_seconds&lt;/strong&gt;) lead to continuous log spam, causing rapid log growth (up to ~10GB/hour), disk space exhaustion, and increased noise that makes it difficult to identify real issues.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.6.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.1&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-737" target="_blank" rel="noopener noreferrer"&gt;K8SPG-737&lt;/a&gt;: In PostgreSQL Kubernetes deployments, the node_exporter in the PMM client sidecar cannot access the datadir mountpoint because it is not exposed via /proc, preventing collection of datadir-related metrics. This results in incomplete monitoring data for PostgreSQL pods.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.9.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.10.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1737" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1737&lt;/a&gt;: The PXC Operator crashes during reconciliation in CompareMySQLVersion when the cluster status lacks a MySQL version value. An empty version field causes a panic (“Malformed version”), preventing proper cluster reconciliation and replication setup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.18.0, 1.19.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Create the cluster before configuring replication or manually patch the CR status to include the missing version value, for example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl patch pxc &lt;cluster-name&gt; \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --type=merge \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --subresource=status \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --patch '
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;status:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pxc:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; version: "8.0.42-33.1"'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 1.20.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1843" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1843&lt;/a&gt;: Backups can get stuck in a Running state if the Joiner/Garbd disconnects from the Donor (e.g., due to sst-idle-timeout). Even after the SST process fails and the donor leaves the cluster, the backup process (e.g., xbcloud put) continues indefinitely without timing out, preventing backup completion.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.19.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.20.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1831" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1831&lt;/a&gt;: When using mysqlAllocator=jemalloc on ARM images, the operator attempts to preload /usr/lib64/libjemalloc.so.1, but only libjemalloc.so.2 is available. This results in preload errors and prevents proper use of the jemalloc allocator.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.19.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.20.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1830" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1830&lt;/a&gt;: ProxySQL monitoring fails in PMM when using caching_sha2_password, causing proxysql_exporter to fail authentication with errors like:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error opening connection to ProxySQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;unexpected resp from server for caching_sha2_password, perform full authentication&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This occurs because ProxySQL does not support the required RSA-based full authentication, breaking PMM monitoring integration.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.19.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Use &lt;code&gt;mysql_native_password&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.20.0&lt;/p&gt;
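&lt;p&gt;Applied to the monitoring account, the workaround looks like the following (the user name is illustrative; replace the placeholder with the real monitor password):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;-- Switch the monitoring user back to mysql_native_password:
ALTER USER 'monitor'@'%'
IDENTIFIED WITH mysql_native_password BY '***';&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;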
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-1617" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-1617&lt;/a&gt;: Scheduled backups can be triggered even when the MongoDB cluster is not ready (e.g., in initializing state) and without the required safety flags. This leads to failed backup attempts and inconsistent backup behaviour.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.22.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not specified&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-1524" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-1524&lt;/a&gt;: The PBM agent continuously triggers resync storage operations, causing backup processes to stall or remain in pending/unknown states. Logs show repeated resync commands being executed without completion, leading to unstable backup behaviour.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.21.1&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.22.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-939" target="_blank" rel="noopener noreferrer"&gt;K8SPG-939&lt;/a&gt;: Patroni does not propagate labels defined in the PostgreSQL Operator CR, causing failures in environments with strict label policies. As a result, Kubernetes rejects resource creation (e.g., Services) due to missing mandatory labels, preventing cluster reconciliation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.8.2&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.9.0&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="pbm-percona-backup-for-mongodb"&gt;PBM [Percona Backup for MongoDB]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1683" target="_blank" rel="noopener noreferrer"&gt;PBM-1683&lt;/a&gt;: The size_uncompressed_h field in pbm describe-backup reports incorrect (inflated) sizes for non-base incremental backups, showing significantly larger values than the actual data size and leading to misleading backup size reporting.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.10.0, 2.11.0, 2.12.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.14.0&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="psmdb-percona-server-for-mongodb"&gt;PSMDB [Percona Server for MongoDB]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PSMDB-1915" target="_blank" rel="noopener noreferrer"&gt;PSMDB-1915&lt;/a&gt;: Newer PSMDB packages fail to install or upgrade on RHEL 9.4 due to a dependency on OpenSSL 3.4, which is not available in that OS version. This breaks upgrades (e.g., from 6.0.25 to 6.0.27) and affects multiple major versions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 6.0.27-21, 7.0.28-15, 8.0.17-6&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 6.0.27-21, 7.0.28-15, 8.0.17-6&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PSMDB-1998" target="_blank" rel="noopener noreferrer"&gt;PSMDB-1998&lt;/a&gt;: LDAP authentication can hang indefinitely when the LDAP server is unreachable due to missing timeout handling. This leads to continuously accumulating connections, eventually exhausting file descriptors and causing service disruption or crashes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 7.0.16-10, 7.0.30-16&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: No workaround available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 7.0.31-17, 8.0.20-8&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-distribution-for-mysql-orchestrator"&gt;Percona Distribution for MySQL [Orchestrator]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/DISTMYSQL-584" target="_blank" rel="noopener noreferrer"&gt;DISTMYSQL-584&lt;/a&gt;: Orchestrator loses SSL-related settings such as SOURCE_SSL_CA and SOURCE_SSL_VERIFY_SERVER_CERT during failover when issuing CHANGE REPLICATION SOURCE, causing replication to run without required security configurations and potentially violating compliance requirements.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.4.7&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: Not specified&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="pcsm-percona-clustersync-for-mongodb"&gt;PCSM [Percona ClusterSync for MongoDB]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PCSM-294" target="_blank" rel="noopener noreferrer"&gt;PCSM-294&lt;/a&gt;: PCSM replication can crash during change replication due to flawed conflict detection and unbatched pipeline generation. This results in oversized aggregation pipelines, memory exhaustion, or invalid $slice operations, causing replication to fail with errors such as stage limit exceeded, buffer limits, or invalid arguments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 0.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 0.8.0&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="pg_tde-percona-transparent-data-encryption-for-postgresql"&gt;PG_TDE [Percona Transparent Data Encryption for PostgreSQL]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PG-2125" target="_blank" rel="noopener noreferrer"&gt;PG-2125&lt;/a&gt;: pg_tde fails to create/register symmetric keys when using HashiCorp KMIP, returning errors from the KMIP server during key registration. This prevents key setup and blocks encryption workflows for users relying on KMIP integration.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: pg_tde 2.1.0&lt;br&gt;
&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not specified&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: pg_tde NEXT&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://jira.percona.com" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>Percona Server/MySQL</category>
      <category>Percona XtraDB Cluster</category>
      <category>Percona XtraBackup</category>
      <category>Percona Toolkit</category>
      <category>PMM</category>
      <category>Kubernetes Operator</category>
      <category>PBM</category>
      <category>PSMDB</category>
      <category>Percona Distribution for MySQL</category>
      <category>Orchestrator</category>
      <category>PCSM</category>
      <category>PG_TDE</category>
      <media:thumbnail url="https://percona.community/blog/2026/04/BugReportMarch2026_hu_a1e1c87ec055cccb.jpg"/>
      <media:content url="https://percona.community/blog/2026/04/BugReportMarch2026_hu_d15272a9622ff37.jpg" medium="image"/>
    </item>
    <item>
      <title>InnoDB Buffer Pool Tuning: From Rule-of-Thumb to Real Signals</title>
      <link>https://percona.community/blog/2026/04/02/innodb-buffer-pool-tuning-from-rule-of-thumb-to-real-signals/</link>
      <guid>https://percona.community/blog/2026/04/02/innodb-buffer-pool-tuning-from-rule-of-thumb-to-real-signals/</guid>
      <pubDate>Thu, 02 Apr 2026 00:00:00 UTC</pubDate>
      <description>Introduction Many MySQL setups begin life with a familiar incantation:</description>
      <content:encoded>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Many MySQL setups begin life with a familiar incantation:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_size = 70% of RAM&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;…and then nothing changes.&lt;/p&gt;
&lt;p&gt;That’s not tuning. That’s a starting guess.&lt;/p&gt;
&lt;p&gt;Real tuning starts when the workload pushes back.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="visual-overview"&gt;Visual Overview&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2026/04/innodb_buffer_pool_diagram.png" alt="InnoDB Buffer Pool Diagram" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;The InnoDB buffer pool is where database performance is quietly decided. It determines whether your workload hums along in memory or drags itself across disk. If you’re not actively observing and tuning it, you’re leaving performance on the table.&lt;/p&gt;
&lt;p&gt;This guide walks through how to monitor, understand, and tune the buffer pool using real signals instead of guesswork.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="what-the-buffer-pool-really-is"&gt;What the Buffer Pool Really Is&lt;/h2&gt;
&lt;p&gt;The buffer pool isn’t just “memory for MySQL.” It’s a living system under constant pressure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A cache of data and indexes&lt;/li&gt;
&lt;li&gt;A write staging area (dirty pages)&lt;/li&gt;
&lt;li&gt;A contention zone between reads, writes, and eviction&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Think of it as your database’s working memory. If your working set fits, queries glide. If it doesn’t, pages are constantly evicted and reloaded, introducing latency that rarely announces itself clearly.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="a-simple-mental-model"&gt;A Simple Mental Model&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | Buffer Pool |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; |---------------------------|
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Reads ---&gt; | Cached Pages |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Writes ---&gt; | Dirty Pages (pending IO) |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Eviction -&gt; | LRU / Free List |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; v
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Disk (slow)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Three forces are always competing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reads want hot data in memory&lt;/li&gt;
&lt;li&gt;Writes generate dirty pages&lt;/li&gt;
&lt;li&gt;Eviction makes room under pressure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Your job is to keep this system balanced.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="how-to-monitor-the-buffer-pool"&gt;How to Monitor the Buffer Pool&lt;/h2&gt;
&lt;h3 id="option-1-quick-snapshot"&gt;Option 1: Quick Snapshot&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ENGINE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;INNODB&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Useful for human inspection. Look for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Buffer pool size&lt;/li&gt;
&lt;li&gt;Free buffers&lt;/li&gt;
&lt;li&gt;Database pages&lt;/li&gt;
&lt;li&gt;Modified (dirty) pages&lt;/li&gt;
&lt;li&gt;Page read/write rates&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Great for debugging. Not ideal for automation.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="option-2-structured-metrics-recommended"&gt;Option 2: Structured Metrics (Recommended)&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;pool_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;free_buffers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;database_pages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;modified_database_pages&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;information_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INNODB_BUFFER_POOL_STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Key fields:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;free_buffers&lt;/code&gt; → Available pages (breathing room)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;database_pages&lt;/code&gt; → Pages holding data&lt;/li&gt;
&lt;li&gt;&lt;code&gt;modified_database_pages&lt;/code&gt; → Dirty pages waiting to flush&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Great for automation.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="the-5-signals-that-actually-matter"&gt;The 5 Signals That Actually Matter&lt;/h2&gt;
&lt;h3 id="1-buffer-pool-hit-ratio-handle-with-care"&gt;1. Buffer Pool Hit Ratio (Handle With Care)&lt;/h3&gt;
&lt;p&gt;Yes, it’s widely used. No, it’s not enough.&lt;/p&gt;
&lt;p&gt;A high hit ratio does not mean your system is healthy. It does not capture:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Page churn&lt;/li&gt;
&lt;li&gt;Eviction pressure&lt;/li&gt;
&lt;li&gt;Access patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can have a 99% hit ratio and still be IO-bound.&lt;/p&gt;
&lt;p&gt;Use it as a sanity check, not a decision-maker.&lt;/p&gt;
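&lt;p&gt;As a sanity check only, the ratio can be derived from two cumulative counters in &lt;code&gt;SHOW GLOBAL STATUS&lt;/code&gt;: &lt;code&gt;Innodb_buffer_pool_read_requests&lt;/code&gt; (logical reads) and &lt;code&gt;Innodb_buffer_pool_reads&lt;/code&gt; (reads that went to disk). A minimal sketch with illustrative numbers:&lt;/p&gt;

```python
def hit_ratio(read_requests, disk_reads):
    """Classic buffer pool hit ratio: the fraction of logical reads served
    from memory. read_requests = Innodb_buffer_pool_read_requests,
    disk_reads = Innodb_buffer_pool_reads (both cumulative counters)."""
    if read_requests == 0:
        return 1.0
    return 1.0 - disk_reads / read_requests

# A 99%+ hit ratio can coexist with heavy absolute disk traffic:
# 5M physical reads against 1B logical reads is still "99.5%".
print(round(hit_ratio(read_requests=1_000_000_000, disk_reads=5_000_000), 4))
```

&lt;p&gt;Note that 5 million physical reads is a lot of IO regardless of the percentage, which is exactly why the ratio alone cannot drive decisions.&lt;/p&gt;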
&lt;hr&gt;
&lt;h3 id="2-free-buffers"&gt;2. Free Buffers&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;free_buffers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;free_buffers&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;information_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INNODB_BUFFER_POOL_STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Near zero during steady load → normal&lt;/li&gt;
&lt;li&gt;Near zero + rising disk reads → pressure&lt;/li&gt;
&lt;li&gt;Near zero while mostly idle → suspicious (possible misread or config issue)&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="3-dirty-page-percentage"&gt;3. Dirty Page Percentage&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;modified_database_pages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;database_pages&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;dirty_pct&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;information_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INNODB_BUFFER_POOL_STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Interpretation (context matters):&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;0–5% → Very clean&lt;/li&gt;
&lt;li&gt;5–20% → Typical&lt;/li&gt;
&lt;li&gt;20–30%+ → Potential flushing lag&lt;/li&gt;
&lt;/ul&gt;
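&lt;p&gt;The same calculation, plus the rough bands above, can be wired into monitoring. A minimal sketch; the helper names and exact thresholds are illustrative choices, not part of MySQL:&lt;/p&gt;

```python
def dirty_pct(modified_pages, database_pages):
    """Dirty-page percentage from information_schema.INNODB_BUFFER_POOL_STATS
    (sum modified_database_pages and database_pages across all instances)."""
    if database_pages == 0:
        return 0.0
    return modified_pages / database_pages * 100.0

def classify_dirty(pct):
    # Bands mirror the rough guidance above; tune for your workload.
    if pct < 5:
        return "very clean"
    if pct < 20:
        return "typical"
    return "possible flushing lag"

# 180k dirty pages out of 800k data pages -> 22.5%, worth investigating.
print(classify_dirty(dirty_pct(modified_pages=180_000, database_pages=800_000)))
```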
&lt;hr&gt;
&lt;h3 id="4-disk-read-pressure-critical-signal"&gt;4. Disk Read Pressure (Critical Signal)&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_buffer_pool_reads'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="c1"&gt;-- Take two samples 60s apart and compare&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Track the rate of change (reads/sec), not the absolute value.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rising reads → Working set does not fit in memory&lt;/li&gt;
&lt;li&gt;Flat reads → Memory is absorbing the workload&lt;/li&gt;
&lt;/ul&gt;
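&lt;p&gt;The two-sample arithmetic is trivial but worth automating so it happens consistently. A minimal sketch with made-up counter values; since &lt;code&gt;Innodb_buffer_pool_reads&lt;/code&gt; is cumulative, only the delta carries signal:&lt;/p&gt;

```python
def reads_per_sec(sample1, sample2, interval_s):
    """Rate of change of Innodb_buffer_pool_reads between two samples
    taken interval_s seconds apart. The counter is cumulative, so the
    delta divided by the interval gives physical reads per second."""
    return (sample2 - sample1) / interval_s

# Two SHOW GLOBAL STATUS samples taken 60s apart:
print(reads_per_sec(sample1=12_400_000, sample2=12_460_000, interval_s=60))
```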
&lt;hr&gt;
&lt;h3 id="5-read-ahead--eviction-pressure"&gt;5. Read Ahead / Eviction Pressure&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_buffer_pool_read_ahead%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_buffer_pool_pages_evicted'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_buffer_pool_reads'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Efficient read-ahead:
&lt;ul&gt;
&lt;li&gt;read_ahead increases&lt;/li&gt;
&lt;li&gt;read_ahead_evicted remains low&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Inefficient read-ahead (wasted IO):
&lt;ul&gt;
&lt;li&gt;High read_ahead_evicted / read_ahead&lt;/li&gt;
&lt;li&gt;Indicates access patterns defeating prefetching&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Buffer pool churn:
&lt;ul&gt;
&lt;li&gt;pages_evicted rising&lt;/li&gt;
&lt;li&gt;buffer_pool_reads rising&lt;/li&gt;
&lt;li&gt;Indicates pages are evicted and re-read from disk&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Healthy vs unhealthy eviction:
&lt;ul&gt;
&lt;li&gt;High evictions + stable reads → normal turnover&lt;/li&gt;
&lt;li&gt;High evictions + rising reads → memory pressure&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Focus on rates of change over time, not absolute values.&lt;/p&gt;
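&lt;p&gt;The prefetch-waste check above reduces to a single ratio of deltas. A sketch; the 0.5 cutoff is an illustrative assumption, not an InnoDB constant:&lt;/p&gt;

```python
def read_ahead_waste(read_ahead_delta, read_ahead_evicted_delta):
    """Fraction of prefetched pages evicted before ever being used, from
    interval deltas of Innodb_buffer_pool_read_ahead and
    Innodb_buffer_pool_read_ahead_evicted."""
    if read_ahead_delta == 0:
        return 0.0
    return read_ahead_evicted_delta / read_ahead_delta

# 30k of 50k prefetched pages evicted unused -> prefetching is fighting
# the access pattern rather than helping it.
waste = read_ahead_waste(read_ahead_delta=50_000, read_ahead_evicted_delta=30_000)
print("wasted prefetch" if waste > 0.5 else "prefetch effective")
```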
&lt;hr&gt;
&lt;h2 id="detecting-thrashing"&gt;Detecting Thrashing&lt;/h2&gt;
&lt;p&gt;Thrashing is when the buffer pool constantly evicts and reloads pages.&lt;/p&gt;
&lt;h3 id="classic-symptoms"&gt;Classic Symptoms&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Low or zero free buffers&lt;/li&gt;
&lt;li&gt;Increasing disk reads&lt;/li&gt;
&lt;li&gt;Stable (but misleading) hit ratio&lt;/li&gt;
&lt;li&gt;Spiky query latency&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="visualizing-thrash"&gt;Visualizing Thrash&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Time ---&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Memory: [FULL][FULL][FULL][FULL]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Reads: ↑ ↑↑ ↑↑↑ ↑↑↑↑
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Latency: - ^ ^^ ^^^
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Evictions: ↑ ↑↑ ↑↑↑ ↑↑↑↑&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you see this pattern, your working set does not fit in memory.&lt;/p&gt;
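&lt;p&gt;The symptoms above can be folded into a single heuristic flag for alerting. A sketch under stated assumptions: the 1024-buffer headroom threshold is arbitrary and should be tuned per host, and the trend inputs are deltas of per-second rates between monitoring intervals:&lt;/p&gt;

```python
def looks_like_thrash(free_buffers, reads_trend, evictions_trend):
    """Heuristic thrash check combining the classic symptoms: almost no
    free buffers, disk reads rising, and evictions rising together.
    Trends are positive when the per-second rate is increasing."""
    return free_buffers < 1024 and reads_trend > 0 and evictions_trend > 0

# No headroom, reads and evictions both climbing -> likely thrashing.
print(looks_like_thrash(free_buffers=12, reads_trend=250, evictions_trend=300))
```

&lt;p&gt;High evictions with flat reads would not trip this flag, matching the "normal turnover" case above.&lt;/p&gt;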
&lt;hr&gt;
&lt;h2 id="tuning-the-buffer-pool"&gt;Tuning the Buffer Pool&lt;/h2&gt;
&lt;h3 id="step-1-size-it-intentionally"&gt;Step 1: Size It Intentionally&lt;/h3&gt;
&lt;p&gt;Instead of blindly assigning 70% of RAM:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Observe working set behavior&lt;/li&gt;
&lt;li&gt;Monitor free buffers and reads&lt;/li&gt;
&lt;li&gt;Increase gradually&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Avoid starving the OS or filesystem cache.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="step-2-tune-flushing-behavior"&gt;Step 2: Tune Flushing Behavior&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_max_dirty_pages_pct = 75
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_io_capacity = 1000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_io_capacity_max = 2000&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Sustained IO spikes → increase innodb_io_capacity&lt;/li&gt;
&lt;li&gt;Dirty pages climbing → flushing lag&lt;/li&gt;
&lt;li&gt;Sudden stalls → checkpoint pressure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What they control:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;innodb_io_capacity&lt;/code&gt; → Expected steady-state IO throughput&lt;/li&gt;
&lt;li&gt;&lt;code&gt;innodb_io_capacity_max&lt;/code&gt; → Burst flushing capacity&lt;/li&gt;
&lt;li&gt;&lt;code&gt;innodb_max_dirty_pages_pct&lt;/code&gt; → Threshold for aggressive flushing&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;⚠️ These values should reflect real hardware capability.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="step-3-buffer-pool-instancesreduce-contention"&gt;Step 3: Buffer Pool Instances:Reduce Contention&lt;/h3&gt;
&lt;p&gt;A practical, battle-tested guideline:&lt;/p&gt;
&lt;p&gt;Use 1 instance per ~1GB of buffer pool, up to a reasonable limit.&lt;/p&gt;
&lt;p&gt;Buffer Pool Instances: Reducing Contention&lt;/p&gt;
&lt;p&gt;The buffer pool can be split into multiple instances, each managing its own internal structures. This helps reduce contention under high concurrency.&lt;/p&gt;
&lt;p&gt;Without this, all threads compete for the same buffer pool internals. With multiple instances, that load is distributed.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="when-it-matters"&gt;When It Matters&lt;/h3&gt;
&lt;p&gt;Buffer pool instances only help when contention exists. You’ll see benefits if your system has:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;High concurrency (many active threads)&lt;/li&gt;
&lt;li&gt;CPU-bound workloads&lt;/li&gt;
&lt;li&gt;Mutex contention in InnoDB&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If your workload is primarily IO-bound, this setting will have little impact.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="sizing-guidelines"&gt;Sizing Guidelines&lt;/h3&gt;
&lt;p&gt;General guidance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt; 1GB buffer pool → 1 instance&lt;/li&gt;
&lt;li&gt;1GB–8GB → 2–4 instances&lt;/li&gt;
&lt;li&gt;8GB–64GB → 4–8 instances&lt;/li&gt;
&lt;li&gt;64GB+ → 8–16 instances&lt;/li&gt;
&lt;/ul&gt;
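&lt;p&gt;Combined with the ≥1GB-per-instance rule, the bands above can be sketched as a small helper. The upper bound chosen for each band is an illustrative default, not an InnoDB rule:&lt;/p&gt;

```python
def suggest_instances(pool_gb):
    """Pick an instance count from the sizing bands above, while keeping
    each instance at roughly 1GB or more (so small pools never get
    over-split)."""
    if pool_gb <= 8:
        band_max = 4
    elif pool_gb <= 64:
        band_max = 8
    else:
        band_max = 16
    # Cap by whole GB so each instance keeps >= ~1GB; never below 1.
    return max(1, min(band_max, int(pool_gb)))

print(suggest_instances(32))   # 8GB-64GB band
print(suggest_instances(0.5))  # tiny pool -> single instance
```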
&lt;hr&gt;
&lt;h3 id="keep-instances-large-enough"&gt;Keep Instances Large Enough&lt;/h3&gt;
&lt;p&gt;Each instance needs enough memory to function efficiently.&lt;/p&gt;
&lt;p&gt;Avoid going below ~1GB per instance.&lt;/p&gt;
&lt;p&gt;If instances are too small:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;LRU efficiency drops&lt;/li&gt;
&lt;li&gt;Eviction becomes more aggressive&lt;/li&gt;
&lt;li&gt;Cache locality suffers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;innodb_buffer_pool_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;innodb_buffer_pool_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This gives ~4GB per instance, which is well-balanced.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="common-mistakes"&gt;Common Mistakes&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Increasing instances without evidence of contention&lt;/li&gt;
&lt;li&gt;Matching instance count to CPU cores&lt;/li&gt;
&lt;li&gt;Using many instances with a small buffer pool&lt;/li&gt;
&lt;li&gt;Expecting this to fix IO bottlenecks&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="step-4-understand-resizing-behavior"&gt;Step 4: Understand Resizing Behavior&lt;/h3&gt;
&lt;p&gt;Buffer pool resizing is online in modern MySQL versions (5.7+), but:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It happens in chunks, not all at once&lt;/li&gt;
&lt;li&gt;Chunk size is controlled by &lt;code&gt;innodb_buffer_pool_chunk_size&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The total size is rounded to a multiple of chunk size × instance count&lt;/li&gt;
&lt;/ul&gt;
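&lt;p&gt;A minimal sketch of an online resize, assuming MySQL 5.7+ and the default 128MB chunk size (the 24GB target is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Resize online; the server adjusts in chunk-sized steps
SET GLOBAL innodb_buffer_pool_size = 24 * 1024 * 1024 * 1024;  -- 24GB

-- Watch progress while the resize runs
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';
&lt;/code&gt;&lt;/pre&gt;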
&lt;hr&gt;
&lt;h2 id="real-world-scenarios"&gt;Real-World Scenarios&lt;/h2&gt;
&lt;h3 id="scenario-1-everything-looks-fine-but-its-slow"&gt;Scenario 1: “Everything Looks Fine… But It’s Slow”&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;High hit ratio&lt;/li&gt;
&lt;li&gt;Low free buffers&lt;/li&gt;
&lt;li&gt;Rising disk reads&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Cause:&lt;/strong&gt; Working set barely fits&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Increase buffer pool size gradually&lt;/p&gt;
&lt;p&gt;If increasing the buffer pool size does not reduce disk reads, the problem is not memory.&lt;/p&gt;
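&lt;p&gt;To confirm the diagnosis before resizing, sample the physical read counter twice and compare; a sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Reads that missed the buffer pool and went to disk.
-- A steadily rising delta means the working set does not fit in memory.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
&lt;/code&gt;&lt;/pre&gt;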
&lt;hr&gt;
&lt;h3 id="scenario-2-write-heavy-workload"&gt;Scenario 2: Write-Heavy Workload&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Dirty pages increasing&lt;/li&gt;
&lt;li&gt;Periodic IO spikes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Cause:&lt;/strong&gt; Flushing cannot keep up&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Increase &lt;code&gt;innodb_io_capacity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Adjust dirty page thresholds&lt;/li&gt;
&lt;/ul&gt;
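&lt;p&gt;As a starting point (the values below are illustrative; tune them to your storage):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Let background flushing do more work per second
SET GLOBAL innodb_io_capacity = 2000;

-- Start flushing earlier, well before the hard limit is reached
SET GLOBAL innodb_max_dirty_pages_pct_lwm = 10;
SET GLOBAL innodb_max_dirty_pages_pct = 75;
&lt;/code&gt;&lt;/pre&gt;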
&lt;hr&gt;
&lt;h3 id="scenario-3-sudden-latency-spikes"&gt;Scenario 3: Sudden Latency Spikes&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Sharp performance drops&lt;/li&gt;
&lt;li&gt;Disk activity surges&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Cause:&lt;/strong&gt; Checkpoint pressure&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Improve IO capacity tuning&lt;/li&gt;
&lt;li&gt;Reduce dirty page buildup&lt;/li&gt;
&lt;/ul&gt;
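&lt;p&gt;One hedged example: raise the burst flushing ceiling, and on MySQL 8.0.30+ give the redo log more room so checkpoints are less abrupt (sizes are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Ceiling for emergency/burst flushing
SET GLOBAL innodb_io_capacity_max = 4000;

-- MySQL 8.0.30+: a larger redo log spreads checkpoint work over time
SET GLOBAL innodb_redo_log_capacity = 8 * 1024 * 1024 * 1024;  -- 8GB
&lt;/code&gt;&lt;/pre&gt;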
&lt;hr&gt;
&lt;h2 id="practical-monitoring-queries"&gt;Practical Monitoring Queries&lt;/h2&gt;
&lt;h3 id="buffer-pool-usage-mb"&gt;Buffer Pool Usage (MB)&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;database_pages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mb_used&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;information_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INNODB_BUFFER_POOL_STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Assumes the default 16KB page size (&lt;code&gt;innodb_page_size&lt;/code&gt;).&lt;/p&gt;
&lt;h3 id="dirty-page-percentage"&gt;Dirty Page Percentage&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;modified_database_pages&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;database_pages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;dirty_pct&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;information_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INNODB_BUFFER_POOL_STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="free-buffer-check"&gt;Free Buffer Check&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;free_buffers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;free_buffers&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;information_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INNODB_BUFFER_POOL_STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="common-mistakes-1"&gt;Common Mistakes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Treating 70% as a rule instead of a starting point&lt;/li&gt;
&lt;li&gt;Blindly trusting hit ratio&lt;/li&gt;
&lt;li&gt;Ignoring disk read trends&lt;/li&gt;
&lt;li&gt;Oversizing and starving the OS&lt;/li&gt;
&lt;li&gt;Not tuning IO capacity&lt;/li&gt;
&lt;li&gt;Leaving defaults in write-heavy systems&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="quick-checklist"&gt;Quick Checklist&lt;/h2&gt;
&lt;p&gt;If you remember nothing else:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reads increasing? → working set too big&lt;/li&gt;
&lt;li&gt;Free buffers always ~0? → pressure&lt;/li&gt;
&lt;li&gt;Dirty pages high? → flushing lag&lt;/li&gt;
&lt;li&gt;Latency spiking? → checkpoint or IO saturation&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="final-thoughts"&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;The InnoDB buffer pool doesn’t fail loudly. It degrades quietly until your disk becomes the bottleneck.&lt;/p&gt;
&lt;p&gt;By the time you notice, you’re debugging latency instead of preventing it.&lt;/p&gt;
&lt;p&gt;Monitor the right signals, and you’ll see problems forming before users do.&lt;/p&gt;
&lt;p&gt;That’s the difference between reacting to performance… and controlling it.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Percona</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>innodb bufferpool</category>
      <category>tuning</category>
      <media:thumbnail url="https://percona.community/blog/2026/04/bufferpool-tuning_hu_7481b328ae6e02da.jpg"/>
      <media:content url="https://percona.community/blog/2026/04/bufferpool-tuning_hu_1ac592e54c20c4ce.jpg" medium="image"/>
    </item>
    <item>
      <title>Hardening MySQL: Practical Security Strategies for DBAs</title>
      <link>https://percona.community/blog/2026/03/02/hardening-mysql-practical-security-strategies-for-dbas/</link>
      <guid>https://percona.community/blog/2026/03/02/hardening-mysql-practical-security-strategies-for-dbas/</guid>
      <pubDate>Mon, 02 Mar 2026 00:00:00 UTC</pubDate>
      <description>MySQL Security Best Practices: A Practical Guide for Locking Down Your Database Introduction MySQL runs just about everywhere. I’ve seen it behind small personal projects, internal tools, SaaS platforms, and large enterprise systems handling serious transaction volume. When your database sits at the center of everything, it becomes part of your security perimeter whether you planned it that way or not. And that makes it a target.</description>
      <content:encoded>&lt;h1 id="mysql-security-best-practices-a-practical-guide-for-locking-down-your-database"&gt;MySQL Security Best Practices: A Practical Guide for Locking Down Your Database&lt;/h1&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;MySQL runs just about everywhere. I’ve seen it behind small personal projects, internal tools, SaaS platforms, and large enterprise systems handling serious transaction volume. When your database sits at the center of everything, it becomes part of your security perimeter whether you planned it that way or not. And that makes it a target.&lt;/p&gt;
&lt;p&gt;Securing MySQL isn’t about flipping one magical setting and calling it done. It’s about layers. Tight access control. Encrypted connections. Clear visibility into what’s happening on the server. And operational discipline that doesn’t drift over time.&lt;/p&gt;
&lt;p&gt;In this guide, I’m going to walk through practical MySQL security best practices that you can apply right away. These are the kinds of checks and hardening steps that reduce real risk in real environments, and help build a database platform that stays resilient under pressure.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="1-principle-of-least-privilege"&gt;1. Principle of Least Privilege&lt;/h2&gt;
&lt;p&gt;One of the most common security mistakes is over-granting privileges.
Applications and users should have only the permissions they absolutely
need.&lt;/p&gt;
&lt;h3 id="bad-practice"&gt;Bad Practice&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ALL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;PRIVILEGES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'appuser'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'10.%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="better-approach"&gt;Better Approach&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;appdb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'appuser'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'10.%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="recommendations"&gt;Recommendations&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Avoid global privileges unless absolutely required&lt;/li&gt;
&lt;li&gt;Restrict users by host whenever possible&lt;/li&gt;
&lt;li&gt;Separate admin accounts from application accounts&lt;/li&gt;
&lt;li&gt;Use different credentials for read-only vs write operations&lt;/li&gt;
&lt;/ul&gt;
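&lt;p&gt;For example, splitting read-only and write credentials (the account names and password placeholders below are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Reporting and read paths get a read-only identity
CREATE USER 'app_ro'@'10.%' IDENTIFIED BY 'strong-password-here';
GRANT SELECT ON appdb.* TO 'app_ro'@'10.%';

-- Write paths get only the DML they need
CREATE USER 'app_rw'@'10.%' IDENTIFIED BY 'another-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_rw'@'10.%';
&lt;/code&gt;&lt;/pre&gt;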
&lt;h3 id="audit-existing-privileges"&gt;Audit Existing Privileges&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;host&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Select_priv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Insert_priv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Update_priv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Delete_priv&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="2-strong-authentication--password-policies"&gt;2. Strong Authentication &amp; Password Policies&lt;/h2&gt;
&lt;p&gt;Weak credentials remain one of the easiest attack vectors.&lt;/p&gt;
&lt;h3 id="enable-password-validation"&gt;Enable Password Validation&lt;/h3&gt;
&lt;p&gt;component_validate_password is MySQL’s modern password policy engine. Think of it as a gatekeeper for credential quality. Every time someone tries to set or change a password, it checks whether that password meets your defined security standards before letting it in.&lt;/p&gt;
&lt;p&gt;It replaces the older validate_password plugin with a component-based architecture that is more flexible and better aligned with MySQL 8.x design.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;INSTALL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;COMPONENT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'file://component_validate_password'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="what-it-does"&gt;What It Does&lt;/h3&gt;
&lt;p&gt;When enabled, it enforces rules such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Minimum password length&lt;/li&gt;
&lt;li&gt;Required mix of character types&lt;/li&gt;
&lt;li&gt;Dictionary file checks&lt;/li&gt;
&lt;li&gt;Strength scoring&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If a password fails policy, the statement is rejected before the credential is stored.&lt;/p&gt;
&lt;h3 id="why-it-matters"&gt;Why It Matters&lt;/h3&gt;
&lt;p&gt;Weak passwords remain one of the most common entry points in database breaches. This component reduces risk by enforcing baseline credential hygiene automatically, instead of relying on developer discipline.&lt;/p&gt;
&lt;h3 id="recommended-policies"&gt;Recommended Policies&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Minimum length: 14+ characters&lt;/li&gt;
&lt;li&gt;Require mixed case, numbers, and symbols&lt;/li&gt;
&lt;li&gt;Enable dictionary checks&lt;/li&gt;
&lt;li&gt;Enable username checks&lt;/li&gt;
&lt;/ul&gt;
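&lt;p&gt;These policies map to component variables; a sketch (verify the variable names against your server version):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;SET GLOBAL validate_password.length = 14;
SET GLOBAL validate_password.policy = 'STRONG';     -- STRONG enables dictionary checks
SET GLOBAL validate_password.check_user_name = ON;  -- reject passwords matching the username
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that dictionary checks also require &lt;code&gt;validate_password.dictionary_file&lt;/code&gt; to point at a word list.&lt;/p&gt;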
&lt;h3 id="remove-anonymous-accounts"&gt;Remove Anonymous Accounts&lt;/h3&gt;
&lt;h4 id="find-anonymous-users"&gt;Find Anonymous Users&lt;/h4&gt;
&lt;p&gt;Anonymous users have an empty &lt;code&gt;User&lt;/code&gt; field.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;WHERE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you see rows returned, those are anonymous accounts.&lt;/p&gt;
&lt;h3 id="drop-anonymous-users"&gt;Drop Anonymous Users&lt;/h3&gt;
&lt;p&gt;In modern MySQL versions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;DROP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;DROP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Adjust the Host value based on what your query returned.&lt;/p&gt;
&lt;h3 id="why-this-matters"&gt;Why This Matters&lt;/h3&gt;
&lt;p&gt;Anonymous users:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Allow login without credentials&lt;/li&gt;
&lt;li&gt;May have default privileges in some distributions&lt;/li&gt;
&lt;li&gt;Increase the attack surface unnecessarily&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In hardened environments, there should be zero accounts with an empty username. Every identity should be explicit, accountable, and least-privileged.&lt;/p&gt;
&lt;h2 id="3-encryption-everywhere"&gt;3. Encryption Everywhere&lt;/h2&gt;
&lt;p&gt;Encryption protects data both in transit and at rest.&lt;/p&gt;
&lt;h3 id="enable-transparent-data-encryption-tde"&gt;Enable Transparent Data Encryption (TDE)&lt;/h3&gt;
&lt;p&gt;See my January 13 post for a deep dive into Transparent Data Encryption:
&lt;a href="https://percona.community/blog/2026/01/13/configuring-the-component-keyring-in-percona-server-and-pxc-8.4/" target="_blank" rel="noopener noreferrer"&gt;Configuring the Component Keyring in Percona Server and PXC 8.4&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="enable-tls-for-connections"&gt;Enable TLS for Connections&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;require_secure_transport&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="verify-ssl-usage"&gt;Verify SSL Usage&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Ssl_cipher'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
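&lt;p&gt;You can also require TLS per account, instead of (or in addition to) the global setting:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- This account can no longer connect over plaintext
ALTER USER 'appuser'@'10.%' REQUIRE SSL;
&lt;/code&gt;&lt;/pre&gt;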
&lt;h3 id="encryption-areas-to-consider"&gt;Encryption Areas to Consider&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Client-server connections&lt;/li&gt;
&lt;li&gt;Replication channels&lt;/li&gt;
&lt;li&gt;Backups and snapshot storage&lt;/li&gt;
&lt;li&gt;Disk-level encryption&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="4-patch-management--version-hygiene"&gt;4. Patch Management &amp; Version Hygiene&lt;/h2&gt;
&lt;p&gt;Running outdated MySQL versions leaves known vulnerabilities exposed.&lt;/p&gt;
&lt;h3 id="maintenance-strategy"&gt;Maintenance Strategy&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Track vendor security advisories&lt;/li&gt;
&lt;li&gt;Apply minor updates regularly&lt;/li&gt;
&lt;li&gt;Test patches in staging before production rollout&lt;/li&gt;
&lt;li&gt;Avoid unsupported MySQL versions&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="check-version"&gt;Check Version&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VERSION&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="5-logging-auditing-and-monitoring"&gt;5. Logging, Auditing, and Monitoring&lt;/h2&gt;
&lt;p&gt;Security without visibility is blind defense. Enable audit logging.&lt;/p&gt;
&lt;h3 id="1-audit_log-plugin-legacy-model"&gt;1. audit_log Plugin (Legacy Model)&lt;/h3&gt;
&lt;h4 id="installation"&gt;Installation&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;INSTALL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PLUGIN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;audit_log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SONAME&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'audit_log.so'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="verify"&gt;Verify&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PLUGINS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'audit%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="2-audit_log_filter-component-modern-model"&gt;2. audit_log_filter Component (Modern Model)&lt;/h3&gt;
&lt;p&gt;Introduced in MySQL 8 to provide a more flexible and granular alternative to the older plugin model.&lt;/p&gt;
&lt;h4 id="installation-1"&gt;Installation&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;INSTALL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;COMPONENT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'file://component_audit_log_filter'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="verify-1"&gt;Verify&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;component&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="architecture-difference"&gt;Architecture Difference&lt;/h4&gt;
&lt;p&gt;Instead of a single global policy, you create:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Filters (define what to log)&lt;/li&gt;
&lt;li&gt;Users assigned to filters&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It’s granular and rule-driven.&lt;/p&gt;
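&lt;p&gt;As a sketch (the filter name, account, and JSON rule here are illustrative, not fixed conventions), defining a filter and binding it to a user looks like this:&lt;/p&gt;

```sql
-- Define a named filter that logs only connection-class events
-- (logins, disconnects, and failed connection attempts).
SELECT audit_log_filter_set_filter('log_connections',
  '{"filter": {"class": {"name": "connection"}}}');

-- Assign the filter to one account; passing '%' as the user
-- would instead set the default filter for unassigned accounts.
SELECT audit_log_filter_set_user('app_user@10.0.0.5', 'log_connections');
```

&lt;p&gt;On success these functions return OK; on failure they return a diagnostic string. See the references below for the full filter syntax.&lt;/p&gt;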
&lt;h3 id="auditing-key-events"&gt;Auditing Key Events&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Failed logins&lt;/li&gt;
&lt;li&gt;Privilege changes&lt;/li&gt;
&lt;li&gt;Schema modifications&lt;/li&gt;
&lt;li&gt;Unusual query activity&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="references"&gt;References:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/09/18/audit-log-filter-component/" target="_blank" rel="noopener noreferrer"&gt;Audit Log Filter Component
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/10/08/audit-log-filters-part-ii/" target="_blank" rel="noopener noreferrer"&gt;Audit Log Filters Part II
&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="useful-metrics"&gt;Useful Metrics&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Aborted_connects'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Connections'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="6-secure-configuration-hardening"&gt;6. Secure Configuration Hardening&lt;/h2&gt;
&lt;p&gt;A secure baseline configuration reduces risk from common attack
patterns.&lt;/p&gt;
&lt;h3 id="recommended-settings"&gt;Recommended Settings&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;ini&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-ini" data-lang="ini"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;local_infile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;OFF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;secure_file_priv&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/var/lib/mysql-files&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;sql_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"STRICT_ALL_TABLES"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;secure-log-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/var/log/mysql&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="why-these-matter"&gt;Why These Matter&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Prevent arbitrary file imports&lt;/li&gt;
&lt;li&gt;Reduce filesystem abuse&lt;/li&gt;
&lt;li&gt;Restrict data export/import locations&lt;/li&gt;
&lt;/ul&gt;
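&lt;p&gt;A quick, read-only way to confirm the settings above took effect at runtime (variable names match the config snippet):&lt;/p&gt;

```sql
-- Verify the hardened values on a running server.
SELECT @@local_infile, @@secure_file_priv, @@sql_mode;
```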
&lt;h2 id="7-backup-security"&gt;7. Backup Security&lt;/h2&gt;
&lt;p&gt;Backups often contain everything an attacker wants.&lt;/p&gt;
&lt;h3 id="backup-best-practices"&gt;Backup Best Practices&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Encrypt backups&lt;/li&gt;
&lt;li&gt;Restrict filesystem permissions&lt;/li&gt;
&lt;li&gt;Store offsite copies securely&lt;/li&gt;
&lt;li&gt;Rotate backup credentials&lt;/li&gt;
&lt;li&gt;Verify restore procedures regularly&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="example-permission-check"&gt;Example Permission Check&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ls -l /backup/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
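&lt;p&gt;A minimal sketch of the encrypt-and-restrict steps (the backup path, GPG recipient, and the mysqldump pipeline are illustrative assumptions for your environment):&lt;/p&gt;

```shell
# Sketch: produce an encrypted logical backup and lock down its permissions.
# In production the dump line would be something like:
#   mysqldump --all-databases | gzip | gpg --encrypt -r backup@example.com -o "$BACKUP"
# Here a placeholder file stands in for the dump so the permission step is visible.
BACKUP=/tmp/mysql-backup-demo.sql.gz.gpg
: > "$BACKUP"

chmod 600 "$BACKUP"    # owner read/write only; no group or world access
ls -l "$BACKUP"
```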
&lt;h2 id="8-replication--cluster-security"&gt;8. Replication &amp; Cluster Security&lt;/h2&gt;
&lt;p&gt;Replication is not just a data distribution feature. It is a persistent, privileged communication channel between servers. If misconfigured, it can become a lateral movement pathway inside your infrastructure. Treat every replication link as a trusted but tightly controlled corridor.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Principle:&lt;/strong&gt; replication is a privileged service account.&lt;/p&gt;
&lt;p&gt;Replication users require elevated capabilities. They must be isolated, tightly scoped, and monitored like any other service identity.&lt;/p&gt;
&lt;h3 id="secure-replication-users"&gt;Secure Replication Users&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'repl'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'10.%'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;IDENTIFIED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;BY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'strongpassword'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;REQUIRE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SSL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;REPLICATION&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;REPLICA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'repl'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'10.%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Hardening considerations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Restrict host patterns as narrowly as possible. Avoid % whenever feasible.&lt;/li&gt;
&lt;li&gt;Require SSL or X.509 certificate authentication.&lt;/li&gt;
&lt;li&gt;Enforce strong password policies or use a secrets manager.&lt;/li&gt;
&lt;li&gt;Disable interactive login capability if applicable.&lt;/li&gt;
&lt;/ul&gt;
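&lt;p&gt;To audit the replication accounts that already exist (the account name 'repl' is taken from the example above; adjust to your environment), a query like this shows whether TLS is actually enforced:&lt;/p&gt;

```sql
-- ssl_type should be ANY (REQUIRE SSL) or X509 (REQUIRE X509),
-- and host should be a narrow pattern, never '%'.
SELECT user, host, ssl_type, x509_subject
FROM mysql.user
WHERE user = 'repl';
```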
&lt;h3 id="encrypt-replication-traffic"&gt;Encrypt Replication Traffic&lt;/h3&gt;
&lt;p&gt;Replication traffic may include sensitive row data, DDL statements, and metadata. Always encrypt it.&lt;/p&gt;
&lt;p&gt;At minimum:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enable require_secure_transport=ON&lt;/li&gt;
&lt;li&gt;Configure TLS certificates on source and replica&lt;/li&gt;
&lt;li&gt;Set replication channel to use SSL:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;CHANGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;REPLICATION&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SOURCE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SOURCE_SSL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SOURCE_SSL_CA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/path/ca.pem'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SOURCE_SSL_CERT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/path/client-cert.pem'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SOURCE_SSL_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/path/client-key.pem'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For MySQL Group Replication or InnoDB Cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enable group communication SSL&lt;/li&gt;
&lt;li&gt;Validate certificate identity&lt;/li&gt;
&lt;li&gt;Use dedicated replication networks&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="binary-log-and-relay-log-protection"&gt;Binary Log and Relay Log Protection&lt;/h3&gt;
&lt;p&gt;Replication relies on binary logs. Protect them.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set binlog_encryption=ON&lt;/li&gt;
&lt;li&gt;Set relay_log_info_repository=TABLE&lt;/li&gt;
&lt;li&gt;Restrict filesystem access to log directories&lt;/li&gt;
&lt;li&gt;Monitor log retention policies&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Compromised binary logs can reveal historical data changes.&lt;/p&gt;
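&lt;p&gt;One quick, read-only way to verify these protections on MySQL 8.0 and later:&lt;/p&gt;

```sql
-- A value of 1 means new binary log files are encrypted;
-- SHOW BINARY LOGS also reports an Encrypted column per file.
SELECT @@binlog_encryption;
SHOW BINARY LOGS;
```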
&lt;h2 id="9-continuous-security-reviews"&gt;9. Continuous Security Reviews&lt;/h2&gt;
&lt;p&gt;Security is not a one-time checklist. Regular audits help catch
configuration drift and evolving threats.&lt;/p&gt;
&lt;h3 id="suggested-review-cadence"&gt;Suggested Review Cadence&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Weekly: failed login review&lt;/li&gt;
&lt;li&gt;Monthly: privilege audits&lt;/li&gt;
&lt;li&gt;Quarterly: configuration review&lt;/li&gt;
&lt;li&gt;Semiannually: full security assessment&lt;/li&gt;
&lt;/ul&gt;
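&lt;p&gt;For the monthly privilege audit, one starting point is to list accounts holding high-risk global privileges (the privilege list here is a suggestion, not exhaustive):&lt;/p&gt;

```sql
-- Accounts with these global privileges deserve a second look.
SELECT grantee, privilege_type
FROM information_schema.user_privileges
WHERE privilege_type IN ('SUPER', 'FILE', 'SHUTDOWN', 'PROCESS');
```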
&lt;h2 id="security-checklist-summary"&gt;Security Checklist Summary&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Area&lt;/th&gt;
&lt;th&gt;Key Action&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Access Control&lt;/td&gt;
&lt;td&gt;Least privilege grants&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Authentication&lt;/td&gt;
&lt;td&gt;Strong password policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Encryption&lt;/td&gt;
&lt;td&gt;TLS + encrypted storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Updates&lt;/td&gt;
&lt;td&gt;Regular patching&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monitoring&lt;/td&gt;
&lt;td&gt;Audit logging enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Configuration&lt;/td&gt;
&lt;td&gt;Harden defaults&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backups&lt;/td&gt;
&lt;td&gt;Encrypt and protect&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replication&lt;/td&gt;
&lt;td&gt;Secure replication users&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="final-thoughts"&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;Strong MySQL security doesn’t come from one feature or one tool. It comes from layers working together. Hardened configuration. Tight, intentional privilege design. Encryption everywhere it makes sense. And monitoring that actually gets reviewed instead of just written to disk.&lt;/p&gt;
&lt;p&gt;In my experience, the strongest environments aren’t the ones trying to be unbreakable. They’re the ones built to detect, contain, and respond. Every layer should either reduce blast radius or increase visibility. If an attacker gets through one control, the next one slows them down. And while they’re slowing down, your logging and monitoring should already be telling you something isn’t right.&lt;/p&gt;
&lt;p&gt;That’s what a mature security posture looks like in practice.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Percona</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>security</category>
      <category>auditing</category>
      <media:thumbnail url="https://percona.community/blog/2026/03/mysql-security_hu_2e3482c9b216e342.jpg"/>
      <media:content url="https://percona.community/blog/2026/03/mysql-security_hu_5d7a0d72bf4766b.jpg" medium="image"/>
    </item>
    <item>
      <title>Meet Percona at KubeCon + CloudNativeCon Europe 2026</title>
      <link>https://percona.community/blog/2026/02/25/meet-percona-at-kubecon--cloudnativecon-europe-2026/</link>
      <guid>https://percona.community/blog/2026/02/25/meet-percona-at-kubecon--cloudnativecon-europe-2026/</guid>
      <pubDate>Wed, 25 Feb 2026 12:00:00 UTC</pubDate>
      <description>The Percona team is heading to KubeCon + CloudNativeCon Europe in Amsterdam, and we’d love to meet you in person!</description>
      <content:encoded>&lt;p&gt;The Percona team is heading to KubeCon + CloudNativeCon Europe in Amsterdam, and we’d love to meet you in person!&lt;/p&gt;
&lt;p&gt;You can find us at &lt;strong&gt;Booth 790&lt;/strong&gt;. This is a great chance to talk with engineers working on Percona Operators.&lt;/p&gt;
&lt;p&gt;We will be there to discuss:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Running MySQL, PostgreSQL, and MongoDB on Kubernetes&lt;/li&gt;
&lt;li&gt;Production-ready HA setups&lt;/li&gt;
&lt;li&gt;Backup and PITR strategies&lt;/li&gt;
&lt;li&gt;Multi-cluster and multi-region deployments&lt;/li&gt;
&lt;li&gt;Operators roadmap and upcoming features&lt;/li&gt;
&lt;li&gt;Real-world troubleshooting stories&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you’re running Percona Operators in production (or just getting started), we’d love to hear your feedback and learn about your challenges.&lt;/p&gt;
&lt;p&gt;If you’re just curious (or even suspicious) about running databases on Kubernetes, we’d love to talk and answer your questions.&lt;/p&gt;
&lt;h3 id="admission-tickets---20-off-for-our-community"&gt;Admission Tickets - 20% Off for Our Community&lt;/h3&gt;
&lt;p&gt;We have a 20% discount code available for Percona community members.&lt;br&gt;
If you’re planning to attend and don’t have a ticket yet, drop a comment or message us and we’ll share the details.&lt;/p&gt;
&lt;h3 id="schedule-a-meeting"&gt;Schedule a Meeting&lt;/h3&gt;
&lt;p&gt;Want dedicated time with our engineers?&lt;br&gt;
Drop a comment here or reach out directly. We’re happy to schedule a meeting during the event.&lt;/p&gt;
&lt;p&gt;See you in Amsterdam!&lt;/p&gt;</content:encoded>
      <author>Ege Güneş</author>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>events</category>
      <category>operators</category>
      <media:thumbnail url="https://percona.community/blog/2026/02/kubecon_hu_5a17e098a1ce4fe3.jpg"/>
      <media:content url="https://percona.community/blog/2026/02/kubecon_hu_3ed257b90767ff50.jpg" medium="image"/>
    </item>
    <item>
      <title>Pre-FOSDEM &amp; FOSDEM 2026, Community, Databases, and Open Source</title>
      <link>https://percona.community/blog/2026/02/09/pre-fosdem-fosdem-2026-community-databases-and-open-source/</link>
      <guid>https://percona.community/blog/2026/02/09/pre-fosdem-fosdem-2026-community-databases-and-open-source/</guid>
      <pubDate>Mon, 09 Feb 2026 10:00:00 UTC</pubDate>
      <description>This is a recap of Percona at Pre-FOSDEM and FOSDEM!</description>
      <content:encoded>&lt;p&gt;This is a recap of Percona at Pre-FOSDEM and FOSDEM!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-all_hu_e712242d8225486.png 480w, https://percona.community/blog/2026/02/fosdem-all_hu_419bf189b512c36b.png 768w, https://percona.community/blog/2026/02/fosdem-all_hu_9d4edb54da418973.png 1400w"
src="https://percona.community/blog/2026/02/fosdem-all.png" alt="Fosdem intro" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Before FOSDEM officially started, the database community gathered for MySQL Belgium Days (Pre-FOSDEM), a two-day event bringing together MySQL developers, DBAs, engineers, tool builders, and open-source enthusiasts. It was an excellent space for deep technical discussions, knowledge sharing, and reconnecting with the community, hosted by the amazing &lt;strong&gt;Frederic Descamps&lt;/strong&gt;.
The event featured strong participation from &lt;strong&gt;Percona&lt;/strong&gt; and the wider MySQL ecosystem, with talks led by &lt;strong&gt;Peter Zaitsev, Marco Tusa, Fernando Laudares Camargos, Arunjith Aravindan, Vinicius Grippa, Pep Pla, and Yura Sorokin&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2026/02/fosdem-speakers.png" alt="Fosdem speakers" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Find the recordings of the talks &lt;a href="https://www.youtube.com/playlist?list=PL6tzEWmw-bpxe0k5Xrk09N-m6q5rGTy_l" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also in Belgium that same week, several other events took place, including &lt;strong&gt;PGDay-FOSDEM&lt;/strong&gt;, &lt;strong&gt;MariaDB Day&lt;/strong&gt;, and the &lt;strong&gt;MySQL Summit&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;At &lt;strong&gt;PGDay&lt;/strong&gt;, it was a pleasure to see the PostgreSQL community together; we had several participants representing us.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2026/02/fosdem-pg.png" alt="Fosdem speakers" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;At &lt;strong&gt;MariaDB Day&lt;/strong&gt;, Peter Zaitsev presented a talk titled “What MariaDB Community can learn from PostgreSQL?”
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-peter_hu_b3c4adfd61074afa.jpg 480w, https://percona.community/blog/2026/02/fosdem-peter_hu_1fbd35cb821b5c8c.jpg 768w, https://percona.community/blog/2026/02/fosdem-peter_hu_be436136b025998e.jpg 1400w"
src="https://percona.community/blog/2026/02/fosdem-peter.jpg" alt="Fosdem speakers" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="mysql-summit"&gt;MySQL summit&lt;/h2&gt;
&lt;p&gt;During MySQL Days in Brussels, the community gathered for an in-person MySQL Summit focused on collaboration and strengthening the MySQL ecosystem, with open discussions around its present and future driven by community involvement.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-mysql-summit_hu_2469df151a62abfb.jpeg 480w, https://percona.community/blog/2026/02/fosdem-mysql-summit_hu_a33fe6004c89e577.jpeg 768w, https://percona.community/blog/2026/02/fosdem-mysql-summit_hu_3ff493e53e6b551a.jpeg 1400w"
src="https://percona.community/blog/2026/02/fosdem-mysql-summit.jpeg" alt="Fosdem MySQL Summit" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="mysql-rockstars-2026"&gt;MySQL RockStars 2026&lt;/h2&gt;
&lt;p&gt;The MySQL Rockstar Award is a recognition given by the MySQL Community Team at Oracle, together with previous award winners, to members of the MySQL community who have actively contributed to promoting MySQL during the past year.
MySQL Legends are long-standing community members who have made a significant and lasting impact on the adoption, development, and evolution of MySQL over many years.&lt;/p&gt;
&lt;p&gt;This year, the MySQL RockStars selected were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Matthias Crauwels&lt;/li&gt;
&lt;li&gt;Marco Tusa&lt;/li&gt;
&lt;li&gt;Umesh Shastry&lt;/li&gt;
&lt;li&gt;Ronald Bradford&lt;/li&gt;
&lt;li&gt;Marcelo Altmann&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-rockstars_hu_1bffe1196cbfd19a.jpeg 480w, https://percona.community/blog/2026/02/fosdem-rockstars_hu_3fd61445be2327bd.jpeg 768w, https://percona.community/blog/2026/02/fosdem-rockstars_hu_eebd57b07de5904b.jpeg 1400w"
src="https://percona.community/blog/2026/02/fosdem-rockstars.jpeg" alt="MySQL RockStars 2026" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Congratulations to all of them! You can find previous &lt;a href="https://www.mysqlandfriends.eu/mysql-rockstars-hall-of-fame/" target="_blank" rel="noopener noreferrer"&gt;MySQL RockStars in this list&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="fosdem-2026-percona-booth"&gt;FOSDEM 2026, Percona booth&lt;/h2&gt;
&lt;p&gt;At the Percona booth, conversations focused on open-source databases and Kubernetes, including &lt;a href="https://github.com/openeverest/openeverest" target="_blank" rel="noopener noreferrer"&gt;OpenEverest’s&lt;/a&gt; first-ever presence at FOSDEM. Around 40 Perconians were present, a great chance to finally meet many colleagues in person. As always, we had many visitors, and it was great to see that many already knew about Percona, while others were eager to learn more and explore what we do in the world of Open Source!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2026/02/fosdem-booth.png" alt="Fosdem postgressql" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="fosdem-2026-databases-devroom"&gt;FOSDEM 2026, Databases DevRoom&lt;/h2&gt;
&lt;p&gt;FOSDEM 2026 officially kicked off at the ULB Solbosch Campus, bringing together thousands of open-source contributors from around the world.
The Database DevRoom (UB2.252A) was packed with high-quality talks and discussions, co-led with &lt;strong&gt;Matthias Crauwels&lt;/strong&gt; and &lt;strong&gt;Ray Paik&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-database-01_hu_24d435d202eb15dc.jpeg 480w, https://percona.community/blog/2026/02/fosdem-database-01_hu_5bfbaa44d8e570f6.jpeg 768w, https://percona.community/blog/2026/02/fosdem-database-01_hu_aa83876523e835fa.jpeg 1400w"
src="https://percona.community/blog/2026/02/fosdem-database-01.jpeg" alt="Databases DevRoom at FOSDEM" /&gt;&lt;/figure&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-database-02_hu_1fd300fdd4485d45.jpeg 480w, https://percona.community/blog/2026/02/fosdem-database-02_hu_3aa1f8ab0f221851.jpeg 768w, https://percona.community/blog/2026/02/fosdem-database-02_hu_a647570fbb1f1d95.jpeg 1400w"
src="https://percona.community/blog/2026/02/fosdem-database-02.jpeg" alt="Databases DevRoom at FOSDEM" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Find more details and some of the recordings &lt;a href="https://fosdem.org/2026/schedule/track/databases/" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="celebrating-20-years-of-percona"&gt;Celebrating 20 Years of Percona&lt;/h2&gt;
&lt;p&gt;This year, &lt;strong&gt;Percona&lt;/strong&gt; celebrates its 20th anniversary. Throughout FOSDEM and Pre-FOSDEM events, it was inspiring to meet long-time users who have relied on Percona’s open-source solutions for years and shared their positive experiences.
You can explore Percona’s 20-year journey here:
👉 &lt;a href="https://percona20.com/" target="_blank" rel="noopener noreferrer"&gt;https://percona20.com/&lt;/a&gt;
If you’ve had a great experience with Percona, you’re invited to share your story via the community survey.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2026/02/fosdem-percona-20.png" alt="Fosdem postgressql" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;See you at FOSDEM 2027!&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Percona</category>
      <category>PostgreSQL</category>
      <category>Community</category>
      <category>Events</category>
      <category>FOSDEM</category>
      <media:thumbnail url="https://percona.community/blog/2026/02/fosdem-all_hu_ead0e789549fbf3a.jpg"/>
      <media:content url="https://percona.community/blog/2026/02/fosdem-all_hu_f5de724365851f2d.jpg" medium="image"/>
    </item>
    <item>
      <title>PGDay and FOSDEM Report from Kai</title>
      <link>https://percona.community/blog/2026/02/04/pgday-and-fosdem-report-from-kai/</link>
      <guid>https://percona.community/blog/2026/02/04/pgday-and-fosdem-report-from-kai/</guid>
      <pubDate>Wed, 04 Feb 2026 10:00:00 UTC</pubDate>
      <description>The following thoughts and comments are completely my personal opinion and do not reflect my employer’s thoughts or beliefs. If you don’t like anything in this post, reach out to me directly, so I can ignore it ;-).</description>
      <content:encoded>&lt;p&gt;The following thoughts and comments are completely my personal opinion and do not reflect my employer’s thoughts or beliefs. If you don’t like anything in this post, reach out to me directly, so I can ignore it ;-).&lt;/p&gt;
&lt;p&gt;I’m currently on the train on my way back home from FOSDEM this year and man, I’m exhausted but also happy. Why? Because the PG and FOSDEM community is just crazily awesome. While it’s always too much of everything, it’s at the same time inspiring to see so many enthusiastic IT nerds in one place, discussing and working on what they love - technology and engineering challenges.&lt;/p&gt;
&lt;h2 id="pgday-fosdem"&gt;PGDay FOSDEM&lt;/h2&gt;
&lt;p&gt;It all started with the usual PGDay FOSDEM the day before FOSDEM. Just in case - this event has been happening for over 15 years, and if you read this as a little nudge for not knowing about it, that’s absolutely intentional: you should. It was a great event as usual: around 150 Postgres enthusiasts collaborating with each other. There was a great set of talks (no recordings available, so yes, just join next year to not miss anything), as well as the hallway track conversations.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/pgday-slonik_hu_89ebc067372229f.jpeg 480w, https://percona.community/blog/2026/02/pgday-slonik_hu_f4da3e9e56bdb45d.jpeg 768w, https://percona.community/blog/2026/02/pgday-slonik_hu_dc2c2f7307992d6b.jpeg 1400w"
src="https://percona.community/blog/2026/02/pgday-slonik.jpeg" alt="PGDay Kai and Slonik" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I applied and was accepted again as a volunteer helping to make the event happen. While you might wonder what’s special about that, I cannot express my gratitude enough for being able to help in any way. I simply love it. I’m not a great coder and I’ve never been one. I’m the one who looks at his code from a year ago, questions his technical existence and overall abilities, and wonders whether he should rather do something without touching a keyboard. What I am very well capable of is helping and supporting events. So it was my pleasure, and I hope you feel inspired to do the same next year or at any future event, not only in the Postgres ecosystem but in general. I strongly believe in this: doing good things will get you good things back.&lt;/p&gt;
&lt;p&gt;After the PGDay wrap-up and a great community dinner with further collaboration and discussion, I fell asleep completely exhausted, as the next day, and with it FOSDEM, was already waiting.&lt;/p&gt;
&lt;h2 id="fosdem-day-1"&gt;FOSDEM Day 1&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-pgbooth-volunteering_hu_5a36d046c38bf9f4.jpg 480w, https://percona.community/blog/2026/02/fosdem-pgbooth-volunteering_hu_447a561bf987611e.jpg 768w, https://percona.community/blog/2026/02/fosdem-pgbooth-volunteering_hu_f37053b55358f597.jpg 1400w"
src="https://percona.community/blog/2026/02/fosdem-pgbooth-volunteering.jpg" alt="PGDay Kai PG Booth Volunteering" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The next day started with volunteering at the Postgres booth. As usual, Saturday was simply crazy. The Postgres swag - hoodies, caps, mugs, shirts, etc. - was almost ripped out of our living hands. We had people waiting in line just to get some swag. That fact alone shows how Postgres is viewed outside of the internal PG ecosystem community. I lost count of how many times I heard “Thanks a lot for the great work you do” or “Postgres just works.” Yes, we can all argue about the details and scenarios, but what this is about is the overall ease of use. Not everyone has terabytes of data or the most complex HA and replication scenarios on this planet. Some just need a functional, boring database - open source, in the best case. And looking at real open source, not single-vendor owned, we all know Postgres is king and here to stay.&lt;/p&gt;
&lt;p&gt;After all of this, I switched clothes and helped at the Percona booth. It was no less interesting than the PG booth. So many people stopped by, asking what we do or thanking us for our projects - and for remaining open source after all these years, while so many other companies couldn’t withstand the quick and easy money of open-core or closed offerings. That’s the reason I’m proud to be part of this company. We walk the talk - for 20 years now - and we have no incentive to ever change it. Thanks to Peter Zaitsev and Peter Farkas aka P² - for those who know, just know.&lt;/p&gt;
&lt;p&gt;Following that, I had the pleasure of being the Slonik guide again. What is a Slonik guide, you might ask? Slonik, the mascot of Postgres (the big blue elephant), needs some help and guidance while walking through the crowd, as you can barely see anything from inside the costume. As usual, Slonik is a celebrity. Everyone wants a picture and takes their chance to photograph Slonik in the “wild”. As you can see, even MySQL’s Sakila couldn’t resist and had to take a picture with Slonik.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-slonik_hu_3f785c72b8014059.jpeg 480w, https://percona.community/blog/2026/02/fosdem-slonik_hu_7b4ba0e740713ceb.jpeg 768w, https://percona.community/blog/2026/02/fosdem-slonik_hu_72b2283e2a31412.jpeg 1400w"
src="https://percona.community/blog/2026/02/fosdem-slonik.jpeg" alt="FOSDEM Sakila and Slonik" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;If you’re wondering, like many others, why Slonik and why an elephant: &lt;a href="https://learnsql.com/blog/the-history-of-slonik-the-postgresql-elephant-logo/" target="_blank" rel="noopener noreferrer"&gt;here’s a nicely written history lesson&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After an exciting but also energy-draining day, I enjoyed a Percona crew/team dinner at BrewDog, with some great conversations and good food. &lt;a href="https://www.reddit.com/r/Homebrewing/comments/47icau/brewdog_just_open_sourced_all_their_recipes/" target="_blank" rel="noopener noreferrer"&gt;Fun fact: did you know that BrewDog is also open source?&lt;/a&gt; I couldn’t stay too long - sorry about that - but I had another date. The famous Floor Drees kept up tradition and organized another karaoke event that I couldn’t miss; having missed the earlier editions, I definitely wanted to join this one. What can I say apart from: thanks, Floor, for this great tradition. Yes, I had a hard time talking the next day, but damn, I had fun singing Swedish, Polish, German, and English songs - and yes, I most likely misunderstood all of them, as usual.&lt;/p&gt;
&lt;p&gt;Too many songs for my voice and maybe a “soft drink or two” later, I fell into my bed like a stone, and couldn’t really accept that my alarm clock went off almost five minutes later (at least that’s how it felt to me).&lt;/p&gt;
&lt;h2 id="fosdem-day-2"&gt;FOSDEM Day 2&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2026/02/fosdem-perconabooth-volunteering_hu_1424cc475b89471a.jpeg 480w, https://percona.community/blog/2026/02/fosdem-perconabooth-volunteering_hu_72d624d5738f7a58.jpeg 768w, https://percona.community/blog/2026/02/fosdem-perconabooth-volunteering_hu_6c8ccbd6d626db60.jpeg 1400w"
src="https://percona.community/blog/2026/02/fosdem-perconabooth-volunteering.jpeg" alt="FOSDEM Kai Percona Booth Volunteering" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;No whining helped - just getting up and making myself ready for Day 2 of FOSDEM, which started with another round of volunteering at the Postgres and Percona booths. Both basically matched the previous day’s experience, apart from a noticeably quieter and less crowded space - it seems I wasn’t the only one singing the night before ;-).&lt;/p&gt;
&lt;p&gt;With that, thanks a lot to everyone who made this great FOSDEM happen. I’ll now find out whether the Deutsche Bahn restaurant actually works this time, as I need coffee - a big one, maybe two… See you all next year, or at another event this year.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Stay on top of Postgres development without the inbox overwhelm. Explore &lt;a href="https://hackorum.dev/" target="_blank" rel="noopener noreferrer"&gt;hackorum.dev&lt;/a&gt; today and share your feedback with us.&lt;/p&gt;&lt;/blockquote&gt;</content:encoded>
      <author>Kai Wagner</author>
      <category>Percona</category>
      <category>PostgreSQL</category>
      <category>Community</category>
      <category>Events</category>
      <category>FOSDEM</category>
      <category>pg_kwagner</category>
      <media:thumbnail url="https://percona.community/blog/2026/02/FOSDEM_logo.svg_hu_ae522c5838afabab.jpg"/>
      <media:content url="https://percona.community/blog/2026/02/FOSDEM_logo.svg_hu_27fd666d652e3786.jpg" medium="image"/>
    </item>
    <item>
      <title>Tuning MySQL for Performance: The Variables That Actually Matter</title>
      <link>https://percona.community/blog/2026/02/01/tuning-mysql-for-performance-the-variables-that-actually-matter/</link>
      <guid>https://percona.community/blog/2026/02/01/tuning-mysql-for-performance-the-variables-that-actually-matter/</guid>
      <pubDate>Sun, 01 Feb 2026 00:00:00 UTC</pubDate>
      <description>There is a special kind of boredom that only database people know. The kind where you stare at a server humming along and think, surely there is something here I can tune. Good news: there is.</description>
      <content:encoded>&lt;p&gt;There is a special kind of boredom that only database people know. The kind where you stare at a server humming along and think, &lt;em&gt;surely there is something here I can tune&lt;/em&gt;. Good news: there is.&lt;/p&gt;
&lt;p&gt;This post walks through the &lt;strong&gt;most important MySQL variables to tune for performance&lt;/strong&gt;, why they matter, and when touching them helps versus when it quietly makes things worse. This is written with &lt;strong&gt;InnoDB-first workloads&lt;/strong&gt; in mind, because let’s be honest, that’s almost everyone.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="1-innodb_buffer_pool_size"&gt;1. &lt;code&gt;innodb_buffer_pool_size&lt;/code&gt;&lt;/h2&gt;
&lt;h3 id="real-metrics-to-watch"&gt;Real metrics to watch&lt;/h3&gt;
&lt;p&gt;Before touching this variable, look at these:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_buffer_pool_read%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Key fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Innodb_buffer_pool_reads&lt;/code&gt; – physical reads from disk&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Innodb_buffer_pool_read_requests&lt;/code&gt; – logical reads&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Rule of thumb:&lt;/strong&gt;
If &lt;code&gt;reads / read_requests&lt;/code&gt; &gt; 1–2%, your buffer pool is too small.&lt;/p&gt;
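&lt;p&gt;The ratio check above is easy to script. A minimal Python sketch - the counter values below are illustrative, not from a real server:&lt;/p&gt;

```python
# Buffer pool miss ratio from SHOW GLOBAL STATUS counters.
# Illustrative values; substitute your own server's numbers.
status = {
    "Innodb_buffer_pool_reads": 12_000,             # physical reads from disk
    "Innodb_buffer_pool_read_requests": 9_500_000,  # logical read requests
}

miss_ratio = status["Innodb_buffer_pool_reads"] / status["Innodb_buffer_pool_read_requests"]
print(f"miss ratio: {miss_ratio:.2%}")  # above 1-2%: buffer pool likely too small
```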
&lt;h3 id="example-graph"&gt;Example graph&lt;/h3&gt;
&lt;p&gt;Plot &lt;code&gt;Innodb_buffer_pool_reads&lt;/code&gt; over time. A healthy system shows a flat or gently rising line. Spikes that look like a city skyline usually mean memory pressure or a cold cache.&lt;/p&gt;
&lt;p&gt;If MySQL performance had a crown jewel, this would be it.&lt;/p&gt;
&lt;h3 id="what-it-does"&gt;What it does&lt;/h3&gt;
&lt;p&gt;The InnoDB buffer pool caches table data and indexes in memory. Reads served from RAM are fast. Reads from disk are… character building.&lt;/p&gt;
&lt;h3 id="how-to-tune-it"&gt;How to tune it&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Dedicated DB server: &lt;strong&gt;60–75% of system RAM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Shared server: be conservative and leave memory for the OS and other services&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'innodb_buffer_pool_size'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="pro-tip"&gt;Pro tip&lt;/h3&gt;
&lt;p&gt;If your working set fits in the buffer pool, MySQL feels magical. If it doesn’t, no amount of query tuning will save you.&lt;/p&gt;
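&lt;p&gt;The 60–75% guideline translates into a trivial calculation. A sketch, assuming a 64GB dedicated host purely for illustration:&lt;/p&gt;

```python
# Rough buffer pool sizing for a dedicated database server:
# 60-75% of system RAM, rounded down to whole gigabytes.
total_ram_gb = 64  # illustrative host size; substitute your own

low = int(total_ram_gb * 0.60)
high = int(total_ram_gb * 0.75)
print(f"innodb_buffer_pool_size candidate range: {low}G to {high}G")
```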
&lt;hr&gt;
&lt;h2 id="2-innodb_buffer_pool_instances"&gt;2. &lt;code&gt;innodb_buffer_pool_instances&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;This one matters once memory gets big.&lt;/p&gt;
&lt;h3 id="what-it-does-1"&gt;What it does&lt;/h3&gt;
&lt;p&gt;Splits the buffer pool into multiple instances to reduce internal mutex contention.&lt;/p&gt;
&lt;h3 id="how-to-tune-it-1"&gt;How to tune it&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Only relevant if buffer pool is &lt;strong&gt;≥ 1GB&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Rule of thumb: &lt;strong&gt;1 instance per 1–2GB&lt;/strong&gt;, max 8&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'innodb_buffer_pool_instances'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="gotcha"&gt;Gotcha&lt;/h3&gt;
&lt;p&gt;More is not always better. Too many instances waste memory and can hurt performance.&lt;/p&gt;
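&lt;p&gt;The rule of thumb and the cap can be captured in a few lines. This is a sketch of the heuristic described above, not an official formula:&lt;/p&gt;

```python
# Rule-of-thumb for innodb_buffer_pool_instances: one instance per
# 1-2 GB of buffer pool, capped at 8; irrelevant below 1 GB.
def suggested_instances(buffer_pool_gb: float, gb_per_instance: int = 2) -> int:
    if buffer_pool_gb >= 1:
        return min(8, max(1, int(buffer_pool_gb // gb_per_instance)))
    return 1  # the setting has no real effect on small pools

for size_gb in (0.5, 4, 16, 64):
    print(size_gb, "GB ->", suggested_instances(size_gb))
```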
&lt;hr&gt;
&lt;h2 id="3-innodb_log_file_size"&gt;3. &lt;code&gt;innodb_log_file_size&lt;/code&gt;&lt;/h2&gt;
&lt;h3 id="real-metrics-to-watch-1"&gt;Real metrics to watch&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_log%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Pay attention to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Innodb_log_waits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Innodb_log_write_requests&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;If &lt;code&gt;Innodb_log_waits&lt;/code&gt; is non-zero&lt;/strong&gt;, redo logs are too small for your write rate.&lt;/p&gt;
&lt;h3 id="example-graph-1"&gt;Example graph&lt;/h3&gt;
&lt;p&gt;Graph &lt;code&gt;Innodb_log_waits&lt;/code&gt; as a rate per second. Ideally, this line hugs zero like it’s afraid of heights.&lt;/p&gt;
&lt;p&gt;This variable controls how calmly MySQL handles write-heavy workloads.&lt;/p&gt;
&lt;h3 id="what-it-does-2"&gt;What it does&lt;/h3&gt;
&lt;p&gt;Defines the size of redo logs. Larger logs mean fewer checkpoints and smoother writes.&lt;/p&gt;
&lt;h3 id="how-to-tune-it-2"&gt;How to tune it&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;OLTP workloads: &lt;strong&gt;1–4GB total redo log&lt;/strong&gt; is common&lt;/li&gt;
&lt;li&gt;Large transactions benefit from larger logs&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'innodb_log_file_size'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="warning"&gt;Warning&lt;/h3&gt;
&lt;p&gt;Changing this requires a restart. Plan accordingly or accept the wrath of your on-call future self.&lt;/p&gt;
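&lt;p&gt;One common sizing approach - a heuristic, not a hard rule - is to measure how much redo MySQL writes per second and size the log to absorb roughly an hour of writes. A sketch with made-up counter samples:&lt;/p&gt;

```python
# Sample Innodb_os_log_written twice, a fixed interval apart, and
# derive a redo log size from the write rate. Counter values are
# illustrative; take real samples from SHOW GLOBAL STATUS.
sample_interval_s = 60
log_written_before = 1_250_000_000
log_written_after = 1_325_000_000

bytes_per_second = (log_written_after - log_written_before) / sample_interval_s
redo_bytes = bytes_per_second * 3600  # hold about one hour of redo
print(f"suggested total redo log size: {redo_bytes / 2**30:.1f} GiB")
```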
&lt;hr&gt;
&lt;h2 id="4-innodb_flush_log_at_trx_commit"&gt;4. &lt;code&gt;innodb_flush_log_at_trx_commit&lt;/code&gt;&lt;/h2&gt;
&lt;h3 id="real-metrics-to-watch-2"&gt;Real metrics to watch&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Innodb_os_log_fsyncs'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Switching from &lt;code&gt;1&lt;/code&gt; to &lt;code&gt;2&lt;/code&gt; often reduces fsyncs by &lt;strong&gt;orders of magnitude&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id="example-graph-2"&gt;Example graph&lt;/h3&gt;
&lt;p&gt;Overlay two lines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Transactions per second&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Innodb_os_log_fsyncs per second&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On busy systems, this graph alone can justify the change to skeptical auditors.&lt;/p&gt;
&lt;p&gt;Performance versus durability, the eternal duel.&lt;/p&gt;
&lt;h3 id="what-it-does-3"&gt;What it does&lt;/h3&gt;
&lt;p&gt;Controls how often redo logs are flushed to disk.&lt;/p&gt;
&lt;h3 id="common-values"&gt;Common values&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;1&lt;/code&gt; – Safest, slowest (flush every commit)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;2&lt;/code&gt; – Very popular compromise&lt;/li&gt;
&lt;li&gt;&lt;code&gt;0&lt;/code&gt; – Fast, risky&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'innodb_flush_log_at_trx_commit'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="reality-check"&gt;Reality check&lt;/h3&gt;
&lt;p&gt;For many production systems, &lt;strong&gt;&lt;code&gt;2&lt;/code&gt; delivers massive performance gains&lt;/strong&gt; with acceptable risk, especially with reliable storage.&lt;/p&gt;
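&lt;p&gt;A quick way to sanity-check the effect is to compare fsync and commit counter deltas taken over the same interval. A sketch with made-up numbers:&lt;/p&gt;

```python
# fsyncs per committed transaction, from two counter deltas taken
# over the same interval (illustrative values).
fsyncs_delta = 90_000   # delta of Innodb_os_log_fsyncs
commits_delta = 88_000  # delta of Com_commit

fsyncs_per_commit = fsyncs_delta / commits_delta
print(f"{fsyncs_per_commit:.2f} fsyncs per commit")
# roughly one per commit with setting 1; far fewer with setting 2
```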
&lt;hr&gt;
&lt;h2 id="5-innodb_flush_method"&gt;5. &lt;code&gt;innodb_flush_method&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;This decides how MySQL talks to your disks.&lt;/p&gt;
&lt;h3 id="what-it-does-4"&gt;What it does&lt;/h3&gt;
&lt;p&gt;Controls whether MySQL uses OS cache or bypasses it.&lt;/p&gt;
&lt;h3 id="recommended"&gt;Recommended&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;ini&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-ini" data-lang="ini"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;innodb_flush_method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;O_DIRECT&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This avoids double-buffering between MySQL and the OS page cache.&lt;/p&gt;
&lt;h3 id="caveat"&gt;Caveat&lt;/h3&gt;
&lt;p&gt;Some filesystems and older kernels behave differently. Always test.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="6-max_connections"&gt;6. &lt;code&gt;max_connections&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;This is not a performance knob. It is a &lt;strong&gt;damage limiter&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id="what-it-does-5"&gt;What it does&lt;/h3&gt;
&lt;p&gt;Caps the number of concurrent client connections.&lt;/p&gt;
&lt;h3 id="why-it-matters"&gt;Why it matters&lt;/h3&gt;
&lt;p&gt;Each connection consumes memory. Too many and MySQL dies spectacularly.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'max_connections'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="advice"&gt;Advice&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Set it realistically&lt;/li&gt;
&lt;li&gt;Use connection pooling&lt;/li&gt;
&lt;li&gt;Monitor &lt;code&gt;Threads_connected&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
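&lt;p&gt;To see why a realistic cap matters, here is a rough worst-case memory estimate per connection. This is a deliberate simplification - these buffers are allocated on demand, so real usage is usually far lower - and the sizes are illustrative, not recommendations:&lt;/p&gt;

```python
# Worst-case per-connection memory, summing common per-session
# buffers (sizes in KB are illustrative, not recommended values).
per_connection_kb = {
    "sort_buffer_size": 256,
    "join_buffer_size": 256,
    "read_buffer_size": 128,
    "read_rnd_buffer_size": 256,
    "thread_stack": 1024,
}

max_connections = 500
worst_case_gb = sum(per_connection_kb.values()) * max_connections / 2**20
print(f"worst case: about {worst_case_gb:.2f} GB on top of the buffer pool")
```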
&lt;hr&gt;
&lt;h2 id="7-thread_cache_size"&gt;7. &lt;code&gt;thread_cache_size&lt;/code&gt;&lt;/h2&gt;
&lt;h3 id="real-metrics-to-watch-3"&gt;Real metrics to watch&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Threads%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Key fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Threads_created&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Connections&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If &lt;code&gt;Threads_created / Connections&lt;/code&gt; stays above a few percent, your cache is undersized.&lt;/p&gt;
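&lt;p&gt;That miss rate is a one-liner to compute. The counters below are illustrative:&lt;/p&gt;

```python
# Thread cache miss rate from SHOW GLOBAL STATUS counters.
threads_created = 4_200   # threads created because none were cached
connections = 1_000_000   # total connection attempts

miss_rate = threads_created / connections
print(f"thread cache miss rate: {miss_rate:.2%}")  # above a few percent: grow thread_cache_size
```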
&lt;h3 id="example-graph-3"&gt;Example graph&lt;/h3&gt;
&lt;p&gt;Graph &lt;code&gt;Threads_created&lt;/code&gt; as a counter. A healthy system shows a curve that flattens over time, not a staircase.&lt;/p&gt;
&lt;p&gt;Small change, measurable win.&lt;/p&gt;
&lt;h3 id="what-it-does-6"&gt;What it does&lt;/h3&gt;
&lt;p&gt;Caches threads so MySQL doesn’t constantly create and destroy them.&lt;/p&gt;
&lt;h3 id="how-to-tune"&gt;How to tune&lt;/h3&gt;
&lt;p&gt;Watch:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Threads_created'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If it keeps climbing, increase &lt;code&gt;thread_cache_size&lt;/code&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="8-table_open_cache-and-table_definition_cache"&gt;8. &lt;code&gt;table_open_cache&lt;/code&gt; and &lt;code&gt;table_definition_cache&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Metadata matters more than people expect.&lt;/p&gt;
&lt;h3 id="what-they-do"&gt;What they do&lt;/h3&gt;
&lt;p&gt;Cache open tables and table definitions to avoid repeated filesystem access.&lt;/p&gt;
&lt;h3 id="symptoms-of-being-too-low"&gt;Symptoms of being too low&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;High &lt;code&gt;Opened_tables&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Metadata lock waits&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'table_open_cache'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VARIABLES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'table_definition_cache'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
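&lt;p&gt;A steadily rising &lt;code&gt;Opened_tables&lt;/code&gt; counter under steady load is the clearest symptom. A sketch comparing successive samples (made-up values):&lt;/p&gt;

```python
# Opened_tables sampled once per minute (illustrative values).
# Sustained growth under steady load suggests raising table_open_cache.
opened_tables_samples = [15_000, 15_040, 15_090, 15_160]

deltas = [b - a for a, b in zip(opened_tables_samples, opened_tables_samples[1:])]
print("tables opened per minute:", deltas)
```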
&lt;hr&gt;
&lt;h2 id="9-tmp_table_size-and-max_heap_table_size"&gt;9. &lt;code&gt;tmp_table_size&lt;/code&gt; and &lt;code&gt;max_heap_table_size&lt;/code&gt;&lt;/h2&gt;
&lt;h3 id="real-metrics-to-watch-4"&gt;Real metrics to watch&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GLOBAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Created_tmp%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Watch:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Created_tmp_tables&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Created_tmp_disk_tables&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If disk temp tables exceed &lt;strong&gt;5–10%&lt;/strong&gt; of total temp tables, queries are spilling to disk.&lt;/p&gt;
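&lt;p&gt;The spill percentage is one division away. Counters below are illustrative:&lt;/p&gt;

```python
# Share of temporary tables that spilled to disk.
created_tmp_tables = 80_000
created_tmp_disk_tables = 6_500

disk_share = created_tmp_disk_tables / created_tmp_tables
print(f"disk temp tables: {disk_share:.1%}")  # above roughly 5-10%: queries are spilling
```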
&lt;h3 id="example-graph-4"&gt;Example graph&lt;/h3&gt;
&lt;p&gt;Stacked area chart:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In-memory temp tables&lt;/li&gt;
&lt;li&gt;Disk-based temp tables&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Disk usage creeping upward usually points to reporting queries pretending to be OLTP.&lt;/p&gt;
&lt;p&gt;Disk-based temp tables are silent performance killers.&lt;/p&gt;
&lt;h3 id="what-they-do-1"&gt;What they do&lt;/h3&gt;
&lt;p&gt;Limit how large in-memory temp tables can grow.&lt;/p&gt;
&lt;h3 id="how-to-tune-1"&gt;How to tune&lt;/h3&gt;
&lt;p&gt;Set both to the same value:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;ini&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-ini" data-lang="ini"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;tmp_table_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;256M&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;max_heap_table_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;256M&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="reality"&gt;Reality&lt;/h3&gt;
&lt;p&gt;This helps complex queries, but bad queries still need fixing.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="10-slow_query_log-and-long_query_time"&gt;10. &lt;code&gt;slow_query_log&lt;/code&gt; and &lt;code&gt;long_query_time&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Not a performance variable, but a performance &lt;em&gt;revelation&lt;/em&gt;.&lt;/p&gt;
&lt;h3 id="why-it-matters-1"&gt;Why it matters&lt;/h3&gt;
&lt;p&gt;You cannot tune what you cannot see.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;ini&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-ini" data-lang="ini"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;slow_query_log&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;ON&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="na"&gt;long_query_time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;1&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This turns guesswork into evidence.&lt;/p&gt;
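&lt;p&gt;On MySQL 8.0+ you don’t even need a restart — a sketch using &lt;code&gt;SET PERSIST&lt;/code&gt; so the settings also survive the next one:&lt;/p&gt;

```sql
SET PERSIST slow_query_log = ON;
SET PERSIST long_query_time = 1;
-- Optional, noisy but revealing:
SET PERSIST log_queries_not_using_indexes = ON;
```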
&lt;hr&gt;
&lt;h2 id="a-note-on-graphing-these-metrics"&gt;A Note on Graphing These Metrics&lt;/h2&gt;
&lt;p&gt;You don’t need exotic tools. These work well:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;performance_schema&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sys&lt;/code&gt; schema views&lt;/li&gt;
&lt;li&gt;Prometheus + mysqld_exporter&lt;/li&gt;
&lt;li&gt;Percona Monitoring and Management (PMM)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Golden rule:&lt;/strong&gt; Always graph rates, not raw counters.&lt;/p&gt;
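&lt;p&gt;With Prometheus and mysqld_exporter, for example, the idea looks like this (the metric name follows the exporter’s usual &lt;code&gt;mysql_global_status_*&lt;/code&gt; convention; verify the exact name your exporter version emits):&lt;/p&gt;

```promql
# Per-second rate over a 5-minute window — graph this, not the raw counter
rate(mysql_global_status_created_tmp_disk_tables[5m])
```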
&lt;hr&gt;
&lt;h2 id="final-thoughts"&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;Tuning MySQL is less about endless knobs and more about &lt;strong&gt;understanding pressure points&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Memory first&lt;/li&gt;
&lt;li&gt;I/O second&lt;/li&gt;
&lt;li&gt;Concurrency third&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Most performance wins come from &lt;strong&gt;a handful of variables&lt;/strong&gt;, not heroic config files full of folklore.&lt;/p&gt;
&lt;p&gt;If you tune one thing today, make it the buffer pool. If you tune two, add redo logs. Everything else is refinement.&lt;/p&gt;
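&lt;p&gt;As a starting-point sketch only — the sizes below are placeholders, not recommendations; size the buffer pool to your host’s RAM and the redo capacity to your write bursts:&lt;/p&gt;

```ini
innodb_buffer_pool_size = 12G     # roughly 60-75% of RAM on a dedicated DB host
innodb_redo_log_capacity = 2G     # MySQL 8.0.30+; older servers use innodb_log_file_size
```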
&lt;p&gt;And if you’re bored again tomorrow, congratulations. You’re officially a database person.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Percona</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>tuning</category>
      <category>innodb</category>
      <media:thumbnail url="https://percona.community/blog/2026/02/tuning_mysql_for_performance_hu_90816541b912e5ef.jpg"/>
      <media:content url="https://percona.community/blog/2026/02/tuning_mysql_for_performance_hu_63ec80e68bf1ede3.jpg" medium="image"/>
    </item>
    <item>
      <title>Configuring the Component Keyring in Percona Server and PXC 8.4</title>
      <link>https://percona.community/blog/2026/01/13/configuring-the-component-keyring-in-percona-server-and-pxc-8.4/</link>
      <guid>https://percona.community/blog/2026/01/13/configuring-the-component-keyring-in-percona-server-and-pxc-8.4/</guid>
      <pubDate>Tue, 13 Jan 2026 00:00:00 UTC</pubDate>
      <description>Configuring the Component Keyring in Percona Server and PXC 8.4 (Or: how to make MySQL encryption boring, which is the goal)</description>
      <content:encoded>&lt;h1 id="configuring-the-component-keyring-in-percona-server-and-pxc-84"&gt;Configuring the Component Keyring in Percona Server and PXC 8.4&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;(Or: how to make MySQL encryption boring, which is the goal)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Encryption is one of those things everyone agrees is important, right up until MySQL refuses to start and you’re staring at a JSON file wondering which brace ruined your evening.&lt;/p&gt;
&lt;p&gt;With &lt;strong&gt;MySQL 8.4&lt;/strong&gt;, encryption has firmly moved into the &lt;strong&gt;component world&lt;/strong&gt;, and if you’re running &lt;strong&gt;Percona Server 8.4&lt;/strong&gt; or &lt;strong&gt;Percona XtraDB Cluster (PXC) 8.4&lt;/strong&gt;, the supported path forward is the &lt;code&gt;component_keyring_file&lt;/code&gt; component.&lt;/p&gt;
&lt;p&gt;The good news: the setup is mostly identical for Percona Server and PXC.&lt;br&gt;
The bad news: PXC 8.4.4 and 8.4.5 shipped with a bug that makes this less fun than it should be.&lt;/p&gt;
&lt;p&gt;Let’s walk through a setup that works, keeps your keys locked down, and avoids the usual landmines.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="step-1-tell-mysql-which-component-to-load"&gt;Step 1: Tell MySQL Which Component to Load&lt;/h2&gt;
&lt;p&gt;Components are registered using &lt;strong&gt;JSON&lt;/strong&gt;, not traditional MySQL configuration syntax. This is important, because MySQL will not politely warn you if you get it wrong. It will simply refuse to start.&lt;/p&gt;
&lt;p&gt;Create the file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo vi /usr/sbin/mysqld.my&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;json&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-json" data-lang="json"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"components"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"file://component_keyring_file"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Take a second to double-check the formatting. One missing quote here will cost you more time than you want to admit.&lt;/p&gt;
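&lt;p&gt;One cheap way to catch that missing quote before MySQL does (assuming &lt;code&gt;python3&lt;/code&gt; is on the box; any JSON validator works):&lt;/p&gt;

```bash
# Prints the parsed JSON on success, or a parse error with line/column on failure
sudo python3 -m json.tool /usr/sbin/mysqld.my
```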
&lt;p&gt;Now lock it down:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown root:root /usr/sbin/mysqld.my
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chmod &lt;span class="m"&gt;644&lt;/span&gt; /usr/sbin/mysqld.my&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This is configuration, not data. MySQL only needs to read it.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="step-2-prepare-the-keyring-directory-handle-with-care"&gt;Step 2: Prepare the Keyring Directory (Handle With Care)&lt;/h2&gt;
&lt;p&gt;This directory will hold encryption keys. Treat it accordingly.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /var/lib
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir mysql-keyring
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown mysql:mysql mysql-keyring
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chmod &lt;span class="m"&gt;750&lt;/span&gt; mysql-keyring&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A simple rule that saves headaches:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;mysql owns the keys&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MySQL is allowed to access them&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Nobody else gets any ideas&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="step-3-configure-the-keyring-component-itself"&gt;Step 3: Configure the Keyring Component Itself&lt;/h2&gt;
&lt;p&gt;Next, move to the plugin directory:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /usr/lib64/mysql/plugin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create the component configuration file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo vi component_keyring_file.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;json&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-json" data-lang="json"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"/var/lib/mysql-keyring/component_keyring_file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"read_only"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This file tells MySQL where the keyring lives and ensures it can’t be casually modified at runtime.&lt;/p&gt;
&lt;p&gt;Set ownership and permissions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown root:root component_keyring_file.cnf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chmod &lt;span class="m"&gt;640&lt;/span&gt; component_keyring_file.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Again: configuration belongs to root. MySQL just reads it.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="step-4-the-pxc-844--845-bug-yes-theres-one"&gt;Step 4: The PXC 8.4.4 / 8.4.5 Bug (Yes, There’s One)&lt;/h2&gt;
&lt;p&gt;If you’re running &lt;strong&gt;Percona Server&lt;/strong&gt;, you can skip this entire section and enjoy your day.&lt;/p&gt;
&lt;p&gt;If you’re running &lt;strong&gt;Percona XtraDB Cluster 8.4.4 or 8.4.5&lt;/strong&gt;, there is a known issue with plugin paths that prevents the component keyring from loading correctly. This was fixed in &lt;strong&gt;PXC 8.4.6&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If upgrading isn’t an option yet, you’ll need one of the following workarounds.&lt;/p&gt;
&lt;h3 id="option-a-create-a-symlink-preferred"&gt;Option A: Create a Symlink (Preferred)&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo ln -s &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt;/usr/bin/pxc_extra/pxb-8.4/lib/lib64/xtrabackup/plugin &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt;/usr/bin/pxc_extra/pxb-8.4/lib/plugin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="option-b-copy-the-plugin-directory"&gt;Option B: Copy the Plugin Directory&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo cp -ar &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt;/usr/bin/pxc_extra/pxb-8.4/lib/lib64/xtrabackup/plugin &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt;/usr/bin/pxc_extra/pxb-8.4/lib&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you’re on &lt;strong&gt;PXC 8.4.6 or newer&lt;/strong&gt;, this problem is already behind you and you can safely pretend it never existed.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="step-5-restart-mysql"&gt;Step 5: Restart MySQL&lt;/h2&gt;
&lt;p&gt;Time for the moment of truth:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo systemctl restart mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;(The service may be named &lt;code&gt;mysqld&lt;/code&gt; instead, depending on your distribution.)&lt;/p&gt;
&lt;p&gt;If MySQL starts cleanly, you’re doing well. If not, go back and check your JSON files. It’s almost always the JSON.&lt;/p&gt;
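&lt;p&gt;When it doesn’t start, the error log usually names the offending file. Where that log lives varies by distribution — these are the common spots:&lt;/p&gt;

```bash
sudo journalctl -u mysql --no-pager -n 50   # systemd journal (unit may be mysqld)
sudo tail -n 50 /var/log/mysqld.log         # typical RHEL-family location
sudo tail -n 50 /var/log/mysql/error.log    # typical Debian/Ubuntu location
```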
&lt;hr&gt;
&lt;h2 id="step-6-verify-the-keyring-is-actually-loaded"&gt;Step 6: Verify the Keyring Is Actually Loaded&lt;/h2&gt;
&lt;p&gt;Never assume. Always verify.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keyring_component_status&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You should see the &lt;code&gt;component_keyring_file&lt;/code&gt; listed and active. If it’s there, the keyring is live.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;---------------------+-----------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS_KEY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATUS_VALUE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;---------------------+-----------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Component_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;component_keyring_file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Author&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Oracle&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Corporation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;License&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;GPL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Implementation_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;component_keyring_file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;Version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Component_status&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Active&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Data_file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keyring&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;component_keyring_file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Read_only&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Yes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;---------------------+-----------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
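&lt;p&gt;For a functional smoke test beyond the status table, try creating an encrypted table in a scratch schema (the &lt;code&gt;keyring_smoke&lt;/code&gt; name is just an example):&lt;/p&gt;

```sql
CREATE DATABASE IF NOT EXISTS keyring_smoke;
-- If this succeeds, InnoDB was able to fetch the master key from the keyring
CREATE TABLE keyring_smoke.t1 (id INT PRIMARY KEY) ENCRYPTION='Y';
DROP DATABASE keyring_smoke;
```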
&lt;hr&gt;
&lt;h2 id="a-note-for-percona-server-users"&gt;A Note for Percona Server Users&lt;/h2&gt;
&lt;p&gt;Percona Server may still include &lt;strong&gt;legacy keyring plugins&lt;/strong&gt; such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;keyring_file&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;keyring_vault&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Do not mix legacy keyring plugins with component keyrings. They come from different eras of MySQL design and do not coexist peacefully.&lt;/p&gt;
&lt;p&gt;Choose one model. For MySQL 8.4 and forward, &lt;strong&gt;components are the future&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="additional-steps-for-percona-xtradb-cluster-pxc"&gt;Additional Steps for Percona XtraDB Cluster (PXC)&lt;/h2&gt;
&lt;p&gt;Percona XtraDB Cluster introduces one critical difference compared to standalone Percona Server: the keyring file itself is not replicated by Galera. Only metadata and transactional state are replicated. The encryption keys remain node-local filesystem artifacts and must be handled deliberately.&lt;/p&gt;
&lt;h3 id="node-1-establish-the-authoritative-keyring"&gt;Node 1: Establish the Authoritative Keyring&lt;/h3&gt;
&lt;p&gt;Choose a single node to initialize the keyring. This is typically Node1, but the choice itself is not important as long as you are consistent.&lt;/p&gt;
&lt;p&gt;On this node:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Complete all previous steps in this document&lt;/li&gt;
&lt;li&gt;Start MySQL successfully&lt;/li&gt;
&lt;li&gt;Verify the keyring component is loaded:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keyring_component_status&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once this node is running, the file below will be created and populated:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;swift&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-swift" data-lang="swift"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keyring&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;component_keyring_file&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This file becomes the authoritative source of encryption keys for the entire cluster.&lt;/p&gt;
&lt;h3 id="why-the-keyring-file-must-be-copied"&gt;Why the Keyring File Must Be Copied&lt;/h3&gt;
&lt;p&gt;PXC ensures that encrypted data remains readable on all nodes, but it does not distribute encryption keys themselves. Each node must have access to the same key material, or encrypted tablespaces will fail to open.&lt;/p&gt;
&lt;p&gt;If a node starts without the correct keyring file, you may see:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tablespace open failures&lt;/li&gt;
&lt;li&gt;Startup errors related to encryption&lt;/li&gt;
&lt;li&gt;Inconsistent behavior during SST or IST&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is expected behavior and not a bug.&lt;/p&gt;
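&lt;p&gt;When in doubt, the error log tells you quickly whether a node is missing key material. Here is a minimal sketch; the helper name, log path, and message patterns are assumptions, so adjust them to your &lt;code&gt;log_error&lt;/code&gt; setting:&lt;/p&gt;

```shell
# Hypothetical helper: scan a MySQL error log for keyring/encryption
# failures before digging deeper into a node that will not start.
check_keyring_errors() {
  log_file="$1"
  # Match the messages typically emitted when key material is missing.
  grep -iE 'keyring|encrypt' "$log_file"
}

# Example: check_keyring_errors /var/log/mysqld.log
```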
&lt;h3 id="distribute-the-keyring-file-to-other-nodes"&gt;Distribute the Keyring File to Other Nodes&lt;/h3&gt;
&lt;p&gt;On each remaining PXC node:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ensure MySQL is stopped&lt;/li&gt;
&lt;li&gt;Create the keyring directory if it does not exist:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /var/lib/mysql-keyring
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown mysql:mysql /var/lib/mysql-keyring
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chmod &lt;span class="m"&gt;750&lt;/span&gt; /var/lib/mysql-keyring&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;Securely copy the keyring file from Node1:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;scp /var/lib/mysql-keyring/component_keyring_file node2:/var/lib/mysql-keyring/component_keyring_file&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt;
Do not modify the file. Do not recreate it. Do not allow MySQL to generate a new one on secondary nodes.&lt;/p&gt;
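&lt;p&gt;To confirm the copy is byte-identical, compare checksums. The sketch below compares two local files; on real nodes you would compare the local digest against the output of &lt;code&gt;sha256sum&lt;/code&gt; run over SSH on each host (the helper name is an assumption):&lt;/p&gt;

```shell
# Compare two keyring files by SHA-256 digest; prints MATCH or MISMATCH.
# In practice, compare the local digest against
# "ssh node2 sha256sum /var/lib/mysql-keyring/component_keyring_file".
same_keyring() {
  a=$(sha256sum "$1" | awk '{print $1}')
  b=$(sha256sum "$2" | awk '{print $1}')
  if [ "$a" = "$b" ]; then
    echo "MATCH"
  else
    echo "MISMATCH"
  fi
}
```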
&lt;h3 id="start-mysql-on-each-node-and-verify"&gt;Start MySQL on Each Node and Verify&lt;/h3&gt;
&lt;p&gt;After the keyring file is in place:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo systemctl start mysqld&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify the component is active:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keyring_component_status&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Each node should report &lt;code&gt;component_keyring_file&lt;/code&gt; as loaded and active.&lt;/p&gt;
&lt;p&gt;At this point:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Encrypted tablespaces will open correctly&lt;/li&gt;
&lt;li&gt;SST and IST operations will succeed&lt;/li&gt;
&lt;li&gt;The cluster will behave consistently during restarts&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="operational-notes-and-best-practices"&gt;Operational Notes and Best Practices&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Treat the keyring file like a secret, not configuration&lt;/li&gt;
&lt;li&gt;Restrict access to root only&lt;/li&gt;
&lt;li&gt;Include the keyring file in your secure backup strategy&lt;/li&gt;
&lt;li&gt;When provisioning new nodes, copy the keyring file before first startup&lt;/li&gt;
&lt;li&gt;Never rotate or regenerate the keyring independently on individual nodes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If the keyring is lost and encrypted data exists, recovery is not possible.&lt;/p&gt;
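&lt;p&gt;As part of that backup strategy, even a simple timestamped, permission-restricted copy is better than nothing. Here is a sketch; the helper name and paths are illustrative, and in production you should encrypt the copy before moving it off-host:&lt;/p&gt;

```shell
# Take a timestamped, owner-only copy of the keyring file.
# Paths are illustrative; encrypt the copy before storing it off-host.
backup_keyring() {
  src="$1"       # e.g. /var/lib/mysql-keyring/component_keyring_file
  dest_dir="$2"  # e.g. /root/keyring-backups
  mkdir -p "$dest_dir"
  out="$dest_dir/component_keyring_file.$(date +%Y%m%d%H%M%S)"
  cp -p "$src" "$out"
  chmod 600 "$out"
  echo "$out"
}
```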
&lt;hr&gt;
&lt;h2 id="final-thoughts"&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;This setup works reliably for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Percona Server 8.4&lt;/li&gt;
&lt;li&gt;Percona XtraDB Cluster 8.4&lt;br&gt;
(with the known exception of 8.4.4–8.4.5)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Most failures come down to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Treating JSON like a &lt;code&gt;.cnf&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Loose ownership on sensitive files&lt;/li&gt;
&lt;li&gt;Forgetting the PXC-specific workaround&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once those are handled, the component keyring fades into the background where it belongs. And when it comes to encryption, boring, quiet, and uneventful is exactly the outcome you want.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <author>Stan Lipinski</author>
      <category>Opensource</category>
      <category>Percona</category>
      <category>key ring</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>PXC</category>
      <category>Security</category>
      <media:thumbnail url="https://percona.community/blog/2026/01/keyring-component_hu_55d2689b19226a60.jpg"/>
      <media:content url="https://percona.community/blog/2026/01/keyring-component_hu_248d0e8fdd379cd5.jpg" medium="image"/>
    </item>
    <item>
      <title>What is New in Percona Toolkit 3.7.1</title>
      <link>https://percona.community/blog/2025/12/17/what-is-new-in-percona-toolkit-3.7.1/</link>
      <guid>https://percona.community/blog/2025/12/17/what-is-new-in-percona-toolkit-3.7.1/</guid>
      <pubDate>Wed, 17 Dec 2025 00:00:00 UTC</pubDate>
      <description>Percona Toolkit 3.7.1 has been released on Dec 17, 2025. The most important updates in this version are:</description>
      <content:encoded>&lt;p&gt;Percona Toolkit 3.7.1 has been released on &lt;strong&gt;Dec 17, 2025&lt;/strong&gt;. The most important updates in this version are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Finalized SSL/TLS support for MySQL&lt;/li&gt;
&lt;li&gt;Added support for Debian 13 and Amazon Linux 2023&lt;/li&gt;
&lt;li&gt;Fixed MariaDB support broken in version 3.7.0&lt;/li&gt;
&lt;li&gt;Added options to skip certain collections in &lt;code&gt;pt-k8s-debug-collector&lt;/code&gt; and &lt;code&gt;pt-stalk&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Documentation improvements&lt;/li&gt;
&lt;li&gt;Other performance improvements&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this blog, I will outline the most significant changes. A full list of improvements and bug fixes can be found in the &lt;a href="https://docs.percona.com/percona-toolkit/release_notes.html" target="_blank" rel="noopener noreferrer"&gt;release notes&lt;/a&gt;.&lt;/p&gt;
&lt;h1 id="ssltls-support-for-mysql"&gt;SSL/TLS support for MySQL&lt;/h1&gt;
&lt;p&gt;Percona Toolkit historically did not have consistent SSL support. This was reported at &lt;a href="https://perconadev.atlassian.net/browse/PT-191" target="_blank" rel="noopener noreferrer"&gt;https://perconadev.atlassian.net/browse/PT-191&lt;/a&gt;. Version 3.7.0 introduced the &lt;code&gt;s&lt;/code&gt; DSN option, which instructs &lt;code&gt;DBD::mysql&lt;/code&gt; to open a secure connection to the database. This version also adds the command-line option &lt;code&gt;--mysql-ssl&lt;/code&gt; and its short form &lt;code&gt;-s&lt;/code&gt; to all tools. All other SSL/TLS-related options, such as &lt;code&gt;ssl-ca&lt;/code&gt;, &lt;code&gt;ssl-cert&lt;/code&gt;, and &lt;code&gt;ssl-cipher&lt;/code&gt;, can be specified in the configuration file if necessary. This completes SSL/TLS support for MySQL. For more details, check &lt;a href="https://www.percona.com/blog/unlocking-secure-connections-ssl-tls-support-in-percona-toolkit/" target="_blank" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt;.&lt;/p&gt;
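&lt;p&gt;For example, a secure connection can now be requested in either form; the tool, host, and user below are placeholders:&lt;/p&gt;

```shell
# Long option added in this release:
pt-table-checksum --mysql-ssl h=db1.example.com,u=checksum_user --ask-pass

# Equivalent DSN attribute introduced in 3.7.0:
pt-table-checksum h=db1.example.com,u=checksum_user,s=1 --ask-pass
```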
&lt;h1 id="supported-platforms-update"&gt;Supported Platforms Update&lt;/h1&gt;
&lt;p&gt;Percona repositories now have Percona Toolkit packages for Debian 13 and Amazon Linux 2023. To install them, enable the &lt;code&gt;pt&lt;/code&gt; repository with the &lt;code&gt;percona-release&lt;/code&gt; utility. Ubuntu Focal has reached its EOL, and support for this platform has been removed. More information on Percona repositories is available in the &lt;a href="https://docs.percona.com/percona-software-repositories/index.html" target="_blank" rel="noopener noreferrer"&gt;User Reference Manual&lt;/a&gt;.&lt;/p&gt;
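&lt;p&gt;On an apt-based system such as Debian 13, the steps look roughly like this, assuming &lt;code&gt;percona-release&lt;/code&gt; is already installed and you run as root (see the manual linked above for the exact repository names):&lt;/p&gt;

```shell
# Enable the pt repository and install the toolkit package.
percona-release enable pt release
apt update
apt install percona-toolkit
```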
&lt;h1 id="regression-bug-fixes"&gt;Regression Bug Fixes&lt;/h1&gt;
&lt;p&gt;The recent major changes that introduced MySQL 8.4 support omitted the ignore-case modifier in the regular expression that checks whether the MySQL flavor is MariaDB. As a result, tools executed replication statements that are not compatible with MariaDB. Version 3.7.1 fixes the regular expression and restores MariaDB support (&lt;a href="https://perconadev.atlassian.net/browse/PT-2451" target="_blank" rel="noopener noreferrer"&gt;PT-2451&lt;/a&gt;). Future versions of Percona Toolkit will have better MariaDB support, including MariaDB-specific variants of the replication commands.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;pt-sift&lt;/code&gt; utility stopped working because the library it depends on, &lt;code&gt;alt_cmds.sh&lt;/code&gt;, was not included in the package (&lt;a href="https://perconadev.atlassian.net/browse/PT-2498" target="_blank" rel="noopener noreferrer"&gt;PT-2498&lt;/a&gt;). This was not caught during previous release testing because the regression test for the tool was not run. The omission is now fixed and the utility works properly again; the regression test has also been updated.&lt;/p&gt;
&lt;p&gt;The helper utility &lt;code&gt;version_cmp&lt;/code&gt; was shipped as a compiled binary whose source code was not available (&lt;a href="https://perconadev.atlassian.net/browse/PT-2469" target="_blank" rel="noopener noreferrer"&gt;PT-2469&lt;/a&gt;). This broke version checking on platforms not compatible with the unknown platform where the binary was originally compiled. The utility has now been rewritten as a Bourne-Again shell (Bash) script.&lt;/p&gt;
&lt;h1 id="modern-mysql-support"&gt;Modern MySQL Support&lt;/h1&gt;
&lt;p&gt;Percona Toolkit uses legacy MySQL syntax in many places to remain compatible with older versions of MySQL. In other places, it lacks modern MySQL diagnostic additions. This version takes the first steps toward improving the situation by adding features such as invisible index support in &lt;code&gt;pt-duplicate-key-checker&lt;/code&gt; (&lt;a href="https://github.com/percona/percona-toolkit/pull/996" target="_blank" rel="noopener noreferrer"&gt;PR-996&lt;/a&gt;) and &lt;code&gt;performance_schema.threads&lt;/code&gt; collection in &lt;code&gt;pt-stalk&lt;/code&gt; (&lt;a href="https://perconadev.atlassian.net/browse/PT-1718" target="_blank" rel="noopener noreferrer"&gt;PT-1718&lt;/a&gt;). Currently, data from &lt;code&gt;performance_schema.threads&lt;/code&gt; is collected along with the deprecated &lt;code&gt;information_schema.processlist&lt;/code&gt;. In the future, support for &lt;code&gt;information_schema.processlist&lt;/code&gt; will be deprecated, then removed.&lt;/p&gt;
&lt;p&gt;Future versions of Percona Toolkit will have more modern MySQL diagnostic support.&lt;/p&gt;
&lt;h1 id="performance-improvements"&gt;Performance Improvements&lt;/h1&gt;
&lt;p&gt;&lt;code&gt;pt-stalk&lt;/code&gt; now has a new option, &lt;code&gt;--skip-collection&lt;/code&gt;, that allows skipping one or more collections. Supported values for this option are: &lt;code&gt;ps-locks-transactions&lt;/code&gt;, &lt;code&gt;thread-variables&lt;/code&gt;, &lt;code&gt;innodbstatus&lt;/code&gt;, &lt;code&gt;lock-waits&lt;/code&gt;, &lt;code&gt;mysqladmin&lt;/code&gt;, &lt;code&gt;processlist&lt;/code&gt;, &lt;code&gt;rocksdbstatus&lt;/code&gt;, and &lt;code&gt;transactions&lt;/code&gt;. To skip two or more collections, separate them with a comma, e.g., &lt;code&gt;--skip-collection=processlist,innodbstatus&lt;/code&gt;. You will find more information at &lt;a href="https://perconadev.atlassian.net/browse/PT-2289" target="_blank" rel="noopener noreferrer"&gt;PT-2289&lt;/a&gt; and in the &lt;a href="https://docs.percona.com/percona-toolkit/pt-stalk.html" target="_blank" rel="noopener noreferrer"&gt;User Reference Manual for &lt;code&gt;pt-stalk&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
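&lt;p&gt;Putting it together, a one-off diagnostics run that skips two collections might look like this; only &lt;code&gt;--skip-collection&lt;/code&gt; is the new option discussed here, and the other flags are illustrative (check the manual for your version):&lt;/p&gt;

```shell
# Collect diagnostics once, skipping the processlist and
# InnoDB status collections.
pt-stalk --no-stalk --iterations 1 \
  --skip-collection=processlist,innodbstatus
```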
&lt;p&gt;&lt;code&gt;pt-k8s-debug-collector&lt;/code&gt; introduces the option &lt;code&gt;-skip-pod-summary&lt;/code&gt;, which allows skipping pod summary collections, such as &lt;code&gt;pt-mysql-summary&lt;/code&gt;, &lt;code&gt;pt-mongodb-summary&lt;/code&gt;, or &lt;code&gt;pg_gather&lt;/code&gt;. Check &lt;a href="https://perconadev.atlassian.net/browse/PT-2453" target="_blank" rel="noopener noreferrer"&gt;PT-2453&lt;/a&gt; and the &lt;a href="https://docs.percona.com/percona-toolkit/pt-k8s-debug-collector.html" target="_blank" rel="noopener noreferrer"&gt;User Reference Manual for &lt;code&gt;pt-k8s-debug-collector&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Originally, tool output was always buffered. This is usually good for performance, but you may want to disable buffering when you need to see a tool’s output sooner. For example, if you run &lt;code&gt;pt-archiver&lt;/code&gt; or &lt;code&gt;pt-table-checksum&lt;/code&gt; on a large table in Kubernetes, you won’t see progress (&lt;a href="https://perconadev.atlassian.net/browse/PT-2052" target="_blank" rel="noopener noreferrer"&gt;PT-2052&lt;/a&gt;) until the tool finishes. The new option, &lt;code&gt;--[no]buffer-stdout&lt;/code&gt;, lets you disable buffering when needed.&lt;/p&gt;
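&lt;p&gt;For instance, a hypothetical &lt;code&gt;pt-archiver&lt;/code&gt; run with buffering disabled, so progress lines appear while rows are being processed; the DSN values and the purge condition are placeholders:&lt;/p&gt;

```shell
# Purge matching rows while printing progress immediately.
pt-archiver --no-buffer-stdout --progress 1000 \
  --source h=source-host,D=mydb,t=big_table \
  --purge --where "archived = 1" --limit 500
```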
&lt;h2 id="incompatilbe-change"&gt;Incompatilbe change&lt;/h2&gt;
&lt;p&gt;Earlier, if &lt;code&gt;--chunk-size&lt;/code&gt; was specified for &lt;code&gt;pt-online-schema-change&lt;/code&gt;, the option &lt;code&gt;--chunk-time&lt;/code&gt; was ignored. As a result, a user either had to start with the default automatic chunk size, even when it was not effective for some tables, and wait while the chunk size was adjusted in subsequent iterations, or had to guess a fixed chunk size, which implies a time-consuming &lt;a href="https://en.wikipedia.org/wiki/Trial_and_error" target="_blank" rel="noopener noreferrer"&gt;trial and error&lt;/a&gt; approach (&lt;a href="https://perconadev.atlassian.net/browse/PT-1423" target="_blank" rel="noopener noreferrer"&gt;PT-1423&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Starting with version 3.7.1, if both &lt;code&gt;--chunk-size&lt;/code&gt; and &lt;code&gt;--chunk-time&lt;/code&gt; are specified, the initial chunk size is the one given by &lt;code&gt;--chunk-size&lt;/code&gt;, and it is then adjusted so that each subsequent query takes the specified amount of time (in seconds) to execute.&lt;/p&gt;
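&lt;p&gt;A hypothetical invocation using both options; the table name, alter statement, and values are placeholders:&lt;/p&gt;

```shell
# Start with 2000-row chunks, then let the tool adjust the chunk size
# so that each copy query takes about 0.5 seconds.
pt-online-schema-change --alter "ADD COLUMN note VARCHAR(64)" \
  --chunk-size 2000 --chunk-time 0.5 \
  D=mydb,t=big_table --execute
```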
&lt;h1 id="documentation-improvements"&gt;Documentation Improvements&lt;/h1&gt;
&lt;p&gt;While working on this release, we found undocumented features such as &lt;code&gt;--recursion-method=dsn&lt;/code&gt; support in &lt;code&gt;pt-table-sync&lt;/code&gt; (&lt;a href="https://perconadev.atlassian.net/browse/PT-2470" target="_blank" rel="noopener noreferrer"&gt;PT-2470&lt;/a&gt;), a broken man page for &lt;code&gt;pt-secure-collect&lt;/code&gt; and other tools written in Go (&lt;a href="https://perconadev.atlassian.net/browse/PT-1564" target="_blank" rel="noopener noreferrer"&gt;PT-1564&lt;/a&gt;), as well as minor documentation issues. All of them are now fixed.&lt;/p&gt;
&lt;h1 id="community-contributions"&gt;Community contributions&lt;/h1&gt;
&lt;p&gt;This release includes contributions from community members and Percona engineers who do not actively work on the project. We want to thank:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Iwo Panowicz for option &lt;code&gt;-skip-pod-summary&lt;/code&gt; in &lt;code&gt;pt-k8s-debug-collector&lt;/code&gt; (&lt;a href="https://perconadev.atlassian.net/browse/PT-2453" target="_blank" rel="noopener noreferrer"&gt;PT-2453&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Matthew Boehm for invisible indexes support in &lt;code&gt;pt-duplicate-key-checker&lt;/code&gt; (&lt;a href="https://github.com/percona/percona-toolkit/pull/996" target="_blank" rel="noopener noreferrer"&gt;PR-996&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Nilnandan Joshi for collecting &lt;code&gt;performance_schema.threads&lt;/code&gt; along with &lt;code&gt;information_schema.processlist&lt;/code&gt; in &lt;code&gt;pt-stalk&lt;/code&gt; (&lt;a href="https://perconadev.atlassian.net/browse/PT-1718" target="_blank" rel="noopener noreferrer"&gt;PT-1718&lt;/a&gt;) and fix for &lt;a href="https://perconadev.atlassian.net/browse/PT-2014" target="_blank" rel="noopener noreferrer"&gt;PT-2014 - pt-config-diff does not honor case insensitivity flag&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Paweł Kudzia for the updated documentation of pt-query-digest (&lt;a href="https://github.com/percona/percona-toolkit/pull/953" target="_blank" rel="noopener noreferrer"&gt;PR-953&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Maciej Dobrzanski for fixing &lt;a href="https://github.com/percona/percona-toolkit/pull/890" target="_blank" rel="noopener noreferrer"&gt;PR-890 - pt-config-diff: MySQL truncates run-time variable values longer than 1024 characters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marek Knappe for fixing &lt;a href="https://perconadev.atlassian.net/browse/PT-2418" target="_blank" rel="noopener noreferrer"&gt;PT-2418 - pt-online-schema-change 3.7.0 lost data when exe alter xxx rename column xxx&lt;/a&gt; and &lt;a href="https://perconadev.atlassian.net/browse/PT-2458" target="_blank" rel="noopener noreferrer"&gt;PT-2458 - remove-data-dir defaults to True&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yoann La Cancellera for his work on &lt;code&gt;pt-galera-log-explainer&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Nyele for restoring MariaDB support (&lt;a href="https://perconadev.atlassian.net/browse/PT-2465" target="_blank" rel="noopener noreferrer"&gt;PT-2465&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Taehyung Lim for fixing &lt;a href="https://perconadev.atlassian.net/browse/PT-2401" target="_blank" rel="noopener noreferrer"&gt;PT-2401 - pt-online-schema-change ’table does not exist’ on macos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Viktoras Agejevas for fixing &lt;a href="https://github.com/percona/percona-toolkit/pull/989" target="_blank" rel="noopener noreferrer"&gt;PR-989 - Fix script crashing with precedence error&lt;/a&gt; in &lt;code&gt;pt-online-schema-change&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Hartley McGuire for fixing &lt;a href="https://perconadev.atlassian.net/browse/PT-2015" target="_blank" rel="noopener noreferrer"&gt;PT-2015 - pt-config-diff does not sort variable flags&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Sveta Smirnova</author>
      <category>Toolkit</category>
      <category>MySQL</category>
      <category>Percona</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/12/toolkit-371_hu_976a247c7fe02282.jpg"/>
      <media:content url="https://percona.community/blog/2025/12/toolkit-371_hu_bcdcfd45e2eeedce.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL Replication Best Practices: How to Keep Your Replicas Sane (and Your Nights Quiet)</title>
      <link>https://percona.community/blog/2025/12/03/mysql-replication-best-practices-how-to-keep-your-replicas-sane-and-your-nights-quiet/</link>
      <guid>https://percona.community/blog/2025/12/03/mysql-replication-best-practices-how-to-keep-your-replicas-sane-and-your-nights-quiet/</guid>
      <pubDate>Wed, 03 Dec 2025 00:00:00 UTC</pubDate>
      <description>MySQL replication has been around forever, and yet… people still manage to set it up in ways that break at the worst possible moment. Even in 2025, you can get burned by tiny schema differences, missing primary keys, or one forgotten config flag. I’ve seen replicas drift so far out of sync they might as well live in a different universe.</description>
      <content:encoded>&lt;p&gt;MySQL replication has been around forever, and yet… people still manage to set it up in ways that break at the worst possible moment. Even in 2025, you can get burned by tiny schema differences, missing primary keys, or one forgotten config flag. I’ve seen replicas drift so far out of sync they might as well live in a different universe.&lt;/p&gt;
&lt;p&gt;This guide covers the practical best practices—the stuff real DBAs use every day to keep replication stable, predictable, and boring. (Boring is a compliment in database land.)&lt;/p&gt;
&lt;h3 id="always-use-gtids-yes-always"&gt;Always Use GTIDs. Yes, Always.&lt;/h3&gt;
&lt;p&gt;GTID-based replication is one of those features that people resist turning on, and then once they do, they never want to go back.&lt;/p&gt;
&lt;p&gt;Why GTIDs?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Failover becomes sane&lt;/li&gt;
&lt;li&gt;Reparenting replicas stops being a headache&lt;/li&gt;
&lt;li&gt;Missing transactions are easy to detect&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Your my.cnf should absolutely include:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gtid_mode=ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;enforce_gtid_consistency=ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log_replica_updates=ON&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once GTIDs are enabled, do not mix in old-style replication. That path leads straight to confusion.&lt;/p&gt;
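&lt;p&gt;A quick sanity check that a node is actually running with these settings (connection options omitted for brevity):&lt;/p&gt;

```shell
# Should return ON / ON / ON on every member of the topology.
mysql -e "SELECT @@gtid_mode, @@enforce_gtid_consistency, @@log_replica_updates"
```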
&lt;h3 id="use-row-based-replication-rbr"&gt;Use Row-Based Replication (RBR)&lt;/h3&gt;
&lt;p&gt;Statement-based replication is a nostalgia trip that nobody asked for. It breaks on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOW(), UUID(), and similar functions&lt;/li&gt;
&lt;li&gt;Floating point differences&lt;/li&gt;
&lt;li&gt;Collation mismatches&lt;/li&gt;
&lt;li&gt;Triggers behaving differently&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Just skip the pain and use:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;binlog_format=ROW&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;RBR is slightly more verbose, but 100× more predictable. When something breaks, it’s never because you chose ROW.&lt;/p&gt;
&lt;h3 id="every-table-needs-a-primary-key-no-exceptions"&gt;Every Table Needs a Primary Key. No Exceptions.&lt;/h3&gt;
&lt;p&gt;If you take nothing else from this guide, take this:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Replication without primary keys is a bad time.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Row-based replication needs a way to find the row that changed. Without a PK (or at least a UNIQUE index), the server has to use every column as a lookup. That’s slow, error-prone, and sometimes impossible.&lt;/p&gt;
&lt;p&gt;The usual symptoms:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Replication lag slowly creeping up&lt;/li&gt;
&lt;li&gt;Replica doing full table scans on updates&lt;/li&gt;
&lt;li&gt;Rows failing to apply&lt;/li&gt;
&lt;li&gt;Errors like:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error 1032: Can't find record in table&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Save yourself hours of debugging and just make sure every table has a primary key.&lt;/p&gt;
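&lt;p&gt;A common way to find offenders before they bite is to ask &lt;code&gt;information_schema&lt;/code&gt; which tables lack both a primary key and a unique constraint. This is a generic diagnostic query, not something built into MySQL; replace &lt;code&gt;mydb&lt;/code&gt; with your schema:&lt;/p&gt;

```shell
# List base tables in mydb with neither a PRIMARY KEY nor a UNIQUE
# constraint; these are the tables that slow down row-based replication.
mysql -e "
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;"
```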
&lt;h3 id="keep-the-schema-identical-everywhere"&gt;Keep the Schema Identical Everywhere&lt;/h3&gt;
&lt;p&gt;Replication assumes that everyone’s using the same schema. MySQL will happily keep going even if your schemas don’t match—and then quietly drift out of sync.&lt;/p&gt;
&lt;p&gt;Here are the practical ways to keep schemas aligned:&lt;/p&gt;
&lt;h4 id="approach-a--mysqldump-most-common"&gt;Approach A — mysqldump (most common)&lt;/h4&gt;
&lt;p&gt;Export schemas only:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysqldump --no-data mydb &gt; schema.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From both servers, then:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;diff source-schema.sql replica-schema.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="approach-b--information_schema-metadata"&gt;Approach B — information_schema metadata&lt;/h4&gt;
&lt;p&gt;This approach is great for automation:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT table_name, column_name, column_type, is_nullable, column_default
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM information_schema.columns
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE table_schema = 'mydb'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER BY table_name, ordinal_position;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Execute this query on each server and diff the results. Replace &lt;code&gt;mydb&lt;/code&gt; with the database whose schema metadata you want to examine.&lt;/p&gt;
&lt;h4 id="approach-c--pt-table-checksum-data-only"&gt;Approach C — pt-table-checksum (data only)&lt;/h4&gt;
&lt;p&gt;This doesn’t compare schemas — it catches data drift.
You should consider running it on a schedule such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;high-change OLTP DBs run weekly or even daily&lt;/li&gt;
&lt;li&gt;huge multi-TB DBs run quarterly&lt;/li&gt;
&lt;li&gt;some sensitive systems avoid running it during peak hours&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-table-checksum --replicate=percona.checksums&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can fix drift with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-table-sync --execute --replicate=percona.checksums&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Schema checks + data checks = safe replication.&lt;/p&gt;
&lt;h3 id="harden-your-binary-log-settings"&gt;Harden Your Binary Log Settings&lt;/h3&gt;
&lt;p&gt;Your binlogs are the backbone of replication. Treat them carefully.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sync_binlog=1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;binlog_row_image=FULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;binlog_expire_logs_seconds=604800 # 7 days&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;sync_binlog=1 is the big one—without it, a crash can corrupt binlogs or the GTID position, and that leads to a very bad day.&lt;/p&gt;
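&lt;p&gt;You can confirm the live values on a running server with a quick query:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT @@sync_binlog, @@binlog_row_image, @@binlog_expire_logs_seconds;&lt;/code&gt;&lt;/pre&gt;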
&lt;h3 id="protect-your-replicas-with-super_read_only"&gt;Protect Your Replicas with super_read_only&lt;/h3&gt;
&lt;p&gt;Never allow accidental writes to replicas. In your my.cnf, set:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;read_only=ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;super_read_only=ON&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;super_read_only&lt;/strong&gt; closes the loophole that even SUPER users could previously use to write to replicas.&lt;/p&gt;
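&lt;p&gt;A nice side effect: during a planned failover, a single statement on the newly promoted primary is enough, because disabling read_only implicitly disables super_read_only as well:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SET GLOBAL read_only = OFF;&lt;/code&gt;&lt;/pre&gt;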
&lt;h3 id="use-a-dedicated-replication-user"&gt;Use a Dedicated Replication User&lt;/h3&gt;
&lt;p&gt;Grant it only the minimal permissions it needs:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE USER 'repl'@'%' IDENTIFIED BY 'strong_password';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;GRANT REPLICATION REPLICA ON *.* TO 'repl'@'%';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This user should do exactly one thing: replicate.
Don’t reuse app users—you’re just begging for trouble.&lt;/p&gt;
&lt;h3 id="replication-lag-watch-it-like-a-hawk"&gt;Replication Lag: Watch It Like a Hawk&lt;/h3&gt;
&lt;p&gt;Seconds_Behind_Source lies more often than you’d expect. It’s okay for a quick glance, but don’t rely on it for alerting.&lt;/p&gt;
&lt;p&gt;Better options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Performance Schema: replication_applier_status_by_worker&lt;/li&gt;
&lt;li&gt;Percona Monitoring and Management (PMM)&lt;/li&gt;
&lt;li&gt;Custom heartbeat tables&lt;/li&gt;
&lt;li&gt;pt-heartbeat&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Lag is one of the biggest causes of replication outages and usually the first sign that something is wrong. Monitor it continuously so you catch it early.&lt;/p&gt;
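&lt;p&gt;As a sketch of the pt-heartbeat approach: it updates a timestamp row on the primary and measures that row’s age on the replica (the percona database here is just an example location):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# on the primary
pt-heartbeat --update -D percona --daemonize
# on the replica
pt-heartbeat --monitor -D percona&lt;/code&gt;&lt;/pre&gt;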
&lt;h3 id="use-parallel-replication-but-dont-overdo-it"&gt;Use Parallel Replication (But Don’t Overdo It)&lt;/h3&gt;
&lt;p&gt;If your primary has multiple writers or many concurrent transactions, enable parallel workers in your my.cnf:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;replica_parallel_type=LOGICAL_CLOCK
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;replica_parallel_workers=4&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;4–8 workers is a sweet spot for most systems. More workers ≠ more speed; after a point it just increases memory footprint without real benefit.&lt;/p&gt;
&lt;p&gt;But when it helps, it really helps—like cutting lag by 80–90%.&lt;/p&gt;
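&lt;p&gt;Two companion settings are worth knowing about with LOGICAL_CLOCK (a sketch; check the defaults for your MySQL version, as some of these changed across 8.0 releases). replica_preserve_commit_order keeps commits in their original order, and WRITESET dependency tracking on the source lets more transactions apply in parallel:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# on the replica
replica_preserve_commit_order=ON
# on the source
binlog_transaction_dependency_tracking=WRITESET&lt;/code&gt;&lt;/pre&gt;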
&lt;h3 id="use-ssl-anywhere-outside-the-lan"&gt;Use SSL Anywhere Outside the LAN&lt;/h3&gt;
&lt;p&gt;Replication traffic isn’t something you want exposed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source_ssl=1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source_ssl_ca=/path/ca.pem&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Earlier versions used the master_ssl_* variables, but the idea is the same: encrypt the connection when it leaves your trusted network.&lt;/p&gt;
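&lt;p&gt;On MySQL 8.0.23 and later, these are set as options of the CHANGE REPLICATION SOURCE TO statement on the replica, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CHANGE REPLICATION SOURCE TO
  SOURCE_SSL = 1,
  SOURCE_SSL_CA = '/path/ca.pem';&lt;/code&gt;&lt;/pre&gt;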
&lt;h2 id="final-thoughts"&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;MySQL replication can be rock-solid, but only if you follow a handful of rules that experienced DBAs know by heart:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use GTIDs&lt;/li&gt;
&lt;li&gt;Use RBR&lt;/li&gt;
&lt;li&gt;Always have primary keys&lt;/li&gt;
&lt;li&gt;Keep schemas aligned&lt;/li&gt;
&lt;li&gt;Check for data drift&lt;/li&gt;
&lt;li&gt;Harden binlog settings&lt;/li&gt;
&lt;li&gt;Protect replicas from accidental writes&lt;/li&gt;
&lt;li&gt;Monitor lag properly&lt;/li&gt;
&lt;li&gt;Use parallel workers when appropriate&lt;/li&gt;
&lt;li&gt;Encrypt connections over untrusted networks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Follow these, and your replicas will stay healthy, consistent, and (mostly) invisible—which is exactly how you want them.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Percona</category>
      <category>replication</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>toolkit</category>
      <media:thumbnail url="https://percona.community/blog/2025/12/mysql-replication-best-practice_hu_fdef2ac6195fc52b.jpg"/>
      <media:content url="https://percona.community/blog/2025/12/mysql-replication-best-practice_hu_401bb1385cddcf3a.jpg" medium="image"/>
    </item>
    <item>
      <title>Community Recap: Percona.Connect London 2025, Building the Future of Open Source Together</title>
      <link>https://percona.community/blog/2025/12/02/community-recap-percona.connect-london-2025-building-the-future-of-open-source-together/</link>
      <guid>https://percona.community/blog/2025/12/02/community-recap-percona.connect-london-2025-building-the-future-of-open-source-together/</guid>
      <pubDate>Tue, 02 Dec 2025 00:00:00 UTC</pubDate>
      <description>Percona.Connect London 2025 brought the open-source database community together for a half-day of learning and collaboration. The event focused on providing practical, technical insights for DBAs, DevOps engineers, and developers. The main takeaway was clear: Stability, Openness, and Automation are essential for modern, large-scale data infrastructure.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://connect.percona.com/london/" target="_blank" rel="noopener noreferrer"&gt;Percona.Connect London 2025&lt;/a&gt; brought the open-source database community together for a half-day of learning and collaboration. The event focused on providing practical, technical insights for DBAs, DevOps engineers, and developers. The main takeaway was clear: Stability, Openness, and Automation are essential for modern, large-scale data infrastructure.&lt;/p&gt;
&lt;h2 id="top-discussions--key-takeaways"&gt;Top Discussions &amp; Key Takeaways&lt;/h2&gt;
&lt;h2 id="1-the-rise-of-valkey-a-truly-open-caching-alternative"&gt;1. The Rise of Valkey: A Truly Open Caching Alternative&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Martin Visser&lt;/strong&gt;, Valkey Technical Lead, explained that, after the changes to the Redis license, the community needs a trusted, open-source replacement. Valkey was highlighted as the leading solution.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/valkey-io/valkey" target="_blank" rel="noopener noreferrer"&gt;Valkey&lt;/a&gt; was started by former Redis contributors quickly after Redis removed its open source license in 2024.&lt;/li&gt;
&lt;li&gt;It is a true open-source project governed under the Linux Foundation.&lt;/li&gt;
&lt;li&gt;It offers enhancements like better memory efficiency, performance, and scalability.&lt;/li&gt;
&lt;li&gt;In a recent Percona survey of 200 DBAs, &lt;strong&gt;Valkey was the most preferred alternative to Redis&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/12/img1.png" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="2-running-postgresql-in-a-cloud-native-context"&gt;2. Running PostgreSQL in a Cloud Native context&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Takis Stathopoulos&lt;/strong&gt;, Enterprise Architect, presented on running PostgreSQL in a Cloud Native context, explaining how Kubernetes Operators simplify complex deployments.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cloud Native vs. Cloud First&lt;/strong&gt;: Cloud Native (Kubernetes) offers Portability and No vendor lock-in, allowing you to run the database consistently across different clouds and on-premise infrastructure.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Percona Operator for PostgreSQL&lt;/strong&gt;: This tool automates crucial operations like setting up high availability (using Patroni), backups (using pgBackRest), and scaling.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When to use Cloud Native&lt;/strong&gt;: It’s ideal for large, microservice-based applications and teams prioritizing portability and avoiding vendor lock-in.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/12/img2.png" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/12/img3.png" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="3-native-postgresql-tde-is-here-securing-data-simply"&gt;3. Native PostgreSQL TDE is Here: Securing Data Simply&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Alastair Turner&lt;/strong&gt;, Postgres Community Advocate, introduced the new Native Transparent Data Encryption (TDE) for PostgreSQL.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/12/extra_hu_4cb02671c333d4f0.png 480w, https://percona.community/blog/2025/12/extra_hu_f013f2a78c02ab84.png 768w, https://percona.community/blog/2025/12/extra_hu_b5bf0b10e32f0c99.png 1400w"
src="https://percona.community/blog/2025/12/extra.png" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/12/img4_hu_d815369516251e84.png 480w, https://percona.community/blog/2025/12/img4_hu_a0219eb57316ebe2.png 768w, https://percona.community/blog/2025/12/img4_hu_d7ca9ec236e7f32b.png 1400w"
src="https://percona.community/blog/2025/12/img4.png" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="4-the-future-of-mysql-vector-search--binlog-server"&gt;4. The Future of MySQL: Vector Search &amp; Binlog Server&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Dennis Kittrell&lt;/strong&gt;, MySQL Product Manager, discussed two key features planned for MySQL that address major operational and feature challenges.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MySQL Binlog Server MVP&lt;/strong&gt;: This component aims to solve the problem of quick disaster recovery by acting as a stable, reliable replication source. It enables Precise Point-in-Time Recovery (PITR) using simple time or GTID coordinates.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Native Vector Support MVP&lt;/strong&gt;: This feature allows users to eliminate the complexity of using a separate vector database. You can store, index, and search vector embeddings directly in MySQL, allowing you to combine vector searches with standard business logic in a single, transactional query.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="our-community-focus"&gt;Our Community Focus&lt;/h3&gt;
&lt;p&gt;A common theme from the use cases was that while open source adoption is high, operational teams often lack the proper support and visibility.&lt;/p&gt;
&lt;p&gt;Percona’s goal is to support the community by providing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Stability when under heavy load or during maintenance.&lt;/li&gt;
&lt;li&gt;Faster Troubleshooting with better monitoring and observability.&lt;/li&gt;
&lt;li&gt;Safer Deployments through expert configuration and security support.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/12/img5_hu_6ae561b777bf29e9.jpeg 480w, https://percona.community/blog/2025/12/img5_hu_5659d491a5fc0e02.jpeg 768w, https://percona.community/blog/2025/12/img5_hu_56250e8f6d310114.jpeg 1400w"
src="https://percona.community/blog/2025/12/img5.jpeg" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Thank you to everyone who joined us in London for a dynamic event. We hope the insights gained will help you with your open source database deployments.&lt;/p&gt;
&lt;p&gt;The conversations continue in the Percona Community! You can reach out directly to the speakers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Martin Visser (Valkey Technical Lead) &lt;a href="https://www.linkedin.com/in/martinrvisser/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dennis Kittrell (MySQL Product Manager) &lt;a href="https://www.linkedin.com/in/kittrell/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Alastair Turner (Postgres Community Advocate) &lt;a href="https://www.linkedin.com/in/decodableminion/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Takis Stathopoulos (Enterprise Architect) &lt;a href="https://www.linkedin.com/in/pgstathopoulos/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andre Pons (Enterprise Sales Manager) &lt;a href="https://www.linkedin.com/in/andre-pons-8b4a1013/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/12/img7_hu_76f0dbd195c0f33e.jpeg 480w, https://percona.community/blog/2025/12/img7_hu_59b77cc8d2bc6bca.jpeg 768w, https://percona.community/blog/2025/12/img7_hu_fc7b7a2d735e605a.jpeg 1400w"
src="https://percona.community/blog/2025/12/img7.jpeg" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/12/img6_hu_6a3e1a691ef057b0.jpeg 480w, https://percona.community/blog/2025/12/img6_hu_25c14f5bbd6d636d.jpeg 768w, https://percona.community/blog/2025/12/img6_hu_f1c55e1c066f602c.jpeg 1400w"
src="https://percona.community/blog/2025/12/img6.jpeg" alt="Percona Connect London 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Join the Percona Community Conversation!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://forum.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Forum&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/company/percona/" target="_blank" rel="noopener noreferrer"&gt;Percona on LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Community</category>
      <category>Event Recap</category>
      <category>Open Source</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <category>Valkey</category>
      <media:thumbnail url="https://percona.community/blog/2025/12/intro_hu_7df69ab6ed8fa3fb.jpeg"/>
      <media:content url="https://percona.community/blog/2025/12/intro_hu_7f699ac013017a33.jpeg" medium="image"/>
    </item>
    <item>
      <title>The Right Tool for the Job</title>
      <link>https://percona.community/blog/2025/11/24/the-right-tool-for-the-job/</link>
      <guid>https://percona.community/blog/2025/11/24/the-right-tool-for-the-job/</guid>
      <pubDate>Mon, 24 Nov 2025 00:00:00 UTC</pubDate>
      <description>When I first got into woodworking, my mentor shared a piece of advice that has stuck with me ever since: “Use the right tool for the job.” You wouldn’t reach for a belt sander to flatten a board when a planer can accomplish the task faster, cleaner, and with far better results.</description>
      <content:encoded>&lt;p&gt;When I first got into woodworking, my mentor shared a piece of advice that has stuck with me ever since: “Use the right tool for the job.” You wouldn’t reach for a belt sander to flatten a board when a planer can accomplish the task faster, cleaner, and with far better results.&lt;/p&gt;
&lt;p&gt;The same principle applies in the world of database engineering. When working with MySQL or Percona Server, choosing the correct tool can be the difference between efficient diagnostics and unnecessary downtime.&lt;/p&gt;
&lt;p&gt;In this post, I’ll highlight several of the most practical and commonly used utilities from the Percona Toolkit. While the toolkit includes many powerful commands, I’ll focus on the ones that provide the most value in day-to-day operations, troubleshooting, and gathering actionable details for support cases.&lt;/p&gt;
&lt;h2 id="pt-summary"&gt;PT Summary&lt;/h2&gt;
&lt;p&gt;A Percona Toolkit utility that provides a concise, high-level overview of a system’s hardware, OS configuration, and performance-related metrics. It’s designed to quickly capture the essential details needed for diagnostics or support cases: CPU, memory, disk layout, kernel parameters, and more, all in a single, easy-to-read report.&lt;/p&gt;
&lt;h3 id="example"&gt;Example&lt;/h3&gt;
&lt;p&gt;Run pt-summary with no arguments to generate a full system summary. When possible, run it with sudo to allow the tool to collect additional details that require elevated privileges:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo pt-summary
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Percona Toolkit System Summary Report ######################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Date | 2025-11-24 17:15:19 UTC (local TZ: EST -0500)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Hostname | pi16gb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Uptime | 41 days, 2:27, 4 users, load average: 0.00, 0.00, 0.00
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Platform | Linux
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Release | Debian GNU/Linux 12 (bookworm) (bookworm)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Kernel | 6.12.47+rpt-rpi-2712
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Architecture | CPU = 32-bit, OS = 64-bit
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Threading | NPTL 2.36
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SELinux | No SELinux detected
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Virtualized | No virtualization detected
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Processor ##################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Processors | physical = 4, cores = 0, virtual = 4, hyperthreading = no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Speeds |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Models |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Caches |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Designation Configuration Size Associativity
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ========================= ============================== ======== ======================
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Memory #####################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Total | 15.8G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Free | 675.0M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Used | physical = 5.3G, swap allocated = 512.0M, swap used = 0.0, virtual = 5.3G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Shared | 44.7M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Buffers | 10.6G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Caches | 10.5G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Dirty | 128 kB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; UsedRSS | 5.1G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Swappiness | 60
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; DirtyPolicy | 20, 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; DirtyStatus | 0, 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Locator Size Speed Form Factor Type Type Detail
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ========= ======== ================= ============= ============= ===========
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Mounted Filesystems ########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Filesystem Size Used Type Opts Mountpoint
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; /dev/nvme0n1p1 510M 14%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; /dev/nvme0n1p2 458G 5%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; /dev/sda1 117G 16%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Disk Schedulers And Queue Size #############################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; nvme0n1 | [none] 255
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sda | [mq-deadline] 60
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Disk Partitioning ##########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Kernel Inode State #########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;dentry-state | 107782 98346 45 0 32304 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; file-nr | 3680 0 9223372036854775807
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; inode-nr | 99614 20818
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# LVM Volumes ################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Unable to collect information
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# LVM Volume Groups ##########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Unable to collect information
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# RAID Controller ############################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Controller | No RAID controller detected
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Network Config #############################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Controller | 00.0 Ethernet controller
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; FIN Timeout | 60
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Port Range | 60999
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Interface Statistics #######################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; interface rx_bytes rx_packets rx_errors tx_bytes tx_packets tx_errors
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ========= ========= ========== ========== ========== ========== ==========
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; lo 6000000000 175000 0 6000000000 175000 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; eth0 0 0 0 0 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; wlan0 5000000000 30000000 0 15000000000 22500000 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Network Devices ############################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Device Speed Duplex
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ========= ========= =========
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; eth0 Unknown! Unknown!
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Network Connections ########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Connections from remote IP addresses
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 192.168.1.91 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 192.168.1.251 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2603 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Connections to local IP addresses
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 192.168.1.145 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2603 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Connections to top 10 local ports
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 3306 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 6011:ef0:7260:::22 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; States of connections
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ESTABLISHED 3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; LISTEN 6
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; TIME_WAIT 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Top Processes ##############################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 95842 root 20 0 0 0 0 I 6.7 0.0 0:00.08 kworker+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 root 20 0 169520 13088 8672 S 0.0 0.1 0:18.56 systemd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2 root 20 0 0 0 0 S 0.0 0.0 0:01.70 kthreadd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 pool_wo+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 8 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Notable Processes ##########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PID OOM COMMAND
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ? ? sshd doesn't appear to be running
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Simplified and fuzzy rounded vmstat (wait please) ##########
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; procs ---swap-- -----io---- ---system---- --------cpu--------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; r b si so bi bo ir cs us sy il wa st
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2 0 0 0 1 6 100 150 0 0 100 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 0 0 0 0 0 1750 3000 1 3 97 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 0 0 0 0 0 250 400 0 0 100 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 0 0 0 0 0 300 450 0 0 100 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 0 0 0 0 0 300 450 0 0 100 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Memory management ##########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# The End ####################################################&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Redirect output to a file.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-summary &gt; server-summary.txt&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="pt-mysql-summary"&gt;PT MySQL Summary&lt;/h2&gt;
&lt;p&gt;A Percona Toolkit utility that collects and displays a concise overview of a MySQL or Percona Server instance, including key configuration settings, performance metrics, storage engine details, replication status, buffer pool usage, and important global variables. It provides a fast, structured snapshot of the database environment, making it ideal for troubleshooting, tuning, and preparing information for support teams.&lt;/p&gt;
&lt;h3 id="example-1"&gt;Example&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-mysql-summary
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Percona Toolkit MySQL Summary Report #######################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; System time | 2025-11-24 17:45:54 UTC (local TZ: EST -0500)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Instances ##################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Port Data Directory Nice OOM Socket
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ===== ========================== ==== === ======
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 3306 /data0/mysql/data/ 0 0 /usr/local/mysql/mysql.sock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# MySQL Executable ###########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Path to executable | /usr/local/mysql/bin/mysqld
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Has symbols | Yes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Report On Port 3306 ########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; User | wayne@localhost
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Time | 2025-11-24 12:45:54 (EST)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Hostname | pi16gb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Version | 8.4.6-6 Source distribution
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Built On | Linux aarch64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Started | 2025-10-14 09:48 (up 41+02:57:35)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Databases | 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Datadir | /data0/mysql/data/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Processes | 2 connected, 2 running
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replication | Is not a replica, has 1 replicas connected
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Pidfile | /usr/local/mysql/mysqld.pid (exists)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Processlist ################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Command COUNT(*) Working SUM(Time) MAX(Time)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ------------------------------ -------- ------- --------- ---------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Binlog Dump GTID 1 1 3000000 3000000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Query 1 1 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; User COUNT(*) Working SUM(Time) MAX(Time)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ------------------------------ -------- ------- --------- ---------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replication 1 1 3000000 3000000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; wayne 1 1 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Host COUNT(*) Working SUM(Time) MAX(Time)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ------------------------------ -------- ------- --------- ---------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 192.168.1.251 1 1 3000000 3000000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; localhost 1 1 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; db COUNT(*) Working SUM(Time) MAX(Time)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ------------------------------ -------- ------- --------- ---------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; NULL 2 2 3000000 3000000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; State COUNT(*) Working SUM(Time) MAX(Time)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ------------------------------ -------- ------- --------- ---------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; init 1 1 0 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Source has sent all binlog to 1 1 3000000 3000000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Status Counters (Wait 10 Seconds) ##########################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Variable Per day Per second 11 secs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Aborted_clients 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Binlog_snapshot_position 350000 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Binlog_cache_use 6000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Bytes_received 20000000 225 600
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Bytes_sent 2250000000 25000 4000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[...]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Table_open_cache_misses 400
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Table_open_cache_overflows 225
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Threads_created 9
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Uptime 90000 1 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Table cache ################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Size | 1000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Usage | 100%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Key Percona Server features ################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Table &amp; Index Stats | Disabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Multiple I/O Threads | Enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Corruption Resilient | Enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Durable Replication | Not Supported
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Import InnoDB Tables | Not Supported
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Fast Server Restarts | Not Supported
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Enhanced Logging | Disabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replica Perf Logging | Disabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Response Time Hist. | Not Supported
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Smooth Flushing | Not Supported
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; HandlerSocket NoSQL | Not Supported
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Fast Hash UDFs | Unknown
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Percona XtraDB Cluster #####################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Plugins ####################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; InnoDB compression | ACTIVE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Schema #####################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Specify --databases or --all-databases to dump and summarize schemas
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Noteworthy Technologies ####################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SSL | Yes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Explicit LOCK TABLES | No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Delayed Insert | No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; XA Transactions | No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; NDB Cluster | No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Prepared Statements | Yes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Prepared statement count | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# InnoDB #####################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Version | 8.4.6-6
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Buffer Pool Size | 8.0G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Buffer Pool Fill | 30%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Buffer Pool Dirty | 0%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File Per Table | ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Page Size | 16k
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Log File Size | 2 * 48.0M = 96.0M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Log Buffer Size | 64M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Flush Method | O_DIRECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Flush Log At Commit | 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; XA Support |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Checksums |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Doublewrite | ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; R/W I/O Threads | 4 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; I/O Capacity | 200
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Thread Concurrency | 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Concurrency Tickets | 5000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Commit Concurrency | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Txn Isolation Level | REPEATABLE-READ
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Adaptive Flushing | ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Adaptive Checkpoint |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Checkpoint Age | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; InnoDB Queue | 0 queries inside InnoDB, 0 queries in queue
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Oldest Transaction | 0 Seconds
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; History List Len | 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Read Views | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Undo Log Entries | 0 transactions, 0 total undo, 0 max undo
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Pending I/O Reads | 0 buf pool reads, 0 normal AIO, 0 ibuf AIO, 0 preads
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Pending I/O Writes | 0 buf pool (0 LRU, 0 flush list, 0 page); 0 AIO, 0 sync, 0 log IO (0 log, 0 chkp); 1 pwrites
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Pending I/O Flushes | 0 buf pool, 0 log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Transaction States | 3xnot started
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# MyISAM #####################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Key Cache | 8.0M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Pct Used | 20%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Unflushed | 0%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Security ###################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Users | 8 users, 0 anon, 0 w/o pw, 7 old pw
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Old Passwords |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Encryption #################################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;No keyring plugins found
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Binary Logging #############################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Binlogs | 3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Zero-Sized | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Total Size | 437.9M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; binlog_format | ROW
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; expire_logs_days |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sync_binlog | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server_id | 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; binlog_do_db |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; binlog_ignore_db |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Noteworthy Variables #######################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Auto-Inc Incr/Offset | 1/1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; default_storage_engine | InnoDB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; flush_time | 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; init_connect |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; init_file |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sql_mode | ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; join_buffer_size | 256k
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sort_buffer_size | 256k
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; read_buffer_size | 128k
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; read_rnd_buffer_size | 256k
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; bulk_insert_buffer | 0.00
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; max_heap_table_size | 16M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tmp_table_size | 16M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; max_allowed_packet | 64M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; thread_stack | 1M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; log |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; log_error | /var/log/mysql/mysqld.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; log_warnings |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; log_slow_queries |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log_queries_not_using_indexes | OFF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; log_replica_updates | ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Configuration File #########################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Config File | /etc/my.cnf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;character-set-server = utf8mb4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;authentication_policy = '*'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;port = 3306
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;socket = /usr/local/mysql/mysql.sock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pid-file = /usr/local/mysql/mysqld.pid
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;basedir = /usr/local/mysql/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;datadir = /data0/mysql/data/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tmpdir = /data0/mysql/tmp/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;general_log_file = /var/log/mysql/mysql-general.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log-error = /var/log/mysql/mysqld.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log_file = /var/log/mysql/slow_query.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[...]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_data_home_dir = /data0/mysql/data/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_group_home_dir = /data0/mysql/data/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_temp_data_file_path = ../tmp/ibtmp1:12M:autoextend:max:8G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_size = 8G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb-redo-log-capacity = 2G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_log_at_trx_commit = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_lock_wait_timeout = 50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_method = O_DIRECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_file_per_table = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_io_capacity = 200
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_instances = 8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_thread_concurrency = 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Memory management library ##################################
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;jemalloc is not enabled in mysql config for process with id 788
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# The End ####################################################&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Redirect output to a file.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-mysql-summary &gt; percona-server.txt&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Capture both pt-summary and pt-mysql-summary into a single file.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-summary &gt; percona-server-summary.txt
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-mysql-summary &gt;&gt; percona-server-summary.txt&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="pt-online-schema-change"&gt;PT Online Schema Change&lt;/h2&gt;
&lt;p&gt;A Percona Toolkit utility that performs online ALTER TABLE operations by creating a shadow copy of the table, applying the schema change to that copy, and keeping it in sync with the original using triggers until the final swap. This workflow minimizes locking and reduces downtime, allowing large production tables to be altered safely with minimal impact on applications. Note, however, that long-running queries or transactions holding metadata locks (MDL) on the table will still block the final swap, potentially delaying completion of the schema change.&lt;/p&gt;
&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;
&lt;h4 id="adding-a-new-column"&gt;Adding a New Column&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "ADD COLUMN status TINYINT NOT NULL DEFAULT 0" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=mydb,t=orders \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This safely introduces a new column to a busy table without blocking reads or writes. The tool handles the copy, synchronization, and final table swap automatically.&lt;/p&gt;
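&lt;p&gt;Because long-running sessions holding metadata locks can stall the final table swap, it is worth checking for them before starting. The query below is an illustrative sketch, not part of the tool: it assumes performance_schema is enabled with the metadata-lock instrument active, and it reuses the mydb/orders names from the example above.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-mdl-check" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-mdl-check"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- List sessions holding or waiting on metadata locks for the target table
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM performance_schema.metadata_locks
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE object_schema = 'mydb' AND object_name = 'orders';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If this returns rows with GRANTED locks owned by long-running sessions, consider letting those transactions finish (or terminating them) before the tool reaches the swap step.&lt;/p&gt;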
&lt;h4 id="modifying-a-column-type"&gt;Modifying a Column Type&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "MODIFY COLUMN price DECIMAL(10,2)" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=shop,t=products \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Changing column definitions—especially on large datasets—can be disruptive using standard SQL. With pt-osc, the migration happens online, keeping applications responsive throughout the operation.&lt;/p&gt;
&lt;h4 id="dropping-an-unused-column"&gt;Dropping an Unused Column&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "DROP COLUMN old_flag" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=analytics,t=events \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Column drops can require a full table rebuild, making them great candidates for pt-osc. This example removes a legacy column while avoiding table locks.&lt;/p&gt;
&lt;h4 id="adding-an-index"&gt;Adding an Index&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "ADD INDEX idx_user_id (user_id)" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=app,t=logins \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Index creation is another expensive operation for large tables. Here, pt-osc allows the index to be added online, improving performance without interrupting the application.&lt;/p&gt;
&lt;h4 id="changing-a-primary-key"&gt;Changing a Primary Key&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "DROP PRIMARY KEY, ADD PRIMARY KEY(id, created_at)" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=orders,t=order_items \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Primary key modifications usually require a full table rewrite. pt-osc makes this process safer and easier on production systems by performing the change on a temporary shadow table.&lt;/p&gt;
&lt;h4 id="performing-a-dry-run"&gt;Performing a Dry Run&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "ADD COLUMN test INT" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=mydb,t=mytable \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --dry-run&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A dry run allows you to validate the plan and review the process without making any actual changes—a critical safeguard when preparing for production schema work.&lt;/p&gt;
&lt;h4 id="printing-sql-changes-before-execution"&gt;Printing SQL Changes Before Execution&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter "ADD COLUMN updated_at TIMESTAMP NULL" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; D=crm,t=customers \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --print \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Using --print provides transparency into the SQL operations the tool will perform. This is particularly useful during code reviews or change-control processes.&lt;/p&gt;
&lt;h2 id="pt-show-grants"&gt;PT Show Grants&lt;/h2&gt;
&lt;p&gt;A Percona Toolkit utility that extracts MySQL user accounts and privileges and outputs them as clean, executable CREATE USER and GRANT statements. It normalizes and orders the privileges for readability, making it valuable for auditing security, documenting access, migrating users between servers, or preparing accurate privilege information for support and compliance purposes.&lt;/p&gt;
&lt;h3 id="examples-1"&gt;Examples&lt;/h3&gt;
&lt;h4 id="dump-all-grants-for-all-users"&gt;Dump All Grants for All Users&lt;/h4&gt;
&lt;p&gt;The simplest and most common use case is generating a complete privilege snapshot:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-show-grants&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This returns normalized CREATE USER and GRANT statements for every account in the instance. It’s ideal for audits, environment comparisons, and creating human-readable privilege reports.&lt;/p&gt;
&lt;h4 id="show-grants-for-a-specific-user"&gt;Show Grants for a Specific User&lt;/h4&gt;
&lt;p&gt;If you want to inspect privileges for a single account, you can filter by user/host:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-show-grants --accounts='user@localhost'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This makes privilege debugging and user-level audits quick and targeted.&lt;/p&gt;
&lt;h4 id="export-all-grants-to-a-file"&gt;Export All Grants to a File&lt;/h4&gt;
&lt;p&gt;To create a reusable backup of every user account:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-show-grants &gt; grants.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The resulting file is a set of CREATE USER and GRANT statements that can be restored simply by executing:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql &lt; grants.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This is an excellent practice before server upgrades, user cleanup, or major permission changes.&lt;/p&gt;
&lt;h4 id="show-grants-for-multiple-accounts"&gt;Show Grants for Multiple Accounts&lt;/h4&gt;
&lt;p&gt;You can provide a comma-separated list of accounts to extract only what you need:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-show-grants --accounts='app@%,reporting@localhost,backup@localhost'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This is ideal for teams that manage groups of service accounts across environments.&lt;/p&gt;
&lt;h4 id="ignore-specific-system-accounts"&gt;Ignore Specific System Accounts&lt;/h4&gt;
&lt;p&gt;For cleanup scripts or custom inventory reports, skip built-in MySQL accounts:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-show-grants --ignore='mysql.sys@localhost,mysql.infoschema@localhost'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This focuses output on only the accounts relevant to your application.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;This post highlights the importance of using the right tool for the job—both in woodworking and in database engineering. For MySQL and Percona Server environments, the Percona Toolkit offers a set of powerful utilities that simplify diagnostics, troubleshooting, schema changes, and security audits.&lt;/p&gt;
&lt;p&gt;It introduces four key tools:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;pt-summary – Generates a high-level report of system hardware, OS settings, filesystems, networking, and performance metrics. Useful for support cases and quick environment overviews.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;pt-mysql-summary – Produces a structured snapshot of a MySQL instance, including configuration, performance counters, replication status, storage engine details, and important variables. Ideal for tuning and issue analysis.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;pt-online-schema-change – Enables online ALTER TABLE operations by copying and syncing the table in the background, minimizing downtime. Several examples show how to add, drop, or modify columns and indexes safely.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;pt-show-grants – Extracts all MySQL users and privileges into clean, reproducible CREATE USER and GRANT statements. Helpful for audits, migrations, backups, and security reviews.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Percona</category>
      <category>toolkit</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>PXC</category>
      <media:thumbnail url="https://percona.community/blog/2025/11/vintage-toolbox-open_hu_84f49383f15a7e1c.jpg"/>
      <media:content url="https://percona.community/blog/2025/11/vintage-toolbox-open_hu_f74717849a7b76fb.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Operator for MySQL Is Now GA, More MySQL Options for the Community on Kubernetes</title>
      <link>https://percona.community/blog/2025/11/19/percona-operator-for-mysql-is-now-ga-more-mysql-options-for-the-community-on-kubernetes/</link>
      <guid>https://percona.community/blog/2025/11/19/percona-operator-for-mysql-is-now-ga-more-mysql-options-for-the-community-on-kubernetes/</guid>
      <pubDate>Wed, 19 Nov 2025 11:00:00 UTC</pubDate>
      <description>We’re excited to share that the new Percona Operator for MySQL (based on Percona Server for MySQL) is officially in General Availability (GA)!</description>
      <content:encoded>&lt;p&gt;We’re excited to share that the new &lt;strong&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mysql/ps/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MySQL (based on Percona Server for MySQL)&lt;/a&gt;&lt;/strong&gt; is officially in General Availability (GA)!&lt;/p&gt;
&lt;p&gt;This release introduces native &lt;strong&gt;MySQL Group Replication&lt;/strong&gt; support for &lt;strong&gt;Kubernetes&lt;/strong&gt;, providing our community with another open-source option for running reliable, consistent MySQL clusters at scale.&lt;/p&gt;
&lt;p&gt;This is about more choices for the community. Each MySQL replication technology addresses different real-world needs, and now you can choose the one that best fits your workloads.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/11/introm.jpeg" alt="MySQL Operator for MySQL Intro" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="what-this-means-for-the-community"&gt;What This Means for the Community&lt;/h2&gt;
&lt;p&gt;With this release, Percona now supports two &lt;strong&gt;fully open-source MySQL Operators&lt;/strong&gt;:&lt;/p&gt;
&lt;h3 id="1-percona-operator-for-mysql-percona-server-for-mysql-new-and-ga"&gt;1. &lt;a href="https://docs.percona.com/percona-operator-for-mysql/ps/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MySQL (Percona Server for MySQL)&lt;/a&gt;, New and GA&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Group Replication (synchronous)&lt;/li&gt;
&lt;li&gt;Asynchronous replication (Technical Preview)&lt;/li&gt;
&lt;li&gt;Native MySQL experience&lt;/li&gt;
&lt;li&gt;Auto-failover&lt;/li&gt;
&lt;li&gt;Kubernetes-native design&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="2-percona-xtradb-cluster-operator-pxc"&gt;2. &lt;a href="https://docs.percona.com/percona-operator-for-mysql/pxc/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster Operator (PXC)&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Galera-based synchronous replication&lt;/li&gt;
&lt;li&gt;Strong high availability&lt;/li&gt;
&lt;li&gt;Auto-failover&lt;/li&gt;
&lt;li&gt;Battle-tested for mission-critical workloads&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;These Operators complement each other; they are not replacements&lt;/strong&gt;. They give users the freedom to choose the right replication model for their business and technical priorities.&lt;/p&gt;
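&lt;p&gt;As a sketch of what that choice looks like in practice, the new Operator selects the replication model through a &lt;code&gt;clusterType&lt;/code&gt; field in its custom resource. The fragment below is a minimal illustration assuming the CR schema from the Operator&amp;rsquo;s documentation and example &lt;code&gt;cr.yaml&lt;/code&gt;; it is not a complete, deployable manifest, and the cluster name is made up:&lt;/p&gt;

```yaml
# Minimal, illustrative PerconaServerMySQL resource (not a full cr.yaml).
# "my-cluster" is a placeholder name.
apiVersion: ps.percona.com/v1alpha1
kind: PerconaServerMySQL
metadata:
  name: my-cluster
spec:
  mysql:
    clusterType: group-replication   # or "async" (Technical Preview)
    size: 3                          # three members for quorum
```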
&lt;p&gt;&lt;em&gt;This GA release is a step in that direction, and we will continue publishing technical blog posts to explain when to use each Operator, how Group Replication works, and how this all fits into real-world Kubernetes environments&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/11/two-operators_hu_e3d8f6c73604ca02.png 480w, https://percona.community/blog/2025/11/two-operators_hu_d68c37cdeea667e6.png 768w, https://percona.community/blog/2025/11/two-operators_hu_300c39d76a6bdf0a.png 1400w"
src="https://percona.community/blog/2025/11/two-operators.png" alt="MySQL Operator for MySQL Intro Chart" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="call-for-community-testing-and-feedback"&gt;Call for Community Testing and Feedback&lt;/h2&gt;
&lt;p&gt;Asynchronous replication is now available in Technical Preview, and we invite you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Test it in your clusters&lt;/li&gt;
&lt;li&gt;Share your feedback&lt;/li&gt;
&lt;li&gt;Open GitHub issues&lt;/li&gt;
&lt;li&gt;Contribute docs or examples&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Your feedback will guide the next features we bring to the Operator.&lt;/p&gt;
&lt;h3 id="explore-percona-operator-for-mysql"&gt;Explore Percona Operator for MySQL:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mysql/ps/ReleaseNotes/Kubernetes-Operator-for-PS-RN1.0.0.html" target="_blank" rel="noopener noreferrer"&gt;Docs Percona Operator for MySQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mysql-operator" target="_blank" rel="noopener noreferrer"&gt;GitHub: Try it, test it, open issues, or contribute&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/posts/percona_the-percona-cloud-native-team-is-happy-activity-7396585512536473600-bFZR/?utm_source=share&amp;utm_medium=member_ios&amp;rcm=ACoAAA_uTn0BQWSwnqQ-mUMcVZ7icaVGYa4mlVs" target="_blank" rel="noopener noreferrer"&gt;Announcement Percona Blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>MySQL</category>
      <category>Opensource</category>
      <category>Cloud</category>
      <category>Kubernetes</category>
      <category>Operators</category>
      <media:thumbnail url="https://percona.community/blog/2025/11/init_hu_93b48671fe9b894f.jpg"/>
      <media:content url="https://percona.community/blog/2025/11/init_hu_9b71a12a7bfbff67.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL Memory Usage: A Guide to Optimization</title>
      <link>https://percona.community/blog/2025/11/11/mysql-memory-usage-a-guide-to-optimization/</link>
      <guid>https://percona.community/blog/2025/11/11/mysql-memory-usage-a-guide-to-optimization/</guid>
      <pubDate>Tue, 11 Nov 2025 00:00:00 UTC</pubDate>
      <description>Struggling with MySQL memory spikes? Knowing how and where memory is allocated can make all the difference in maintaining a fast, reliable database. From global buffers to session-specific allocations, understanding the details of MySQL’s memory management can help you optimize performance and avoid slowdowns. Let’s explore the core elements of MySQL memory usage with best practices for trimming excess in demanding environments.</description>
      <content:encoded>&lt;p&gt;Struggling with MySQL memory spikes? Knowing how and where memory is allocated can make all the difference in maintaining a fast, reliable database. From global buffers to session-specific allocations, understanding the details of MySQL’s memory management can help you optimize performance and avoid slowdowns. Let’s explore the core elements of MySQL memory usage with best practices for trimming excess in demanding environments.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/11/mysql_memory_usage_graph_hu_182833534dd8f7b.png 480w, https://percona.community/blog/2025/11/mysql_memory_usage_graph_hu_565e9bc65d1675a2.png 768w, https://percona.community/blog/2025/11/mysql_memory_usage_graph_hu_d497e9659ccf47c0.png 1400w"
src="https://percona.community/blog/2025/11/mysql_memory_usage_graph.png" alt="Releem Dashboard - RAM usage" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="how-mysql-uses-memory"&gt;How MySQL Uses Memory&lt;/h2&gt;
&lt;p&gt;MySQL dynamically manages memory across several areas to process queries, handle connections, and optimize performance. The two primary areas of memory usage are:&lt;/p&gt;
&lt;h3 id="global-buffers"&gt;Global Buffers&lt;/h3&gt;
&lt;p&gt;These are shared by the entire MySQL server and include components like the InnoDB buffer pool, key buffer, and query cache. The InnoDB buffer pool is particularly memory-intensive, especially in data-heavy applications, as it stores frequently accessed data and indexes to speed up queries.&lt;/p&gt;
&lt;h3 id="connection-per-thread-buffers"&gt;Connection (per thread) Buffers&lt;/h3&gt;
&lt;p&gt;When a client connects, MySQL allocates memory specifically for that session. This includes sort buffers, join buffers, and temporary table memory. The more concurrent connections you have, the more memory is consumed. Session buffers are critical to monitor in high-traffic environments.&lt;/p&gt;
&lt;h2 id="why-mysql-memory-usage-might-surge"&gt;Why MySQL Memory Usage Might Surge&lt;/h2&gt;
&lt;p&gt;Memory spikes in MySQL often result from specific scenarios or misconfigurations. Here are a few examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;High Traffic with Large Connection Buffers&lt;/strong&gt;: A surge in concurrent connections can quickly exhaust memory if sort or join buffers are set too large.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Complex Queries&lt;/strong&gt;: Queries with large joins, subqueries, or extensive temporary table usage can temporarily allocate significant memory, especially when poorly optimized.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Oversized InnoDB Buffer Pool&lt;/strong&gt;: Setting the &lt;a href="https://releem.com/docs/mysql-performance-tuning/innodb_buffer_pool_size" target="_blank" rel="noopener noreferrer"&gt;InnoDB buffer pool size&lt;/a&gt; too large for the server’s available memory can trigger swapping, severely degrading database and server performance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Large Temporary Tables&lt;/strong&gt;: When temporary tables exceed the in-memory limit (&lt;a href="https://releem.com/docs/mysql-performance-tuning/tmp_table_size" target="_blank" rel="noopener noreferrer"&gt;tmp_table_size&lt;/a&gt;), they are written to disk, consuming additional resources and slowing down operations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Inefficient Indexing&lt;/strong&gt;: A lack of proper indexes forces MySQL to perform full table scans, increasing memory and CPU usage for even moderately complex queries.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="best-practices-for-controlling-mysql-memory-usage"&gt;Best Practices for Controlling MySQL Memory Usage&lt;/h2&gt;
&lt;p&gt;When you notice MySQL using more memory than expected, consider the following strategies:&lt;/p&gt;
&lt;h3 id="1-set-limits-on-global-buffers"&gt;1. Set Limits on Global Buffers&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Configure &lt;a href="https://releem.com/docs/mysql-performance-tuning/innodb_buffer_pool_size" target="_blank" rel="noopener noreferrer"&gt;innodb_buffer_pool_size&lt;/a&gt; to 60-70% of available memory for InnoDB-heavy workloads. For smaller workloads, scale it down to avoid overcommitting memory.&lt;/li&gt;
&lt;li&gt;Keep &lt;a href="https://releem.com/docs/mysql-performance-tuning/innodb_log_buffer_size" target="_blank" rel="noopener noreferrer"&gt;innodb_log_buffer_size&lt;/a&gt; at a practical level (e.g., 16MB) unless write-heavy workloads demand more.&lt;/li&gt;
&lt;li&gt;Adjust &lt;a href="https://releem.com/docs/mysql-performance-tuning/key_buffer_size" target="_blank" rel="noopener noreferrer"&gt;key_buffer_size&lt;/a&gt; for MyISAM tables, ensuring it remains proportionate to table usage to avoid unnecessary memory allocation.&lt;/li&gt;
&lt;/ul&gt;
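&lt;p&gt;Put together, the global-buffer guidance above might look like the following &lt;code&gt;my.cnf&lt;/code&gt; fragment. The sizes are illustrative examples for a dedicated 8&amp;nbsp;GB InnoDB host, not recommendations to copy verbatim:&lt;/p&gt;

```ini
# Illustrative my.cnf fragment -- sizes are examples, not recommendations.
[mysqld]
# ~60-70% of RAM on a dedicated InnoDB server (example: 8 GB host)
innodb_buffer_pool_size = 5G
# Modest redo log buffer unless the workload is very write-heavy
innodb_log_buffer_size  = 16M
# Small key buffer when MyISAM usage is minimal
key_buffer_size         = 32M
```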
&lt;h3 id="2-adjust-connection-buffer-sizes"&gt;2. Adjust Connection Buffer Sizes&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Reduce &lt;a href="https://releem.com/docs/mysql-performance-tuning/sort_buffer_size" target="_blank" rel="noopener noreferrer"&gt;sort_buffer_size&lt;/a&gt; and &lt;a href="https://releem.com/docs/mysql-performance-tuning/join_buffer_size" target="_blank" rel="noopener noreferrer"&gt;join_buffer_size&lt;/a&gt; to balance memory usage with query performance, especially in environments with high concurrency.&lt;/li&gt;
&lt;li&gt;Optimize &lt;a href="https://releem.com/docs/mysql-performance-tuning/tmp_table_size" target="_blank" rel="noopener noreferrer"&gt;tmp_table_size&lt;/a&gt; and &lt;a href="https://releem.com/docs/mysql-performance-tuning/max_heap_table_size" target="_blank" rel="noopener noreferrer"&gt;max_heap_table_size&lt;/a&gt; to control in-memory temporary table allocation and avoid excessive disk usage.&lt;/li&gt;
&lt;/ul&gt;
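&lt;p&gt;A corresponding sketch for the per-session settings, again with invented values: each of these buffers is allocated per connection (and in some cases per sort or join), so small defaults matter under high concurrency:&lt;/p&gt;

```ini
# Illustrative per-session settings -- multiplied by concurrent connections.
[mysqld]
sort_buffer_size    = 256K
join_buffer_size    = 256K
tmp_table_size      = 32M
max_heap_table_size = 32M   # usually kept equal to tmp_table_size
```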
&lt;h3 id="3-fine-tune-table-caches"&gt;3. Fine-Tune Table Caches&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Adjust &lt;a href="https://releem.com/docs/mysql-performance-tuning/table_open_cache" target="_blank" rel="noopener noreferrer"&gt;table_open_cache&lt;/a&gt; to avoid bottlenecks while considering OS file descriptor limits.&lt;/li&gt;
&lt;li&gt;Configure &lt;a href="https://releem.com/docs/mysql-performance-tuning/table_definition_cache" target="_blank" rel="noopener noreferrer"&gt;table_definition_cache&lt;/a&gt; to manage table metadata efficiently, especially in environments with many tables or foreign key relationships.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="4-control-thread-cache-and-connection-limits"&gt;4. Control Thread Cache and Connection Limits&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Use &lt;a href="https://releem.com/docs/mysql-performance-tuning/thread_cache_size" target="_blank" rel="noopener noreferrer"&gt;thread_cache_size&lt;/a&gt; to reuse threads effectively and reduce overhead from frequent thread creation.&lt;/li&gt;
&lt;li&gt;Adjust &lt;a href="https://releem.com/docs/mysql-performance-tuning/thread_stack" target="_blank" rel="noopener noreferrer"&gt;thread_stack&lt;/a&gt; and &lt;strong&gt;net_buffer_length&lt;/strong&gt; to suit your workload while keeping memory usage scalable.&lt;/li&gt;
&lt;li&gt;Limit &lt;a href="https://releem.com/docs/mysql-performance-tuning/max_connections" target="_blank" rel="noopener noreferrer"&gt;max_connections&lt;/a&gt; to a level appropriate for your workload, preventing excessive session buffers from overwhelming server memory.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="5-track-temporary-table-usage"&gt;5. Track Temporary Table Usage&lt;/h3&gt;
&lt;p&gt;Monitor temporary table usage and reduce memory pressure by optimizing queries that rely on GROUP BY, ORDER BY, or UNION.&lt;/p&gt;
&lt;h3 id="6-use-mysql-memory-calculator"&gt;6. Use MySQL Memory Calculator&lt;/h3&gt;
&lt;p&gt;Incorporate tools like the &lt;a href="https://releem.com/tools/mysql-memory-calculator" target="_blank" rel="noopener noreferrer"&gt;MySQL Memory Calculator by Releem&lt;/a&gt; to estimate memory usage. Input your MySQL configuration values, and the calculator will provide real-time insights into maximum memory usage. This prevents overcommitting your server’s memory and helps allocate resources effectively.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/11/mysql_memory_usage_calc_hu_ab941316c0d444f0.png 480w, https://percona.community/blog/2025/11/mysql_memory_usage_calc_hu_b4753a1fcc563d24.png 768w, https://percona.community/blog/2025/11/mysql_memory_usage_calc_hu_49c89ec803b47b54.png 1400w"
src="https://percona.community/blog/2025/11/mysql_memory_usage_calc.png" alt="MySQL Memory Calculator" /&gt;&lt;/figure&gt;&lt;/p&gt;
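&lt;p&gt;The calculation such tools perform is, at its core, the familiar worst-case formula: global buffers plus &lt;code&gt;max_connections&lt;/code&gt; times the per-session buffers. Here is a minimal shell sketch of that arithmetic; the variable names mirror MySQL settings, but every number is invented, and real per-session usage typically stays far below this ceiling:&lt;/p&gt;

```shell
#!/bin/sh
# Back-of-the-envelope worst case: global buffers plus
# max_connections * per-session buffers. All values in MB and
# purely illustrative -- substitute your own configuration.
innodb_buffer_pool_size=4096
innodb_log_buffer_size=16
key_buffer_size=32
max_connections=200
sort_buffer_size=2       # allocated per session
join_buffer_size=2       # allocated per session (per join, worst case)
read_buffer_size=1       # allocated per session
read_rnd_buffer_size=1   # allocated per session

global=$((innodb_buffer_pool_size + innodb_log_buffer_size + key_buffer_size))
per_session=$((sort_buffer_size + join_buffer_size + read_buffer_size + read_rnd_buffer_size))
total=$((global + max_connections * per_session))
echo "Worst-case estimate: ${total} MB (${global} MB global + ${max_connections} x ${per_session} MB per session)"
```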
&lt;h3 id="7-monitor-query-performance"&gt;7. Monitor Query Performance&lt;/h3&gt;
&lt;p&gt;High-memory-consuming queries, such as those with large joins or sorts, or queries without indexes, can drive up memory usage. Use &lt;a href="https://releem.com/query-analytics" target="_blank" rel="noopener noreferrer"&gt;Releem’s Query Analytics and Optimization feature&lt;/a&gt; to identify inefficient queries and gain insights on further tuning opportunities.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/11/mysql_memory_usage_query_analytics_hu_2e76ea22e4cb632d.png 480w, https://percona.community/blog/2025/11/mysql_memory_usage_query_analytics_hu_7f5e70dfeff93692.png 768w, https://percona.community/blog/2025/11/mysql_memory_usage_query_analytics_hu_a17d852be60cc80.png 1400w"
src="https://percona.community/blog/2025/11/mysql_memory_usage_query_analytics.png" alt="Releem Dashboard - Query Analytics" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="simplifying-mysql-memory-tuning-with-releem"&gt;Simplifying MySQL Memory Tuning with Releem&lt;/h2&gt;
&lt;p&gt;Releem takes the guesswork out of MySQL optimization by automatically analyzing your setup and suggesting configuration changes that align with your memory limits and performance needs. Whether you’re dealing with complex workloads or simply don’t have time for manual adjustments, Releem makes it easier to keep MySQL running smoothly.&lt;/p&gt;</content:encoded>
      <author>Roman Agabekov</author>
      <category>MySQL</category>
      <category>MariaDB</category>
      <category>Percona</category>
      <category>DBA Tools</category>
      <media:thumbnail url="https://percona.community/blog/2025/11/mysql_memory_usage_badge_hu_cc1a4044f70f1723.jpg"/>
      <media:content url="https://percona.community/blog/2025/11/mysql_memory_usage_badge_hu_a3243c5f91ca16a8.jpg" medium="image"/>
    </item>
    <item>
      <title>A thread through my 2025 Postgres events</title>
      <link>https://percona.community/blog/2025/11/10/thread-through-2025-pgconfs/</link>
      <guid>https://percona.community/blog/2025/11/10/thread-through-2025-pgconfs/</guid>
      <pubDate>Mon, 10 Nov 2025 07:00:00 UTC</pubDate>
      <description>I recently got back from PostgreSQL Conference Europe in Riga, marking the end of my conference activities for 2025. The speakers were great. The audience, for the Extensions Showcase on Community Day on Tuesday and my Kubernetes from the database out talk, were great. The event team was great. The singing at karaoke was terrible, but it’s supposed to be.</description>
      <content:encoded>&lt;p&gt;I recently got back from PostgreSQL Conference Europe in Riga, marking the end of my conference activities for 2025. The speakers were great. The audience, for the Extensions Showcase on Community Day on Tuesday and my Kubernetes from the database out talk, were great. The event team was great. The singing at karaoke was terrible, but it’s supposed to be.&lt;/p&gt;
&lt;p&gt;After attending a good few events this year, starting with CERN PGDay in mid-January, I wanted to write something about more than just the most recent event. I see a common thread running through presentations and sessions at a number of events over the year: scale-out Postgres and, in particular, its use in non-profit scientific environments.&lt;/p&gt;
&lt;h3 id="the-beginning-and-end-users"&gt;The (beginning and) end users&lt;/h3&gt;
&lt;p&gt;Far fewer data processing challenges require pooling the resources of many physical servers these days, with servers getting bigger and storage faster. Scientific data analysis and managing large, complex scientific facilities still do. I saw three presentations on this: Rafal Kulaga, Antonin Kveton and Martin Zemko’s talk on &lt;a href="https://indico.cern.ch/event/1471762/contributions/6280212/" target="_blank" rel="noopener noreferrer"&gt;managing CERN’s SCADA data&lt;/a&gt;; Daniel Krefl and Krzysztof Nienartowicz at CERN on &lt;a href="https://indico.cern.ch/event/1471762/contributions/6280216/" target="_blank" rel="noopener noreferrer"&gt;how Sednai queries variable star data&lt;/a&gt;; and Joaquim Oliveira in Riga on &lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/session/7138-from-stars-to-storage-engines-migrating-big-science-workloads-beyond-greenplum/" target="_blank" rel="noopener noreferrer"&gt;managing the European Space Agency’s (ESA’s) survey mission data&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I admit a fondness for ESA’s GAIA catalog dataset. Since I was lucky enough to do a proof-of-concept project joining it with other catalog data, it has provided significant intellectual interest. Don’t get me started on the possible ways to optimise computationally expensive inequality joins on horribly skewed data, unless you really care about the problem. My interest in a dataset discussed in two of these talks is not why the thread connecting them is worth commenting on. All three presentations had a lot of content on selecting or developing database technologies for the work they were doing. That’s worth discussing a bit further.&lt;/p&gt;
&lt;h3 id="getting-the-details-right"&gt;Getting the details right&lt;/h3&gt;
&lt;p&gt;The thread of sharded, scale-out, or Massively Parallel Processing (MPP) Postgres connects end-user stories at my first event of the year and my last, along with stories of building this software at events in between. At PGConf.dev in Montreal, David Wein gave a very condensed explanation of how AWS’s Aurora Limitless handles distributed snapshot isolation (&lt;a href="https://www.youtube.com/watch?v=UrRkHSxP2xE&amp;t=378s" target="_blank" rel="noopener noreferrer"&gt;watch the lightning talk on YouTube&lt;/a&gt;); there was also an unconference session on handling the issue in core Postgres the next day. For an in-depth explanation of what the distributed snapshot problem is and how it may be addressed, see &lt;a href="https://www.postgresql.eu/events/pgconfeu2024/schedule/session/5710-high-concurrency-distributed-snapshots/" target="_blank" rel="noopener noreferrer"&gt;Ants Aasma’s talk from PGConf.EU 2024&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The organisations with the data are looking for open source software solutions and bumping into issues around open core licensing, project contribution breadth, project activity levels, and project governance. The Postgres developer community is working on the knottiest of the problems in this space, trying to get it absolutely right. In the meantime, various forks and extensions are delivering useful functionality for the owners of these big, complex datasets.&lt;/p&gt;
&lt;h3 id="useful-but-could-do-better"&gt;Useful, but could do better&lt;/h3&gt;
&lt;p&gt;If this were working out for everyone, there wouldn’t be a story to tell. Sednai are building Postgres-XZ, which builds on TBase, which in turn built on Postgres-XL. The ESAC Science Data Centre (ESDC) is facing a decision between two single-vendor projects, where one vendor doesn’t provide support for on-premises deployments. CERN procurement sought written assurances over license terms for TimescaleDB, since the CERN facilities organisation may be viewed as a service provider to their hosted scientific projects.&lt;/p&gt;
&lt;p&gt;This pattern of licenses built specifically to avoid “AWS stealing our innovation/lunch/…” (and it is always AWS cast as the bogeyman in these stories) is particularly unfortunate here, because the fear just isn’t borne out for Postgres. AWS and Azure employ big teams of community contributors to work on open source Postgres. The progress on statistics management, asynchronous IO, and vacuum in Postgres 18 is, among other improvements, thanks to these teams’ efforts.&lt;/p&gt;
&lt;p&gt;No matter how positive the involvement of the hyperscalers may be for Postgres, there are organisations who will prefer to run their own databases. On-premises hosting is a clear choice for organisations with big facilities capabilities, capital-centric budgeting, extreme requirements, and predictable, always-on workloads. Many of these organisations are publicly funded scientific projects. It would be great if there were broad-based open source solutions to meet their data management needs.&lt;/p&gt;
&lt;h3 id="doing-better-together"&gt;Doing better, together&lt;/h3&gt;
&lt;p&gt;At PGConf in Riga the Percona team took a few, early steps towards building a joint effort to deliver the components of such a solution. I hope that the big, open managers of structured scientific data (or their subcontractors, depending on their engagement model) and a few vendors can come together to build event data compression, columnar storage, and all the other bits which can be implemented as extensions.&lt;/p&gt;
&lt;p&gt;The current Postgres extensions and forks for scale out systems were built on older versions of Postgres, so they had to build features which now exist in core Postgres. Their implementation of partitioning, for instance, differs subtly from the capabilities now available in modern Postgres. As feature-specific extensions take over capabilities which are currently intertwined with sharding (like compression in Timescale or columnar storage in Citus), users will be less locked in to vertical stacks of features, some useful to them and some not. Simple sharding can then become a proxy (like pgDog), an automation of DDL on a gateway server, or even a core Postgres feature.&lt;/p&gt;
&lt;p&gt;Which leaves those special cases where moving data between shards during execution is key to performance. This matters less with ever bigger servers, improving Postgres parallelism, and tools like DuckDB, but when it matters, it still really matters. Here the sons of the ‘plum (CloudberryDB and WarehousePG, forked from Greenplum when it went closed source) work their magic (hat tip to Jimmy Angelakos for the “the ‘plum” contraction). Delivering that particular capability will always mean a big, complex code base. If the patches carried to make it happen shrink as Postgres and its extensions fill the gap, we’ll have a more sustainable route to all good database things being openly available.&lt;/p&gt;</content:encoded>
      <author>Alastair Turner</author>
      <category>PostgreSQL</category>
      <category>Opensource</category>
      <category>pg_alastair</category>
      <category>Community</category>
      <media:thumbnail url="https://percona.community/blog/2025/11/cover-map-blue_hu_209117837932fc65.jpg"/>
      <media:content url="https://percona.community/blog/2025/11/cover-map-blue_hu_bdfd6ce3de2b5882.jpg" medium="image"/>
    </item>
    <item>
      <title>Encryption support in PMM Dump</title>
      <link>https://percona.community/blog/2025/10/30/encryption-support-in-pmm-dump/</link>
      <guid>https://percona.community/blog/2025/10/30/encryption-support-in-pmm-dump/</guid>
      <pubDate>Thu, 30 Oct 2025 11:00:00 UTC</pubDate>
      <description>The pmm-dump client utility performs a logical backup of the performance metrics collected by the PMM Server and imports them into a different PMM Server instance. PMM Dump allows you to share monitoring data collected by your PMM server with the Percona Support team securely.</description>
      <content:encoded>&lt;p&gt;The &lt;code&gt;pmm-dump&lt;/code&gt; client utility performs a logical backup of the performance metrics collected by the PMM Server and imports them into a different PMM Server instance. PMM Dump allows you to share monitoring data collected by your PMM server with the Percona Support team securely.&lt;/p&gt;
&lt;p&gt;Until now, dumps created by the tool were not encrypted. It was possible to encrypt them after they were created, but this required additional actions from the user.&lt;/p&gt;
&lt;p&gt;Starting with PMM Dump version 0.8.0-ga, released on October 29, 2025, dumps are encrypted by default.&lt;/p&gt;
&lt;h2 id="key-points"&gt;Key points&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Dump files are encrypted by default with AES-256-based encryption.&lt;/li&gt;
&lt;li&gt;An auto-generated password is produced for each encrypted dump; it is printed at the end of the export operation or can be written to a file with &lt;code&gt;--pass-filepath&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You can provide a custom password with &lt;code&gt;--pass&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Disable encryption with &lt;code&gt;--no-encryption&lt;/code&gt; only when you understand the risks.&lt;/li&gt;
&lt;li&gt;By default, for encrypted dumps, export logging to STDOUT is suppressed; use &lt;code&gt;--no-just-key&lt;/code&gt; to override.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="why-this-matters"&gt;Why this matters&lt;/h2&gt;
&lt;p&gt;Encrypting PMM dumps prevents accidental exposure of monitoring and query data that may contain sensitive information (query text, hostnames, metrics). It brings PMM Dump in line with secure data-handling best practices and simplifies safe sharing with Percona Support.&lt;/p&gt;
&lt;h2 id="quick-examples"&gt;Quick examples&lt;/h2&gt;
&lt;p&gt;Export (encryption enabled by default):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-dump export --pmm-url='https://admin:admin@127.0.0.1' --allow-insecure-certs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Password: ****************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ ls pmm-dump-&lt;TIMESTAMP&gt;.tar.gz.enc&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Provide a custom password:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-dump export --pmm-url='https://admin:admin@127.0.0.1' --pass='My$trongP@ss'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Save auto-generated password to file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-dump export --pmm-url='https://admin:admin@127.0.0.1' --pass-filepath=/tmp/pmm-dump.pass&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Disable encryption (not recommended):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-dump export --pmm-url='https://admin:admin@127.0.0.1' --no-encryption&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Import an encrypted dump:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-dump import --pmm-url='https://admin:admin@127.0.0.1' --allow-insecure-certs \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--dump-path=pmm-dump-1758017090.tar.gz.enc --pass='My$trongP@ss'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Decrypt an encrypted dump (if needed):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ openssl enc -d -aes-256-ctr -pbkdf2 -in dump.tar.gz.enc -out dump.tar.gz&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="recommendations"&gt;Recommendations&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Prefer leaving encryption enabled.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;--pass-filepath&lt;/code&gt; to store passwords securely rather than relying on terminal output.&lt;/li&gt;
&lt;li&gt;Transfer encrypted archives over secure channels (SCP/SFTP) and share passwords via secure out-of-band channels.&lt;/li&gt;
&lt;/ul&gt;
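&lt;p&gt;Taken together, these recommendations suggest a workflow along the following lines. This is only a sketch: the transfer host, destination path, and password file location are illustrative.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-dump export --pmm-url='https://admin:admin@127.0.0.1' --pass-filepath=/tmp/pmm-dump.pass
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ scp pmm-dump-&lt;TIMESTAMP&gt;.tar.gz.enc user@transfer-host:/incoming/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ # share the contents of /tmp/pmm-dump.pass via a separate, secure channel&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once the recipient confirms a successful import, delete the local password file.&lt;/p&gt;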
&lt;h2 id="availability"&gt;Availability&lt;/h2&gt;
&lt;p&gt;Encryption support is included starting with the PMM Dump 0.8.0-ga release. Check your PMM Dump version (&lt;code&gt;pmm-dump version&lt;/code&gt;) and the docs for exact version details.&lt;/p&gt;
&lt;h2 id="additional-information"&gt;Additional information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://percona.com/get/pmm-dump" target="_blank" rel="noopener noreferrer"&gt;Latest version for x86_64 platforms&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Percona-Lab/percona-on-arm/releases/tag/v0.12" target="_blank" rel="noopener noreferrer"&gt;ARM binaries&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/pmm-dump-documentation/" target="_blank" rel="noopener noreferrer"&gt;PMM Dump Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/pmm-dump" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Sveta Smirnova</author>
      <category>PMM Dump</category>
      <category>PMM</category>
      <category>monitoring</category>
      <media:thumbnail url="https://percona.community/blog/2025/10/Sveta-PMM-Dump_hu_4169866126775b73.jpg"/>
      <media:content url="https://percona.community/blog/2025/10/Sveta-PMM-Dump_hu_b118ee6552823d62.jpg" medium="image"/>
    </item>
    <item>
      <title>Audit Log Filters Part II</title>
      <link>https://percona.community/blog/2025/10/08/audit-log-filters-part-ii/</link>
      <guid>https://percona.community/blog/2025/10/08/audit-log-filters-part-ii/</guid>
      <pubDate>Wed, 08 Oct 2025 00:00:00 UTC</pubDate>
      <description>In my first post on the MySQL 8.4 Audit Log Filter component, I covered how to install the component and configure a basic filter that captures all events. The Audit Log Filter framework offers a highly granular and configurable auditing mechanism, enabling administrators to log specific events based on criteria such as user, host, or event type. This selective approach enhances observability, supports compliance initiatives, and minimizes unnecessary logging overhead.</description>
      <content:encoded>&lt;p&gt;In my first post on the &lt;a href="https://percona.community/blog/2025/09/18/audit-log-filter-component/" target="_blank" rel="noopener noreferrer"&gt;MySQL 8.4 Audit Log Filter component&lt;/a&gt;, I covered how to install the component and configure a basic filter that captures all events. The Audit Log Filter framework offers a highly granular and configurable auditing mechanism, enabling administrators to log specific events based on criteria such as user, host, or event type. This selective approach enhances observability, supports compliance initiatives, and minimizes unnecessary logging overhead.&lt;/p&gt;
&lt;p&gt;In this follow-up, we’ll take a deeper technical look at defining and optimizing audit log filters to capture only the most relevant database activities—delivering actionable audit data while significantly reducing noise and log volume.&lt;/p&gt;
&lt;h3 id="example-1"&gt;Example 1&lt;/h3&gt;
&lt;p&gt;Audit all events:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_filter('log_all_events', '{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "filter": {"log": true}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once this filter is created and assigned to a user (for example, with &lt;code&gt;SELECT audit_log_filter_set_user('%', 'log_all_events');&lt;/code&gt;), every database event triggered by that user—or by all users if &lt;code&gt;%&lt;/code&gt; is used—will be written to the audit log file.&lt;/p&gt;
&lt;p&gt;In short, this is the most permissive audit configuration possible. It’s typically used:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;As a baseline test to verify that the audit log component is working.&lt;/li&gt;
&lt;li&gt;In diagnostic or forensic scenarios where full visibility is required.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For production environments, however, it’s recommended to create more selective filters (e.g., by event class, command type, or user) to reduce log volume and improve performance, as we will see in more detail in the upcoming examples.&lt;/p&gt;
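&lt;p&gt;Every filter in these examples is activated the same way: by assigning it to one or more accounts with the component’s management functions. A minimal sketch of the assignment lifecycle, using the filter from Example 1:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_user('%', 'log_all_events');   -- assign to all accounts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_remove_user('%');                  -- remove the assignment
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_remove_filter('log_all_events');   -- drop the filter definition&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note that filter assignments are evaluated when a session starts, so changes generally apply to new connections rather than to sessions that are already open.&lt;/p&gt;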
&lt;h3 id="example-2"&gt;Example 2&lt;/h3&gt;
&lt;p&gt;Log table access:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;select audit_log_filter_set_filter('log_table_access', '{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "filter": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "class": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "table_access" },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "connection" },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "general" }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="included-event-classes"&gt;Included Event Classes&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;table_access
Logs events when MySQL reads from or writes to tables.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Useful for tracking which users or applications are accessing specific tables.&lt;/li&gt;
&lt;li&gt;Helps in auditing data access patterns and detecting unauthorized data reads/writes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;connection
Logs connection-related events such as user logins, logouts, and failed authentication attempts.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Important for tracking session activity and security auditing.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;general
Logs general query execution events—like statements sent to the server (e.g., SELECT, INSERT, UPDATE, etc.).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Useful for general SQL activity auditing.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id="what-it-does-functionally"&gt;What It Does Functionally&lt;/h4&gt;
&lt;p&gt;After this filter is defined and assigned to a user or host (for example, with &lt;code&gt;SELECT audit_log_filter_set_user('%', 'log_table_access');&lt;/code&gt;), MySQL will only log events that fall into one of these three classes.&lt;/p&gt;
&lt;p&gt;All other event types—like administrative commands, stored program executions, or system-level actions—will be excluded from the audit log.&lt;/p&gt;
&lt;h4 id="why-use-this-filter"&gt;Why use this filter&lt;/h4&gt;
&lt;p&gt;This configuration strikes a balance between completeness and efficiency:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Captures key operational and access-related activity.&lt;/li&gt;
&lt;li&gt;Avoids excessive log volume from irrelevant events.&lt;/li&gt;
&lt;li&gt;Suitable for data access auditing, security monitoring, and compliance logging.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In short, log_table_access provides targeted visibility into table usage, connections, and general query activity—ideal for environments where tracking who accessed what data is more important than recording every internal event.&lt;/p&gt;
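&lt;p&gt;To spot-check what a filter such as log_table_access actually captures, you can read events back from inside the server, provided the audit log is written in JSON format. A sketch using the component’s read functions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_read(audit_log_read_bookmark());  -- read events from the most recent position
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_read('null');                     -- close the read sequence&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This is handy while tuning a filter: run a few test statements, read the events back, and confirm that only the classes you expect are being logged.&lt;/p&gt;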
&lt;h3 id="example-3"&gt;Example 3&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_filter('log_minimum', '{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "filter": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "class":
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [ { "name": "connection" }, { "name": "table_access", "event": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "delete"}, { "name": "insert"}, { "name": "update"} ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; } ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="included-event-classes-1"&gt;Included Event Classes&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;“class”: “connection” logs all connection-level events:
&lt;ul&gt;
&lt;li&gt;connect: when a user logs in.&lt;/li&gt;
&lt;li&gt;disconnect: when a session ends.&lt;/li&gt;
&lt;li&gt;Failed logins and other connection-related actions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Purpose: provides visibility into who connected, from where, and when.&lt;/p&gt;
&lt;ol start="2"&gt;
&lt;li&gt;“class”: “table_access” with “event”:
&lt;ul&gt;
&lt;li&gt;Limits logging to specific table access events:
&lt;ul&gt;
&lt;li&gt;“delete” when rows are deleted.&lt;/li&gt;
&lt;li&gt;“insert” when new rows are added.&lt;/li&gt;
&lt;li&gt;“update” when existing rows are modified.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Read operations (like SELECT) and metadata queries are excluded.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id="what-it-does-functionally-1"&gt;What It Does Functionally&lt;/h4&gt;
&lt;p&gt;Once assigned to a user or host (e.g. &lt;code&gt;SELECT audit_log_filter_set_user('%', 'log_minimum');&lt;/code&gt;), this filter will produce audit entries only when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A user connects or disconnects from MySQL.&lt;/li&gt;
&lt;li&gt;A user performs a DML (Data Manipulation Language) operation that changes data in a table.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All other events — such as simple SELECT queries, schema reads, or administrative commands — will be ignored.&lt;/p&gt;
&lt;h4 id="why-use-this-filter-1"&gt;Why Use This Filter&lt;/h4&gt;
&lt;p&gt;This is a minimalist, high-value audit configuration. It’s designed to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Track security-relevant activity (connections and data changes).&lt;/li&gt;
&lt;li&gt;Meet compliance requirements with low performance overhead.&lt;/li&gt;
&lt;li&gt;Prevent excessive logging and disk usage.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In short, log_minimum is an efficient auditing strategy for production environments where you only need to know:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Who accessed the database, and&lt;/li&gt;
&lt;li&gt;What data they changed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It gives you essential accountability and change tracking without the overhead of logging every read or administrative event.&lt;/p&gt;
&lt;h3 id="example-4"&gt;Example 4&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_filter('log_connections', '{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "filter": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "class": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "connection",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "event": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "connect"},
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "disconnect"}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="included-event-classes-2"&gt;Included Event Classes&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;“class”: “connection”&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This class captures events related to user sessions and authentication.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“event”: [“connect”, “disconnect”]&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;connect: logged when a client establishes a connection to the MySQL server.
Includes details like username, host, client program, IP address, and connection method.&lt;/li&gt;
&lt;li&gt;disconnect: logged when that client session ends or times out.
Useful for tracking session duration and identifying abnormal terminations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Important Note on pre_authenticate Events&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The pre_authenticate events are not included in the examples above.&lt;/li&gt;
&lt;li&gt;These events occur before the MySQL server has received authentication information from the client—meaning no user account details are available at this stage of the connection lifecycle. Because of that, if a filter that includes pre_authenticate events is assigned to a specific user (rather than a wildcard like %) using audit_log_filter_set_user(), those events will not be filtered or logged.&lt;/li&gt;
&lt;li&gt;This behavior often leads to confusion, as users may expect pre_authenticate events to appear in user-specific logs. Several reports and support cases have been filed on this topic, but it is expected behavior due to the timing of authentication during connection initialization.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id="why-use-this-filter-2"&gt;Why Use This Filter&lt;/h4&gt;
&lt;p&gt;This filter is particularly useful when you need to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Monitor user logins and logouts without recording query activity.&lt;/li&gt;
&lt;li&gt;Audit connection patterns (e.g., who connected, from where, and when).&lt;/li&gt;
&lt;li&gt;Maintain minimal log size and low overhead.&lt;/li&gt;
&lt;li&gt;Support security investigations or session tracking without performance impact.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In short, the log_connections filter provides a focused, low-overhead auditing strategy that records only connection lifecycle events. It’s ideal for environments where you primarily need to know who connected to the database, when, and from where, without capturing every SQL statement or table access.&lt;/p&gt;
&lt;h3 id="example-5"&gt;Example 5&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_filter('log_full_table_access', '{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "filter": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "class": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "name": "connection",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "event": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "connect"},
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "name": "disconnect"}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "name": "query",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "event": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "name": "start",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "log": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "or": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "select"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "insert"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "update"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "delete"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "truncate"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "create_table"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "alter_table"} },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "field": { "name": "sql_command_id", "value": "drop_table"} }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This statement defines a MySQL Audit Log Filter called log_full_table_access, which is designed to capture both connection activity and all table-related SQL operations — including reads, writes, and schema changes. It provides broad visibility into how users interact with tables in the database while filtering out unrelated or low-value events.&lt;/p&gt;
&lt;h4 id="included-event-classes-3"&gt;Included Event Classes&lt;/h4&gt;
&lt;p&gt;After assigning it to users or hosts (e.g.,
SELECT audit_log_filter_set_user('%', 'log_full_table_access');), MySQL will log:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Connection lifecycle events (connect, disconnect)&lt;/li&gt;
&lt;li&gt;All DML statements (SELECT, INSERT, UPDATE, DELETE, TRUNCATE)&lt;/li&gt;
&lt;li&gt;All DDL statements that create, modify, or remove tables (CREATE TABLE, ALTER TABLE, DROP TABLE)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A full list of SQL command identifiers can be obtained from:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT NAME FROM performance_schema.setup_instruments WHERE NAME LIKE 'statement/sql/%' ORDER BY NAME;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| NAME |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_db |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_event |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_function |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_instance |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_procedure |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_resource_group |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_server |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_table |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_tablespace |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_user |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/alter_user_default_role |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/analyze |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/assign_to_keycache |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/begin |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/binlog |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/call_procedure |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/change_db |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/change_repl_filter |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/change_replication_source |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/check |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/checksum |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/commit |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| statement/sql/create_compression_dictionary |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[...]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;168 rows in set (0.07 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Everything else — administrative commands, stored procedure calls, replication control, etc. — will be excluded.&lt;/p&gt;
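&lt;p&gt;A quick way to see which event classes the filter is actually capturing is to summarize the "class" field across the log. This is a sketch that assumes the JSON log format, the jq utility, and that the audit log is written to /var/lib/mysql/audit.log; adjust both to your setup:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Count audit entries per event class (e.g. connection, table_access)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;jq -r '.[].class' /var/lib/mysql/audit.log | sort | uniq -c&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;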
&lt;h4 id="why-use-this-filter-3"&gt;Why Use This Filter&lt;/h4&gt;
&lt;p&gt;This filter offers a comprehensive audit view of how users interact with data and schema structures — perfect for compliance, forensic analysis, or access accountability.
It ensures that all table reads, writes, and structure changes are tracked without overwhelming the log with irrelevant internal events.&lt;/p&gt;
&lt;p&gt;In short, log_full_table_access provides a broad but targeted audit scope:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tracks connections for user session context.&lt;/li&gt;
&lt;li&gt;Logs all table-level operations, both data and schema-related.&lt;/li&gt;
&lt;li&gt;Delivers complete visibility into how data is accessed and changed, making it ideal for security auditing and regulatory compliance scenarios.&lt;/li&gt;
&lt;/ul&gt;
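&lt;p&gt;If the filter ever needs to be retired, the component provides matching removal functions. A minimal housekeeping sketch, assuming the filter was assigned with the '%' wildcard as above:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Detach the filter from the accounts it was assigned to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_remove_user('%');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Drop the filter definition itself
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_remove_filter('log_full_table_access');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;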
&lt;h3 id="final-summary"&gt;Final Summary&lt;/h3&gt;
&lt;p&gt;The MySQL 8.4 Audit Log Filter component provides a powerful and flexible framework for controlling how database activity is captured and logged. By allowing administrators to define granular filters based on event class, event type, user, or host, it transforms auditing from an all-or-nothing process into a precisely tuned observability tool.&lt;/p&gt;
&lt;p&gt;In this post, we explored a range of filter examples—from the most permissive (log_all_events) to more focused configurations like log_minimum, log_connections, and log_full_table_access. Each serves a different operational or compliance purpose:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;log_all_events – Captures every event for baseline validation or forensic debugging.&lt;/li&gt;
&lt;li&gt;log_table_access – Balances visibility and performance by logging table, connection, and general query activity.&lt;/li&gt;
&lt;li&gt;log_minimum – Targets critical actions such as connections and data modifications, providing essential accountability with minimal overhead.&lt;/li&gt;
&lt;li&gt;log_connections – Focuses solely on login and logout events, ideal for lightweight session auditing.&lt;/li&gt;
&lt;li&gt;log_full_table_access – Delivers comprehensive insight into all table-level DML and DDL operations along with connection tracking, suitable for compliance and change auditing.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By tailoring filters to specific operational needs, administrators can significantly reduce log volume, improve performance, and focus on high-value security and compliance events. The result is a leaner, more informative audit log that provides actionable insight into how users and applications interact with your MySQL environment—without the burden of unnecessary data.&lt;/p&gt;
&lt;h3 id="reference"&gt;Reference&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/8.4/audit-log-filter-overview.html" target="_blank" rel="noopener noreferrer"&gt;Audit Log Filter Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/8.4/write-filter-definitions.html" target="_blank" rel="noopener noreferrer"&gt;Write audit_log_filter definitons&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/8.4/audit-log-filter-variables.html#audit-log-filter-functions" target="_blank" rel="noopener noreferrer"&gt;Audit log filter functions, options, and variables&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="special-thanks"&gt;Special Thanks&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Yura Sorokin&lt;/strong&gt; for the collaboration that made this blog post possible.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Audit Log</category>
      <category>filter</category>
      <category>component</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>PXC</category>
      <media:thumbnail url="https://percona.community/blog/2025/10/audit-log-filters_hu_a18f51ff6cb4cc34.jpg"/>
      <media:content url="https://percona.community/blog/2025/10/audit-log-filters_hu_c2edeb7c9bc7446a.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona at PostgreSQL Conference Europe 2025</title>
      <link>https://percona.community/blog/2025/10/03/percona-at-postgresql-conference-europe-2025/</link>
      <guid>https://percona.community/blog/2025/10/03/percona-at-postgresql-conference-europe-2025/</guid>
      <pubDate>Fri, 03 Oct 2025 11:00:00 UTC</pubDate>
      <description>We’re proud to announce that Percona is a Platinum Sponsor of PostgreSQL Conference Europe (PGConf.EU) 2025, taking place October 21–24, 2025 in Riga, Latvia 🇱🇻 at the Radisson Blu Latvija Conference Center.</description>
      <content:encoded>&lt;p&gt;We’re proud to announce that Percona is a Platinum Sponsor of &lt;a href="https://2025.pgconf.eu/" target="_blank" rel="noopener noreferrer"&gt;PostgreSQL Conference Europe (PGConf.EU) 2025&lt;/a&gt;, taking place October 21–24, 2025 in Riga, Latvia 🇱🇻 at the Radisson Blu Latvija Conference Center.&lt;/p&gt;
&lt;p&gt;As a Platinum Sponsor, you can find Percona in a prime location on the main exhibit floor: a chance to connect directly with our PostgreSQL experts from around the world. Visitors stopping by our booth will be able to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Enter raffles and win Percona &amp; PostgreSQL SWAG&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Watch live demos and learn from expert-led sessions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Explore open source PostgreSQL solutions built for scalability, performance, and security&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/10/intro_hu_c497148933b9f02f.png 480w, https://percona.community/blog/2025/10/intro_hu_bf6434543ed6833a.png 768w, https://percona.community/blog/2025/10/intro_hu_837f2bd377283fe2.png 1400w"
src="https://percona.community/blog/2025/10/intro.png" alt="PGConf.EU 2025" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As a leader in open source database management and services, Percona supports innovation, collaboration, and the sharing of knowledge that drives the open source ecosystem forward. By &lt;a href="https://2025.pgconf.nyc/" target="_blank" rel="noopener noreferrer"&gt;participating in PostgreSQL Conference Europe&lt;/a&gt;, Percona connects with developers, contributors, and industry leaders to discuss the latest trends, challenges, and advancements. We look forward to further showcasing Percona’s dedication to empowering organizations with robust, scalable, and secure open source database solutions.&lt;/p&gt;
&lt;h2 id="about-the-event"&gt;About the event&lt;/h2&gt;
&lt;p&gt;This year’s conference is the 15th Annual PostgreSQL Conference Europe. The conference is organised by &lt;strong&gt;PostgreSQL Europe&lt;/strong&gt;, with participation from most of the &lt;strong&gt;PostgreSQL user groups around Europe&lt;/strong&gt;, and is intended to be an important meeting and cooperation point for users both in and out of Europe.
PGConf.EU is a unique chance for European PostgreSQL users and developers to catch up, learn, build relationships, and consolidate a real network of professionals who use and work with PostgreSQL.&lt;/p&gt;
&lt;h2 id="percona-speaker-agenda"&gt;Percona Speaker Agenda&lt;/h2&gt;
&lt;p&gt;Our experts are taking the stage across multiple tracks to share insights, hands-on experience, and forward-looking innovations:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/10/all.png" alt="All Things Open 2021" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="wednesday-october-22"&gt;Wednesday, October 22&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/session/7056-what-implementing-pg_tde-taught-us-about-postgresql/" target="_blank" rel="noopener noreferrer"&gt;What implementing pg_tde taught us about PostgreSQL&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;👤 &lt;strong&gt;Jan Wieremjewicz&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;⏰ 16:05 – 16:55 | Room: Alfa | Track: Community (45 minutes)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="thursday-october-23"&gt;Thursday, October 23&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/session/7134-kubernetes-from-the-database-out/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes from the Database Out&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;👤 &lt;strong&gt;Alastair Turner&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;⏰ 10:25 – 10:55 | Room: Omega 2 | Track: DBA (25 minutes)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/session/7191-tde-as-an-extension-a-different-path-for-postgresql-encryption/" target="_blank" rel="noopener noreferrer"&gt;TDE as an Extension: A Different Path for PostgreSQL Encryption&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;👤 &lt;strong&gt;Zsolt Parragi&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;⏰ 11:25 – 12:15 | Room: Beta | Track: Sponsors&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/session/7189-why-postgresql-took-the-crown-from-mysql-and-what-lies-ahead/" target="_blank" rel="noopener noreferrer"&gt;Why PostgreSQL took the crown from MySQL and what lies ahead&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;👤 &lt;strong&gt;Peter Zaitsev&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;⏰ 17:20 – 17:35 | Room: Omega 1 | Track: Platinum Sponsor Keynotes&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="friday-october-24"&gt;Friday, October 24&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/session/7192-lessons-from-two-decades-of-hacking-the-proprietary-value-into-open-source-databases/" target="_blank" rel="noopener noreferrer"&gt;Lessons from two decades of hacking the proprietary value into open source databases&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;👤 &lt;strong&gt;Michal Nosek&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;⏰ 09:25 – 10:15 | Room: Beta | Track: Sponsors&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See the full agenda &lt;a href="https://www.postgresql.eu/events/pgconfeu2025/schedule/" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="community-leadership--kai-wagner"&gt;Community Leadership – Kai Wagner&lt;/h2&gt;
&lt;p&gt;We are also proud that &lt;a href="https://www.linkedin.com/in/kai-wagner-b1b661152/" target="_blank" rel="noopener noreferrer"&gt;Kai Wagner&lt;/a&gt;, Senior Engineering Manager for PostgreSQL at Percona, is serving on the PGConf.EU 2025 Selection Committee.
Kai is a long-time open source contributor and speaker, actively involved in projects like Ceph, openATTIC, and PostgreSQL. As a community builder and PGConf Germany organizer, he helps shape the PostgreSQL ecosystem and ensure diverse, impactful content is represented at the conference.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/10/kai-wagner.png" alt="All Things Open 2021" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Join us in Riga this October to connect, learn, and celebrate PostgreSQL with Percona and the wider open source community!&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <author>Jan Wieremjewicz</author>
      <category>sponsorship</category>
      <category>opensource</category>
      <category>Event</category>
      <media:thumbnail url="https://percona.community/blog/2025/10/intro_hu_c5ccf24ef223a13c.jpg"/>
      <media:content url="https://percona.community/blog/2025/10/intro_hu_ba91e7415034f8f0.jpg" medium="image"/>
    </item>
    <item>
      <title>Audit Log Filter Component</title>
      <link>https://percona.community/blog/2025/09/18/audit-log-filter-component/</link>
      <guid>https://percona.community/blog/2025/09/18/audit-log-filter-component/</guid>
      <pubDate>Thu, 18 Sep 2025 00:00:00 UTC</pubDate>
      <description>The audit log filter component in MySQL 8.4 provides administrators with a powerful mechanism for auditing database activity at a fine-grained level. While it offers significant flexibility—such as selectively logging events based on users, hosts, or event types—it can also be challenging to understand and configure correctly.</description>
      <content:encoded>&lt;p&gt;The audit log filter component in MySQL 8.4 provides administrators with a powerful mechanism for auditing database activity at a fine-grained level. While it offers significant flexibility—such as selectively logging events based on users, hosts, or event types—it can also be challenging to understand and configure correctly.&lt;/p&gt;
&lt;p&gt;In this article, we will examine how the audit log filter component works, walk through its core concepts, and share practical tips for configuring and managing audit filters effectively. Our goal is to help you leverage this feature to improve observability, meet compliance requirements, and reduce unnecessary logging overhead.&lt;/p&gt;
&lt;h3 id="enabling-audit-log-filter"&gt;Enabling Audit Log Filter&lt;/h3&gt;
&lt;p&gt;The examples below use Percona Server 8.4.4 or higher. First, we need to install and enable the audit log filter component by running the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -u root -p &lt; /usr/share/percona-server/mysql/share/audit_log_filter_linux_install.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify that the audit log filter component is enabled by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;select * from mysql.component;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------------+-----------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| component_id | component_group_id | component_urn |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------------+-----------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2 | 1 | file://component_audit_log_filter |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------------+-----------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Installing the component creates two new tables in the mysql system database: audit_log_filter and audit_log_user. These tables store the audit log filter definitions and the user-to-filter mappings. Together, they are referred to as the audit log filter tables.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Tables_in_mysql |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| audit_log_filter |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| audit_log_user |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------------------------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Although the configuration is persisted in these tables, they are not usually modified directly with INSERT or UPDATE statements. Instead, MySQL provides built-in functions such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;audit_log_filter_set_filter()&lt;/li&gt;
&lt;li&gt;audit_log_filter_set_user()&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These functions manage filter definitions and user assignments safely.&lt;/p&gt;
&lt;p&gt;Configure the my.cnf file to define the desired audit log output format and specify the location of the audit.log file. In the example below, the log format is set to JSON, but other formats (e.g., NEW or OLD) can also be configured depending on your requirements. The audit log file can be written to any path accessible to the MySQL server process.&lt;/p&gt;
&lt;p&gt;Example Changes:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# auditlog
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;audit_log_filter.format=JSON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;audit_log_filter.file=/var/lib/mysql/audit.log&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Restart the MySQL server to apply the changes:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo systemctl restart mysqld&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The audit log filter installation is now complete, and we can start using the component.&lt;/p&gt;
&lt;h3 id="creating-audit-log-filters"&gt;Creating Audit Log Filters&lt;/h3&gt;
&lt;p&gt;The audit log filter component in MySQL 8.4 provides fine-grained control over database auditing. Instead of logging all events indiscriminately, administrators can define audit log filters, which are rule sets that determine exactly which events should be captured and which should be excluded.&lt;/p&gt;
&lt;p&gt;This allows you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Log only the activity relevant to security, compliance, or troubleshooting.&lt;/li&gt;
&lt;li&gt;Reduce unnecessary noise and audit log volume.&lt;/li&gt;
&lt;li&gt;Apply different filters to specific users, hosts, or accounts for tailored auditing.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Because filters can be customized and assigned at the user or host level, the audit log filter component offers both flexibility and efficiency, making it a powerful mechanism for monitoring and securing database activity while minimizing overhead.&lt;/p&gt;
&lt;p&gt;Let’s create a rule that logs all events:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_filter('log_all_events', '{ "filter": {"log": true } }');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s assign the rule to all users:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_set_user('%', 'log_all_events');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT audit_log_filter_flush();&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now all users will have all their events logged.&lt;/p&gt;
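&lt;p&gt;To double-check the result, you can read the audit log filter tables created during installation. A read-only sketch (column layouts can vary slightly between versions, so SELECT * is used here):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Stored filter definitions (should include log_all_events)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT * FROM mysql.audit_log_filter;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- User-to-filter assignments (should show the '%' mapping)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT * FROM mysql.audit_log_user;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;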
&lt;p&gt;Before proceeding, ensure that the jq utility is installed on your system. Installation commands for RHEL-based and Debian-based distributions are shown below.&lt;/p&gt;
&lt;p&gt;RHEL builds&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo yum install jq&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Debian builds&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt install jq&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To validate that the audit log filter is functioning as expected, we can inspect the raw contents of the audit.log file. Since the log entries are in JSON format, using jq provides an efficient way to query and extract specific events. For example, to filter and display only connection-related events, run:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat audit.log | jq '.[]|select(.class=="connection")'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This command reads the log file, parses the JSON structure, and returns only the entries whose “class” field equals “connection”. This targeted approach makes it easier to verify filter behavior, troubleshoot issues, or monitor specific event classes without manually sifting through large volumes of log data.&lt;/p&gt;
&lt;p&gt;Example Output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "timestamp": "2025-07-24 07:18:00",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "id": 27110,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "class": "connection",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "event": "connect",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "connection_id": 415,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "account": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "user": "wayne",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "host": "localhost"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "login": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "user": "wayne",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "os": "",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "ip": "",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "proxy": ""
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "connection_data": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "connection_type": "socket",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "status": 0,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "db": ""
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "connection_attributes": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "_pid": "717914",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "_platform": "aarch64",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "_os": "Linux",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "_client_name": "libmysql",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "os_user": "wayne",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "_client_version": "8.4.5-5",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "program_name": "mysql"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
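&lt;p&gt;If jq isn’t available, the same filter can be sketched in Python with the standard &lt;code&gt;json&lt;/code&gt; module. This is a minimal illustration, assuming (as with the jq command above) that the audit log is a JSON array of event objects; the &lt;code&gt;connection_events&lt;/code&gt; helper name is my own, not part of any MySQL tooling:&lt;/p&gt;

```python
import json
import os

def connection_events(path):
    """Return audit log entries whose "class" field is "connection"."""
    with open(path) as f:
        events = json.load(f)  # assumption: the log file is a JSON array
    return [e for e in events if e.get("class") == "connection"]

# Pretty-print matching events, similar to jq's default output
if os.path.exists("audit.log"):
    for event in connection_events("audit.log"):
        print(json.dumps(event, indent=2))
```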
&lt;p&gt;I’ll cover more advanced filter configurations in a follow-up post—stay tuned for Part 2 of the Audit Log Filter Component series.&lt;/p&gt;
&lt;p&gt;In summary, the audit log filter component in MySQL 8.4 provides administrators with a flexible and fine-grained approach to database auditing. By tailoring filters to specific users, hosts, and event types, you can ensure that only the most relevant activity is logged, making it easier to meet compliance requirements while reducing overhead. With proper configuration and careful use of filters, you can transform the audit log from a noisy data dump into a precise monitoring tool that strengthens both security and observability in your MySQL environment.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Audit Log</category>
      <category>filter</category>
      <category>component</category>
      <category>MySQL</category>
      <category>Community</category>
      <category>Percona Server</category>
      <category>PXC</category>
      <media:thumbnail url="https://percona.community/blog/2025/09/audit-log-filter_hu_a21f707e924f3dd4.jpg"/>
      <media:content url="https://percona.community/blog/2025/09/audit-log-filter_hu_555048615921e7a5.jpg" medium="image"/>
    </item>
    <item>
      <title>pg_stat_monitor Needs You! Join the Feedback Phase</title>
      <link>https://percona.community/blog/2025/08/13/pg_stat_monitor-needs-you-join-the-feedback-phase/</link>
      <guid>https://percona.community/blog/2025/08/13/pg_stat_monitor-needs-you-join-the-feedback-phase/</guid>
      <pubDate>Wed, 13 Aug 2025 00:00:00 UTC</pubDate>
      <description>At Percona, we believe that great open source software is built with the Community, not just for it. As we plan the next iteration of pg_stat_monitor, our advanced PostgreSQL monitoring extension, we’re taking a closer look at the current feature set and how it aligns with real-world usage.</description>
      <content:encoded>&lt;p&gt;At Percona, we believe that great open source software is built &lt;em&gt;with&lt;/em&gt; the Community, not just &lt;em&gt;for&lt;/em&gt; it. As we plan the next iteration of &lt;a href="https://github.com/percona/pg_stat_monitor" target="_blank" rel="noopener noreferrer"&gt;pg_stat_monitor&lt;/a&gt;, our advanced PostgreSQL monitoring extension, we’re taking a closer look at the current feature set and how it aligns with real-world usage.&lt;/p&gt;
&lt;p&gt;In open source, the community isn’t just a user base; it’s the most important stakeholder. While we set the vision, your feedback is the compass that guides us. Your experiences, bug reports, and feature requests validate our direction and keep us focused on what matters most. Without your active involvement, it’s impossible to build a tool that truly solves the problems you face every day. Your input ensures pg_stat_monitor evolves in a way that is both innovative and genuinely useful.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/08/jan_feedback_wanted1.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="your-feedback-is-the-compass"&gt;Your Feedback Is the Compass&lt;/h2&gt;
&lt;p&gt;Over time, &lt;a href="https://docs.percona.com/pg-stat-monitor/" target="_blank" rel="noopener noreferrer"&gt;pg_stat_monitor&lt;/a&gt; has grown beyond &lt;a href="https://www.percona.com/blog/understand-your-postgresql-workloads-better-with-pg_stat_monitor/" target="_blank" rel="noopener noreferrer"&gt;its initial query performance monitoring scope&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;While many features have proven extremely useful (especially when used with &lt;a href="https://docs.percona.com/percona-monitoring-and-management/2/setting-up/client/postgresql.html#pg_stat_monitor" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt;), others see limited adoption, or none that we can observe at all. To ensure we’re investing in what really matters, we want to understand what you, the users and contributors, actually rely on day-to-day.&lt;/p&gt;
&lt;h2 id="thats-why-were-kicking-off-a-community-feedback-phase"&gt;That’s why we’re kicking off a Community feedback phase&lt;/h2&gt;
&lt;p&gt;We’re especially interested in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What features of pg_stat_monitor are critical for your workflows?&lt;/li&gt;
&lt;li&gt;Are you using it with PMM, via the CLI, or in another way?&lt;/li&gt;
&lt;li&gt;Are there parts of the extension that feel unnecessary or unclear?&lt;/li&gt;
&lt;li&gt;What would you love to see in the next release?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is just the beginning of a broader review and improvement effort, one that we want to run transparently and inclusively, true to our open source values.&lt;/p&gt;
&lt;p&gt;Whether you’re a developer, DBA, or platform engineer using pg_stat_monitor directly or via tools like PMM, know that your input matters.&lt;/p&gt;
&lt;p&gt;👉 Leave a comment below, or reach out on our &lt;a href="https://forums.percona.com/c/postgresql/pg-stat-monitor/69" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt; or via &lt;a href="https://github.com/percona/pg_stat_monitor/issues" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.
Let’s shape the future of pg_stat_monitor together.&lt;/p&gt;</content:encoded>
      <author>Jan Wieremjewicz</author>
      <category>PostgreSQL</category>
      <category>Opensource</category>
      <category>pg_jan</category>
      <category>pg_stat_monitor</category>
      <category>monitoring</category>
      <media:thumbnail url="https://percona.community/blog/2025/08/jan-pgsm-cover1_hu_f25f3c6bafbe318c.jpg"/>
      <media:content url="https://percona.community/blog/2025/08/jan-pgsm-cover1_hu_6eb770cc154f372a.jpg" medium="image"/>
    </item>
    <item>
      <title>GitOps Journey: Part 4 – Observability and Monitoring with Coroot in Kubernetes</title>
      <link>https://percona.community/blog/2025/07/22/gitops-journey-part-4-observability-and-monitoring-with-coroot-in-kubernetes/</link>
      <guid>https://percona.community/blog/2025/07/22/gitops-journey-part-4-observability-and-monitoring-with-coroot-in-kubernetes/</guid>
      <pubDate>Tue, 22 Jul 2025 00:01:00 UTC</pubDate>
      <description>Our PostgreSQL cluster is running, and the demo app is generating traffic — but we have no visibility into the health of the Kubernetes cluster, services, or applications.</description>
      <content:encoded>&lt;p&gt;Our PostgreSQL cluster is running, and the demo app is generating traffic — but we have no visibility into the health of the Kubernetes cluster, services, or applications.&lt;/p&gt;
&lt;p&gt;What happens when disk space runs out? What if the database is under heavy load and needs scaling? What if errors are buried in application logs? How busy are the network and storage layers? What’s the actual cost of the infrastructure?&lt;/p&gt;
&lt;p&gt;This is where &lt;a href="https://coroot.com/" target="_blank" rel="noopener noreferrer"&gt;Coroot&lt;/a&gt; comes in.&lt;/p&gt;
&lt;p&gt;Coroot is an open-source observability platform that provides dashboards for profiling, logs, service maps, and resource usage — helping you track system health and diagnose issues quickly.&lt;/p&gt;
&lt;p&gt;We’ll deploy it using &lt;strong&gt;Helm via ArgoCD&lt;/strong&gt;, continuing with our GitOps workflow.&lt;/p&gt;
&lt;p&gt;This is Part 4 in our series. Previously, we:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Set up ArgoCD and a GitHub repository for declarative manifests (&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-1-getting-started-with-argocd-and-github/"&gt;Part 1&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installed a PostgreSQL cluster using Percona Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deployed a demo application to simulate traffic and interact with the database&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;All infrastructure is defined declaratively and deployed from the GitHub repository, following GitOps practices.&lt;/p&gt;
&lt;p&gt;So far, we’ve explored cluster scaling, user management, and dynamic configuration — and now it’s time for observability.&lt;/p&gt;
&lt;p&gt;We’ll install Coroot by following the &lt;a href="https://docs.coroot.com/installation/kubernetes/" target="_blank" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; for Kubernetes.&lt;/p&gt;
&lt;p&gt;Steps ahead:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install the Coroot Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the Coroot Community Edition&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let’s get started.&lt;/p&gt;
&lt;h2 id="project-structure"&gt;Project Structure&lt;/h2&gt;
&lt;p&gt;We already have a &lt;code&gt;postgres/&lt;/code&gt; directory for PostgreSQL manifests and an &lt;code&gt;apps/&lt;/code&gt; directory for ArgoCD applications.&lt;/p&gt;
&lt;p&gt;We’ll preserve this layout and add a new &lt;code&gt;coroot/&lt;/code&gt; folder for clarity. You can use a different structure if preferred.&lt;/p&gt;
&lt;h2 id="create-manifest-for-installing-the-coroot-operator"&gt;Create Manifest for Installing the Coroot Operator&lt;/h2&gt;
&lt;p&gt;The documentation recommends installing via Helm.&lt;br&gt;
Since we use ArgoCD, we’ll create an ArgoCD Application manifest that installs the chart via Helm.&lt;/p&gt;
&lt;p&gt;Create file: &lt;code&gt;coroot/operator.yaml&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: argoproj.io/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Application
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: coroot-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; project: default
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; source:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repoURL: https://coroot.github.io/helm-charts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; chart: coroot-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; targetRevision: 0.4.2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; destination:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server: https://kubernetes.default.svc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: coroot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncPolicy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; automated:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; prune: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; selfHeal: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncOptions:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - CreateNamespace=true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note: I’m using version &lt;code&gt;0.4.2&lt;/code&gt;, which was current at the time of writing.&lt;br&gt;
To check available versions, use &lt;a href="https://github.com/coroot/helm-charts/pkgs/container/charts%2Fcoroot-operator" target="_blank" rel="noopener noreferrer"&gt;this GitHub link&lt;/a&gt; or Helm CLI:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;helm repo add coroot https://coroot.github.io/helm-charts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;helm repo update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;helm search repo coroot-operator --versions&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="create-manifest-for-installing-coroot-community-edition"&gt;Create Manifest for Installing Coroot Community Edition&lt;/h2&gt;
&lt;p&gt;Create file: &lt;code&gt;coroot/coroot.yaml&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: argoproj.io/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Application
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: coroot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; project: default
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; source:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repoURL: https://coroot.github.io/helm-charts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; chart: coroot-ce
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; targetRevision: 0.3.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; helm:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; parameters:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: clickhouse.shards
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; value: "2"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: clickhouse.replicas
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; value: "2"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: service.type
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; value: LoadBalancer
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; destination:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server: https://kubernetes.default.svc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: coroot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncPolicy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; automated:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; prune: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; selfHeal: true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This chart creates a minimal Coroot Custom Resource.&lt;br&gt;
I’ve added &lt;code&gt;service.type: LoadBalancer&lt;/code&gt; to expose a public IP.&lt;/p&gt;
&lt;p&gt;If you don’t use LoadBalancer, you’ll need to forward the Coroot port after installation:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl port-forward -n coroot service/coroot-coroot 8080:8080&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="create-argocd-application-manifest"&gt;Create ArgoCD Application Manifest&lt;/h2&gt;
&lt;p&gt;Since we manage our infrastructure via a GitHub repository, we need an ArgoCD Application that tracks changes in the &lt;code&gt;coroot/&lt;/code&gt; folder.&lt;/p&gt;
&lt;p&gt;Create file: &lt;code&gt;apps/argocd-coroot.yaml&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: argoproj.io/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Application
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: coroot-sync-app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; project: default
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; source:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repoURL: https://github.com/dbazhenov/percona-argocd-pg-coroot.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; targetRevision: main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; path: coroot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; destination:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server: https://kubernetes.default.svc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: coroot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncPolicy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; automated:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; prune: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; selfHeal: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncOptions:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - CreateNamespace=true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This lightweight app will monitor the folder and apply updates automatically if a change is detected (e.g. chart version bump).&lt;/p&gt;
&lt;h2 id="define-chart-installation-order"&gt;Define Chart Installation Order&lt;/h2&gt;
&lt;p&gt;We have two Application manifests: &lt;code&gt;operator.yaml&lt;/code&gt; and &lt;code&gt;coroot.yaml&lt;/code&gt;, and the operator must be installed first.&lt;/p&gt;
&lt;p&gt;Create &lt;code&gt;coroot/kustomization.yaml&lt;/code&gt; to specify resource order:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: kustomize.config.k8s.io/v1beta1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Kustomization
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;resources:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - operator.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - coroot.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="publish-manifests-to-github"&gt;Publish Manifests to GitHub&lt;/h2&gt;
&lt;p&gt;Check which files were changed:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add changes:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add .&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify staged files:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Commit:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "Installing Coroot Operator and Coroot with ArgoCD"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Push:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git push origin main&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="apply-argocd-application"&gt;Apply ArgoCD Application&lt;/h2&gt;
&lt;p&gt;Deploy the ArgoCD app that installs Coroot from our GitHub repository:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f apps/argocd-coroot.yaml -n argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Validate installation and sync:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-coroot-sync-app_hu_d9da7980b8ef31d3.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-coroot-sync-app_hu_d1e3781de9c86bc8.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-coroot-sync-app_hu_d5fb881b1069f790.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-coroot-sync-app.png" alt="GitOps - ArgoCD and Coroot" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We now see &lt;code&gt;coroot&lt;/code&gt;, &lt;code&gt;coroot-operator&lt;/code&gt;, and &lt;code&gt;coroot-sync-app&lt;/code&gt; deployed.&lt;/p&gt;
&lt;h2 id="access-coroot-ui"&gt;Access Coroot UI&lt;/h2&gt;
&lt;p&gt;Since we deployed Coroot using LoadBalancer, retrieve its external IP:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get svc -n coroot&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Open the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; address on port 8080.&lt;br&gt;
For example: &lt;code&gt;http://35.202.140.216:8080/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If you didn’t use LoadBalancer, run port-forward:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl port-forward -n coroot service/coroot-coroot 8080:8080&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then visit &lt;code&gt;http://localhost:8080&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You’ll be prompted to set an admin password on first login.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-welcome_hu_e5a5227526a9ab37.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-welcome_hu_6263ca8ac8ccabda.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-welcome_hu_17c070d92b90032a.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-welcome.png" alt="GitOps - ArgoCD and Coroot" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="exploring-coroot-ui"&gt;Exploring Coroot UI&lt;/h2&gt;
&lt;p&gt;On the home page, we see a list of the applications running in the cluster, along with their resource usage.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-home-dashboard_hu_19c1882d484471cf.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-home-dashboard_hu_fa34cad47b7230fc.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-home-dashboard_hu_41013580bea140af.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-home-dashboard.png" alt="GitOps - ArgoCD and Coroot - Home" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I increased the load on the PostgreSQL cluster using the Demo App to test observability.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-load_hu_e6e7de6b8f0d14f4.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-load_hu_a6ae4e8000dffd0e.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-load_hu_8dcb75a909611a2d.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-load.png" alt="GitOps - ArgoCD and Coroot - Demo App" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The PostgreSQL cluster dashboard offers several tabs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPU&lt;/li&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Instances&lt;/li&gt;
&lt;li&gt;Logs&lt;/li&gt;
&lt;li&gt;Profiling&lt;/li&gt;
&lt;li&gt;Tracing&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster_hu_4d649a08378d2d1a.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster_hu_6875265d603fa92d.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster_hu_c144b4e4c765ae76.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster.png" alt="GitOps - ArgoCD and Coroot - PG Cluster" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Coroot displays a visual map of service interactions — showing which app connects to the PostgreSQL cluster.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-map_hu_50412503e22d3551.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-map_hu_9f70cf4209e1dfbc.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-map_hu_fb80a6a605f0035e.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-map.png" alt="GitOps - ArgoCD and Coroot - PG Cluster" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Profiling&lt;/strong&gt; tab looks excellent and intuitive. Here’s the Demo App profiling view:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling_hu_2e9cfe6701a6f254.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling_hu_23528748fdc62126.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling_hu_6d91351cd2ccedd3.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling.png" alt="GitOps - ArgoCD and Coroot - Demo App Profiling" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I also triggered an intentional error in the demo app.&lt;br&gt;
Coroot correctly displayed it in both the home view and the app details page.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling_hu_2e9cfe6701a6f254.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling_hu_23528748fdc62126.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling_hu_6d91351cd2ccedd3.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-demo-profiling.png" alt="GitOps - ArgoCD and Coroot - Demo App Logs" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I especially liked the &lt;strong&gt;Logs&lt;/strong&gt; and &lt;strong&gt;Costs&lt;/strong&gt; sections in the sidebar — very well implemented.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-logs_hu_d4bb6db723fa13df.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-logs_hu_d753bf1ff228ccdc.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-logs_hu_5c35d225e551d55f.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-logs.png" alt="GitOps - ArgoCD and Coroot - Demo App Logs" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-costs_hu_2381727e8e34df66.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-costs_hu_1e37d17c340e011.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-costs_hu_ecfa20d83bd0892.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-costs.png" alt="GitOps - ArgoCD and Coroot - Demo App Costs" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="first-incident-storage-usage-in-postgresql-turns-yellow"&gt;First Incident: Storage Usage in PostgreSQL Turns Yellow&lt;/h2&gt;
&lt;p&gt;While exploring Coroot and the cluster, I increased the load on the PostgreSQL cluster using the Demo App.&lt;/p&gt;
&lt;p&gt;After a short while, I noticed that the Postgres disk was almost full: Coroot flagged its storage usage in yellow.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage_hu_c5a7d8ed00eca3ab.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage_hu_6fd7989d44e7f1da.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage_hu_bf845b4e99005994.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage.png" alt="GitOps - ArgoCD and Coroot - PG Storage" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I opened the cluster details and went to the &lt;strong&gt;Storage&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details_hu_d68091fe4936a452.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details_hu_ed95998788d09a8.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details_hu_23d67d75d90e808b.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details.png" alt="GitOps - ArgoCD and Coroot - PG Storage Details" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;By default, the &lt;code&gt;cr.yaml&lt;/code&gt; file allocates just 1Gi of disk space, which is fine for an initial test setup but fills up quickly under load.&lt;/p&gt;
&lt;p&gt;Let’s increase the disk size the GitOps way.&lt;/p&gt;
&lt;h2 id="increase-storage-size"&gt;Increase Storage Size&lt;/h2&gt;
&lt;p&gt;Open the file &lt;code&gt;postgres/cr.yaml&lt;/code&gt; and locate the section:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; dataVolumeClaimSpec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# storageClassName: standard
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; accessModes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ReadWriteOnce
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; resources:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; requests:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; storage: 1Gi&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Change &lt;code&gt;storage&lt;/code&gt; from &lt;code&gt;1Gi&lt;/code&gt; to &lt;code&gt;5Gi&lt;/code&gt;.&lt;/p&gt;
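&lt;p&gt;After the edit, the claim spec should request the larger volume:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-ex2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-ex2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  resources:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    requests:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;      storage: 5Gi&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;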
&lt;p&gt;Note: Backup volumes (pgBackRest) are also enabled by default and set to &lt;code&gt;1Gi&lt;/code&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; manual:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repoName: repo1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; options:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - --type=full
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# initialDelaySeconds: 120
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repos:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: repo1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; schedules:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; full: "0 0 * * 6"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# differential: "0 1 * * 1-6"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# incremental: "0 1 * * 1-6"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volume:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumeClaimSpec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# storageClassName: standard
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; accessModes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ReadWriteOnce
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; resources:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; requests:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; storage: 1Gi&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Increase this storage to &lt;code&gt;5Gi&lt;/code&gt; as well.&lt;/p&gt;
&lt;p&gt;Save changes to &lt;code&gt;cr.yaml&lt;/code&gt;, then commit and push to the GitHub repository.&lt;/p&gt;
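&lt;p&gt;For example (the commit message here is just an illustration):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-ex3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-ex3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add postgres/cr.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "Increase PostgreSQL and pgBackRest volumes to 5Gi"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git push origin main&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;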
&lt;p&gt;ArgoCD will automatically apply the changes. Pure GitOps magic.&lt;/p&gt;
&lt;p&gt;Check the result in Coroot: everything looks great. The disk has been increased to &lt;code&gt;5Gi&lt;/code&gt; and the issue is resolved.&lt;/p&gt;
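&lt;p&gt;You can also verify the new volume size from the CLI. Assuming the cluster runs in the &lt;code&gt;postgres&lt;/code&gt; namespace, check the PersistentVolumeClaims:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-ex4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-ex4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get pvc -n postgres&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;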
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details-result_hu_3ada1633dcf9a61b.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details-result_hu_6161ad5c0039ad47.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details-result_hu_31967d7a0ec33710.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-details-result.png" alt="GitOps - ArgoCD and Coroot - PG Storage Details Results" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-home-result_hu_75b2bce9c4bf908d.png 480w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-home-result_hu_ed09977afbd8979a.png 768w, https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-home-result_hu_b39a70f347ba25be.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-coroot-cluster-storage-home-result.png" alt="GitOps - ArgoCD and Coroot - PG Storage Results" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We’ve installed and tested a solid monitoring tool, and it really makes a difference.&lt;/p&gt;
&lt;p&gt;Across this 4-part series, we walked through the GitOps journey step by step:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-1-getting-started-with-argocd-and-github/"&gt;Part 1&lt;/a&gt; - Created a Kubernetes cluster, installed ArgoCD, and set up a GitHub repository.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-2-deploying-postgresql-with-gitops-and-argocd/"&gt;Part 2&lt;/a&gt; - Deployed a PostgreSQL cluster using Percona Operator for PostgreSQL.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-3-deploying-a-load-generator-and-connecting-to-postgresql/"&gt;Part 3&lt;/a&gt; - Deployed a demo app via ArgoCD using Helm.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installed and tested &lt;strong&gt;Coroot&lt;/strong&gt;, an excellent open-source observability tool.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Along the way, we managed the PG cluster entirely through GitHub and ArgoCD: scaling replicas, creating users, resizing volumes, configuring access, and more.&lt;/p&gt;
&lt;p&gt;Thank you for reading — I hope this series was helpful.&lt;/p&gt;
&lt;p&gt;The project files are available in my repository &lt;a href="https://github.com/dbazhenov/percona-argocd-pg-coroot" target="_blank" rel="noopener noreferrer"&gt;https://github.com/dbazhenov/percona-argocd-pg-coroot&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I’d love to hear your questions, feedback, and suggestions for improvement.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>PostgreSQL</category>
      <category>Coroot</category>
      <category>GitOps</category>
      <category>ArgoCD</category>
      <media:thumbnail url="https://percona.community/blog/2025/07/gitops-part-4_hu_2013b3b0cf1fac01.jpg"/>
      <media:content url="https://percona.community/blog/2025/07/gitops-part-4_hu_df439a8a73d324e6.jpg" medium="image"/>
    </item>
    <item>
      <title>GitOps Journey: Part 3 – Deploying a Load Generator and Connecting to PostgreSQL</title>
      <link>https://percona.community/blog/2025/07/22/gitops-journey-part-3-deploying-a-load-generator-and-connecting-to-postgresql/</link>
      <guid>https://percona.community/blog/2025/07/22/gitops-journey-part-3-deploying-a-load-generator-and-connecting-to-postgresql/</guid>
      <pubDate>Tue, 22 Jul 2025 00:00:50 UTC</pubDate>
      <description>We’ll deploy a demo application into the Kubernetes cluster using ArgoCD to simulate load on the PostgreSQL cluster.</description>
      <content:encoded>&lt;p&gt;We’ll deploy a demo application into the Kubernetes cluster using ArgoCD to simulate load on the PostgreSQL cluster.&lt;/p&gt;
&lt;p&gt;This article is part of a series; in the previous parts, we:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-1-getting-started-with-argocd-and-github/"&gt;Part 1&lt;/a&gt; - Prepared the environment, installed ArgoCD, and set up a GitHub repository.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-2-deploying-postgresql-with-gitops-and-argocd/"&gt;Part 2&lt;/a&gt; - Installed the Percona Operator for PostgreSQL and created a Postgres cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The application is a custom Go-based service that generates traffic for PostgreSQL, MongoDB, or MySQL.&lt;/p&gt;
&lt;p&gt;It uses a dataset of GitHub repositories and pull requests, and mimics real-world operations like fetching, creating, updating, and deleting records.&lt;br&gt;
Load intensity is configurable through a browser-based control panel.&lt;/p&gt;
&lt;p&gt;We’ll install it using Helm, tracked and deployed via ArgoCD.&lt;/p&gt;
&lt;p&gt;Reference repository: &lt;a href="https://github.com/dbazhenov/github-stat" target="_blank" rel="noopener noreferrer"&gt;github-stat&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="create-the-argocd-application-manifest"&gt;Create the ArgoCD Application Manifest&lt;/h2&gt;
&lt;p&gt;Create a file named &lt;code&gt;argocd-demo-app.yaml&lt;/code&gt; in the &lt;code&gt;apps/&lt;/code&gt; directory.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: argoproj.io/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Application
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: demo-app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; project: default
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; source:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repoURL: https://github.com/dbazhenov/github-stat
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; targetRevision: main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; path: k8s/helm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; destination:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server: https://kubernetes.default.svc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: demo-app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncPolicy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; automated:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; prune: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; selfHeal: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncOptions:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - CreateNamespace=true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This will install the Helm chart from&lt;br&gt;
&lt;code&gt;https://github.com/dbazhenov/github-stat/tree/main/k8s/helm&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;By default, the service is configured as &lt;code&gt;LoadBalancer&lt;/code&gt;, making it accessible from the internet.&lt;/p&gt;
&lt;p&gt;To switch to &lt;code&gt;NodePort&lt;/code&gt; (if needed), override the Helm value:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; helm:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; parameters:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: controlPanelService.type
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; value: NodePort&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We’ll keep the default settings in this example.&lt;/p&gt;
&lt;h2 id="push-the-application-manifest-to-github"&gt;Push the Application Manifest to GitHub&lt;/h2&gt;
&lt;p&gt;Track and commit your changes:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add .&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "Installing Demo Application in ArgoCD by HELM"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git push origin main &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected Git output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) git status
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;On branch main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Your branch is up to date with 'origin/main'.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Untracked files:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (use "git add &lt;file&gt;..." to include in what will be committed)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; apps/argocd-demo-app.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;nothing added to commit but untracked files present (use "git add" to track)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git add .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git commit -m "Installing Demo Application in ArgoCD by HELM"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[main 03ce175] Installing Demo Application in ArgoCD by HELM
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 file changed, 20 insertions(+)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; create mode 100644 apps/argocd-demo-app.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) git push origin main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Enumerating objects: 6, done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Counting objects: 100% (6/6), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Delta compression using up to 10 threads
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Compressing objects: 100% (4/4), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Writing objects: 100% (4/4), 686 bytes | 686.00 KiB/s, done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;To github.com:dbazhenov/percona-argocd-pg-coroot.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 6b2dc98..03ce175 main -&gt; main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="apply-the-argocd-application"&gt;Apply the ArgoCD Application&lt;/h2&gt;
&lt;p&gt;Deploy the app via:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f apps/argocd-demo-app.yaml -n argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;ArgoCD will install the app and begin tracking its Helm chart.&lt;/p&gt;
&lt;h2 id="validate-the-deployment"&gt;Validate the Deployment&lt;/h2&gt;
&lt;p&gt;Confirm the app status in ArgoCD UI:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-github-argo-demo-app_hu_f7e82c9783a43d91.png 480w, https://percona.community/blog/2025/07/gitops-github-argo-demo-app_hu_8ecf80945d756a3a.png 768w, https://percona.community/blog/2025/07/gitops-github-argo-demo-app_hu_e5087321ad96287d.png 1400w"
src="https://percona.community/blog/2025/07/gitops-github-argo-demo-app.png" alt="GitOps - Percona Operator for Postgres and PG Cluster" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Check running pods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get pods -n demo-app&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected pods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) kubectl get pods -n demo-app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;demo-app-dataset-6d886f67-j648w 1/1 Running 0 2m52s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;demo-app-load-577cff97c9-d8j99 1/1 Running 0 2m52s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;demo-app-valkey-74989c9bf7-gjp4x 1/1 Running 0 2m52s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;demo-app-web-5b98d4c65c-xmkq9 1/1 Running 0 2m52s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;demo-app-dataset - loads dataset&lt;/li&gt;
&lt;li&gt;demo-app-load - generates traffic&lt;/li&gt;
&lt;li&gt;demo-app-valkey - Redis-compatible DB backend&lt;/li&gt;
&lt;li&gt;demo-app-web - UI dashboard&lt;/li&gt;
&lt;/ul&gt;
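&lt;p&gt;If you script this check, the health criterion is simply that every pod reports &lt;code&gt;Running&lt;/code&gt; with all containers ready. A small Python sketch over sample data shaped like the output above (made-up values, not live cluster state):&lt;/p&gt;

```python
# Sample data mirroring the "kubectl get pods -n demo-app" output above.
pods = [
    {"name": "demo-app-dataset-6d886f67-j648w", "ready": "1/1", "status": "Running"},
    {"name": "demo-app-load-577cff97c9-d8j99", "ready": "1/1", "status": "Running"},
    {"name": "demo-app-valkey-74989c9bf7-gjp4x", "ready": "1/1", "status": "Running"},
    {"name": "demo-app-web-5b98d4c65c-xmkq9", "ready": "1/1", "status": "Running"},
]

def all_healthy(pods: list) -> bool:
    """True when every pod is Running and all its containers are ready."""
    def ok(pod):
        ready, total = pod["ready"].split("/")
        return pod["status"] == "Running" and ready == total
    return all(ok(p) for p in pods)

print(all_healthy(pods))  # prints: True
```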
&lt;h2 id="open-the-application-dashboard"&gt;Open the Application Dashboard&lt;/h2&gt;
&lt;p&gt;Retrieve the external IP:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get svc -n demo-app&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Find the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; of &lt;code&gt;demo-app-web-service&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Sample output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) kubectl get svc -n demo-app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;demo-app-valkey-service ClusterIP 34.118.235.203 &amp;lt;none&amp;gt; 6379/TCP 4m59s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;demo-app-web-service LoadBalancer 34.118.232.144 34.28.221.107 80:31308/TCP 4m59s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Access the app in your browser:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-demo-app-ui_hu_a5216002601d44a.png 480w, https://percona.community/blog/2025/07/gitops-demo-app-ui_hu_ecc6617f019380e0.png 768w, https://percona.community/blog/2025/07/gitops-demo-app-ui_hu_43937f9d77a63202.png 1400w"
src="https://percona.community/blog/2025/07/gitops-demo-app-ui.png" alt="GitOps - ArgoCD Demo App UI" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Navigate to the &lt;strong&gt;Settings&lt;/strong&gt; tab to configure a PostgreSQL connection.&lt;/p&gt;
&lt;h2 id="postgresql-credentials-setup"&gt;PostgreSQL Credentials Setup&lt;/h2&gt;
&lt;p&gt;The Percona Operator has already done the following (see &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/users.html" target="_blank" rel="noopener noreferrer"&gt;Application and system users&lt;/a&gt;):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Created schema and database &lt;code&gt;cluster1&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Created user &lt;code&gt;cluster1&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Stored credentials in &lt;code&gt;cluster1-pguser-cluster1&lt;/code&gt; secret&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Extract the password:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get secret cluster1-pguser-cluster1 -n postgres-operator --template='{{.data.password | base64decode}}{{"\n"}}'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
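&lt;p&gt;For reference, the &lt;code&gt;base64decode&lt;/code&gt; template function above performs a plain Base64 decode of the Secret value, since Kubernetes stores Secret data Base64-encoded. A minimal Python sketch of the same transformation (the sample value is made up, not a real secret):&lt;/p&gt;

```python
import base64

def decode_secret_value(data: str) -> str:
    """Decode a base64-encoded Secret field, as kubectl's base64decode template does."""
    return base64.b64decode(data).decode()

# Hypothetical encoded value, produced here purely for illustration.
encoded = base64.b64encode(b"s3cr3t-p4ss").decode()
print(decode_secret_value(encoded))  # prints: s3cr3t-p4ss
```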
&lt;p&gt;Let’s connect to the database from the demo application using this user and the &lt;code&gt;cluster1-pgbouncer.postgres-operator.svc&lt;/code&gt; host.&lt;/p&gt;
&lt;p&gt;In the &lt;strong&gt;Connection String&lt;/strong&gt; field, enter:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;user=cluster1 password='[PASSWORD]' dbname=cluster1 host=cluster1-pgbouncer.postgres-operator.svc port=5432&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
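&lt;p&gt;The string above follows libpq’s keyword/value format. If you assemble it in code, values containing spaces or quotes must be single-quoted; a simplified Python sketch (an illustrative helper, not a full libpq implementation):&lt;/p&gt;

```python
def libpq_kv(**params) -> str:
    """Build a libpq keyword/value connection string.

    Values containing spaces or single quotes are wrapped in single
    quotes with backslash escaping, as libpq expects (simplified sketch).
    """
    parts = []
    for key, value in params.items():
        value = str(value)
        if " " in value or "'" in value or value == "":
            value = "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"
        parts.append(f"{key}={value}")
    return " ".join(parts)

print(libpq_kv(
    user="cluster1",
    password="[PASSWORD]",  # placeholder; substitute the decoded secret value
    dbname="cluster1",
    host="cluster1-pgbouncer.postgres-operator.svc",
    port=5432,
))
```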
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-demo-app-ui-connect_hu_af293f7503ec6a79.png 480w, https://percona.community/blog/2025/07/gitops-demo-app-ui-connect_hu_44e6451a426458aa.png 768w, https://percona.community/blog/2025/07/gitops-demo-app-ui-connect_hu_d38c2f7dd717129a.png 1400w"
src="https://percona.community/blog/2025/07/gitops-demo-app-ui-connect.png" alt="GitOps - ArgoCD Demo App UI - Connect" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The connection has been created successfully.&lt;/p&gt;
&lt;p&gt;To start generating load, import the dataset using the &lt;strong&gt;Import Dataset&lt;/strong&gt; button.&lt;/p&gt;
&lt;h2 id="dataset-import-error-create-schema-denied"&gt;Dataset Import Error: Create Schema Denied&lt;/h2&gt;
&lt;p&gt;During import, the app tries to create a schema.&lt;br&gt;
By default, pgBouncer limits user privileges, preventing this action.&lt;/p&gt;
&lt;p&gt;Percona &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/users.html#superuser-and-pgbouncer" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; suggests enabling &lt;code&gt;proxy.pgBouncer.exposeSuperusers&lt;/code&gt; and creating a privileged user.&lt;/p&gt;
&lt;p&gt;We’ll handle this via GitOps. Tracking the change in Git pays off here: these are security-sensitive settings, and a versioned history makes it easy to review them later and disable them when they’re no longer needed.&lt;/p&gt;
&lt;h2 id="define-a-new-postgresql-user"&gt;Define a New PostgreSQL User&lt;/h2&gt;
&lt;p&gt;We’ll edit &lt;code&gt;postgres/cr.yaml&lt;/code&gt; to add a new user and enable the &lt;code&gt;proxy.pgBouncer.exposeSuperusers&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;First, find the &lt;code&gt;users&lt;/code&gt; section, uncomment it, and add the user definition:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; users:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: daniil
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; databases:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - demo
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; options: "SUPERUSER"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; password:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type: ASCII
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; secretName: "daniil-credentials"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note: In production, use scoped permissions like &lt;code&gt;"LOGIN CREATE CREATEDB"&lt;/code&gt; rather than &lt;code&gt;SUPERUSER&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Next, find the &lt;code&gt;proxy.pgBouncer.exposeSuperusers&lt;/code&gt; setting and set it to &lt;code&gt;true&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; proxy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pgBouncer:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: docker.io/percona/percona-pgbouncer:1.24.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; exposeSuperusers: true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Commit and push:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add .&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "Postgres cluster: Creating a new user and pgBouncer.exposeSuperusers"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git push origin main&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After a couple of minutes, ArgoCD will synchronize the changes, and the Percona Operator will create the user and apply the new configuration.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-pg-new-user_hu_5517ec3b89a13b26.png 480w, https://percona.community/blog/2025/07/gitops-argocd-pg-new-user_hu_132730d99fb3819b.png 768w, https://percona.community/blog/2025/07/gitops-argocd-pg-new-user_hu_799bfdf42374359f.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-pg-new-user.png" alt="GitOps - ArgoCD Demo App UI" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="connect-with-the-new-user"&gt;Connect With the New User&lt;/h2&gt;
&lt;p&gt;Get the password:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get secret daniil-credentials -n postgres-operator --template='{{.data.password | base64decode}}{{"\n"}}'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace the connection string in the demo application. In my case, it looked like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;user=daniil password='iKj:e[wT3*g]OF5+f' dbname=dataset host=cluster1-pgbouncer.postgres-operator.svc port=5432&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocs-demo-app-new-user_hu_e28b8a3338806aa3.png 480w, https://percona.community/blog/2025/07/gitops-argocs-demo-app-new-user_hu_2f0a7cf452382a41.png 768w, https://percona.community/blog/2025/07/gitops-argocs-demo-app-new-user_hu_d7f241e5df838cdf.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocs-demo-app-new-user.png" alt="GitOps - ArgoCD Demo App UI - Connection" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Click the &lt;strong&gt;Import Dataset&lt;/strong&gt; button and wait a few minutes until the import shows &lt;strong&gt;Done&lt;/strong&gt; status in the &lt;strong&gt;Dataset&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocs-demo-app-dataset_hu_b05be9189ce8a29c.png 480w, https://percona.community/blog/2025/07/gitops-argocs-demo-app-dataset_hu_feaa2ca204a6ed74.png 768w, https://percona.community/blog/2025/07/gitops-argocs-demo-app-dataset_hu_bd097498f5e03155.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocs-demo-app-dataset.png" alt="GitOps - ArgoCD Demo App UI - Connection" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="enable-load-generation"&gt;Enable Load Generation&lt;/h2&gt;
&lt;p&gt;Activate the load generator:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Toggle &lt;strong&gt;Enable Load&lt;/strong&gt; in the connection settings&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Update Connection&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-demo-app-enable-load_hu_8e0c4a5db276fb85.png 480w, https://percona.community/blog/2025/07/gitops-argocd-demo-app-enable-load_hu_52432bdb768937d6.png 768w, https://percona.community/blog/2025/07/gitops-argocd-demo-app-enable-load_hu_a82212f4c1e5503a.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-demo-app-enable-load.png" alt="GitOps - ArgoCD Demo App UI - Enable Load" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Open the &lt;strong&gt;Load Generator Control Panel&lt;/strong&gt; and adjust sliders and toggles as needed:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-demo-app-panel_hu_8f57f1fa04e0ebdf.png 480w, https://percona.community/blog/2025/07/gitops-argocd-demo-app-panel_hu_7a50876dac856935.png 768w, https://percona.community/blog/2025/07/gitops-argocd-demo-app-panel_hu_255f70a0ce31e068.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-demo-app-panel.png" alt="GitOps - ArgoCD Demo App UI - Load Generator" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this part, we:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deployed a demo application via Helm in ArgoCD&lt;/li&gt;
&lt;li&gt;Connected it to our PostgreSQL cluster&lt;/li&gt;
&lt;li&gt;Managed PostgreSQL users and access via GitHub and GitOps&lt;/li&gt;
&lt;li&gt;Imported a dataset and activated the traffic generator through the web UI&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In &lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-4-observability-and-monitoring-with-coroot-in-kubernetes/"&gt;Part 4&lt;/a&gt;, we’ll deploy &lt;strong&gt;Coroot&lt;/strong&gt; for observability and profiling.&lt;br&gt;
It’s an impressive tool for diagnosing behavior across services in the Kubernetes cluster.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>PostgreSQL</category>
      <category>Opensource</category>
      <category>GitOps</category>
      <category>ArgoCD</category>
      <media:thumbnail url="https://percona.community/blog/2025/07/gitops-part-3_hu_89d6e1d2addde317.jpg"/>
      <media:content url="https://percona.community/blog/2025/07/gitops-part-3_hu_c3ba448f33f9a74f.jpg" medium="image"/>
    </item>
    <item>
      <title>GitOps Journey: Part 2 – Deploying PostgreSQL with GitOps and ArgoCD</title>
      <link>https://percona.community/blog/2025/07/22/gitops-journey-part-2-deploying-postgresql-with-gitops-and-argocd/</link>
      <guid>https://percona.community/blog/2025/07/22/gitops-journey-part-2-deploying-postgresql-with-gitops-and-argocd/</guid>
      <pubDate>Tue, 22 Jul 2025 00:00:30 UTC</pubDate>
      <description>We’re now ready to deploy PostgreSQL 17 using GitOps — with ArgoCD, GitHub, and the Percona Operator for PostgreSQL.</description>
      <content:encoded>&lt;p&gt;We’re now ready to deploy &lt;strong&gt;PostgreSQL 17&lt;/strong&gt; using GitOps — with ArgoCD, GitHub, and the Percona Operator for PostgreSQL.&lt;/p&gt;
&lt;p&gt;If you’re a DBA, developer, DevOps engineer, or engineering manager, this part focuses on GitOps in action: deploying and managing a real database cluster using declarative infrastructure.&lt;/p&gt;
&lt;p&gt;In &lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-1-getting-started-with-argocd-and-github/"&gt;Part 1&lt;/a&gt;, we set up the Kubernetes environment and installed ArgoCD.&lt;br&gt;
Now it’s time to define and launch the PostgreSQL cluster — fully versioned and synced through Git.&lt;/p&gt;
&lt;p&gt;We’ll follow the official &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/gke.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator documentation&lt;/a&gt; and reference the &lt;a href="https://github.com/percona/percona-postgresql-operator" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; to build out a production-grade setup.&lt;/p&gt;
&lt;h2 id="preparing-the-environment"&gt;Preparing the Environment&lt;/h2&gt;
&lt;p&gt;There are multiple ways to install the Percona Operator and create a PostgreSQL cluster.&lt;br&gt;
We’ll use the simplest and most GitOps-friendly approach:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Deploy the operator using &lt;code&gt;deploy/bundle.yaml&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy the cluster using &lt;code&gt;deploy/cr.yaml&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Source files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;https://github.com/percona/percona-postgresql-operator/blob/main/deploy/bundle.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;https://github.com/percona/percona-postgresql-operator/blob/main/deploy/cr.yaml&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="project-structure"&gt;Project Structure&lt;/h2&gt;
&lt;p&gt;Repository structure can vary depending on your services and infrastructure scale.&lt;br&gt;
For this series, we’ll use:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;postgres/&lt;/code&gt; → Contains all manifests related to PostgreSQL: the operator, clusters, backups&lt;/li&gt;
&lt;li&gt;&lt;code&gt;apps/&lt;/code&gt; → Contains ArgoCD application manifests that track changes in the repository&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You’re free to choose a different structure. Just ensure all paths are correctly referenced in ArgoCD.&lt;/p&gt;
&lt;h2 id="creating-the-postgres-directory-and-saving-manifests"&gt;Creating the Postgres Directory and Saving Manifests&lt;/h2&gt;
&lt;p&gt;You can manually download the files from GitHub or automate it via CLI:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir postgres&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -o postgres/bundle.yaml https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.7.0/deploy/bundle.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -o postgres/cr.yaml https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.7.0/deploy/cr.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can also separately clone &lt;a href="https://github.com/percona/percona-postgresql-operator" target="_blank" rel="noopener noreferrer"&gt;the operator repository&lt;/a&gt; and grab the necessary files from there.&lt;/p&gt;
&lt;h2 id="creating-the-argocd-application-manifest"&gt;Creating the ArgoCD Application Manifest&lt;/h2&gt;
&lt;p&gt;This ArgoCD application will track the &lt;code&gt;postgres/&lt;/code&gt; directory and automatically sync changes from GitHub.&lt;/p&gt;
&lt;p&gt;Create the file: &lt;code&gt;apps/argocd-postgres.yaml&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Content:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: argoproj.io/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Application
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: postgres
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; project: default
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; source:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; repoURL: https://github.com/dbazhenov/percona-argocd-pg-coroot.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; targetRevision: main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; path: postgres
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; destination:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server: https://kubernetes.default.svc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; namespace: postgres-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncPolicy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; automated:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; prune: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; selfHeal: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; syncOptions:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - CreateNamespace=true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ServerSideApply=true &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can also create this app manually via the ArgoCD UI or CLI, but using a manifest aligns better with GitOps principles.&lt;/p&gt;
&lt;p&gt;Double-check your:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;repoURL&lt;/code&gt; → matches your GitHub repository&lt;/li&gt;
&lt;li&gt;&lt;code&gt;path&lt;/code&gt; → corresponds to your PostgreSQL manifest directory&lt;/li&gt;
&lt;li&gt;&lt;code&gt;namespace&lt;/code&gt; → targets the correct namespace for operator and cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="managing-argocd-sync-order-with-waves"&gt;Managing ArgoCD Sync Order with Waves&lt;/h2&gt;
&lt;p&gt;ArgoCD applies manifests based on &lt;code&gt;sync-wave&lt;/code&gt; annotations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The operator (&lt;code&gt;bundle.yaml&lt;/code&gt;) should be applied first&lt;/li&gt;
&lt;li&gt;The cluster (&lt;code&gt;cr.yaml&lt;/code&gt;) comes second&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Add these annotations:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In &lt;code&gt;bundle.yaml&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; annotations:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; argocd.argoproj.io/sync-wave: "1"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;In &lt;code&gt;cr.yaml&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: cluster1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; annotations:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; argocd.argoproj.io/sync-wave: "5"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This ensures a stable deployment sequence.&lt;/p&gt;
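&lt;p&gt;Conceptually, ArgoCD sorts resources by the integer value of the sync-wave annotation (resources without the annotation default to wave 0) and applies lower waves first. A small Python sketch of that ordering, using simplified stand-ins for the two manifests:&lt;/p&gt;

```python
# Illustrative sketch of sync-wave ordering, not ArgoCD's actual code.
WAVE_ANNOTATION = "argocd.argoproj.io/sync-wave"

def sync_wave(manifest: dict) -> int:
    """Read the sync-wave annotation; resources without one default to wave 0."""
    annotations = manifest.get("metadata", {}).get("annotations", {})
    return int(annotations.get(WAVE_ANNOTATION, "0"))

# Simplified stand-ins for bundle.yaml (wave 1) and cr.yaml (wave 5).
manifests = [
    {"kind": "PerconaPGCluster",
     "metadata": {"name": "cluster1",
                  "annotations": {WAVE_ANNOTATION: "5"}}},
    {"kind": "Deployment",
     "metadata": {"name": "operator",
                  "annotations": {WAVE_ANNOTATION: "1"}}},
]

for m in sorted(manifests, key=sync_wave):
    print(m["kind"])  # the operator Deployment first, then the cluster
```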
&lt;p&gt;Later in the series (e.g. when installing Coroot), we’ll use a more advanced method: defining sync order via &lt;code&gt;kustomization.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="reviewing-cluster-configuration"&gt;Reviewing Cluster Configuration&lt;/h2&gt;
&lt;p&gt;Before applying the manifests, review and adjust your cluster settings in &lt;code&gt;cr.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Key defaults:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;name: cluster1&lt;/code&gt; → Cluster name&lt;/li&gt;
&lt;li&gt;&lt;code&gt;postgresVersion: 17&lt;/code&gt; → PostgreSQL version&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To keep the cluster lightweight and test horizontal scaling later, reduce replicas to 1:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;instances:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: instance1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proxy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pgBouncer:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can also configure resource limits, disk sizes, backups, and users in this file.&lt;/p&gt;
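For example, CPU/memory limits and the data volume size are set per instance set in `cr.yaml`. The values below are illustrative, not sizing recommendations:

```yaml
# Fragment of cr.yaml: resource limits and storage for one instance set
instances:
  - name: instance1
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
    dataVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```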
&lt;h2 id="publishing-the-configuration-to-github"&gt;Publishing the Configuration to GitHub&lt;/h2&gt;
&lt;p&gt;Verify your repo status:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add files:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add .&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Review staged files:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected result:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git status
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;On branch main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Your branch is up to date with 'origin/main'.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Untracked files:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (use "git add &lt;file&gt;..." to include in what will be committed)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; apps/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; postgres/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;nothing added to commit but untracked files present (use "git add" to track)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git add .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git status
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;On branch main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Your branch is up to date with 'origin/main'.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Changes to be committed:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (use "git restore --staged &lt;file&gt;..." to unstage)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; new file: apps/argocd-postgres.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; new file: postgres/bundle.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; new file: postgres/cr.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Commit:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "Initial configuration of a Postgres cluster using Percona Operator for Postgres and ArgoCD"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Push to GitHub:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git push origin main&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify that the files appear correctly in the repository on GitHub.&lt;/p&gt;

&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-github-pg-init_hu_403a638106827333.png 480w, https://percona.community/blog/2025/07/gitops-github-pg-init_hu_dc21e446152715f0.png 768w, https://percona.community/blog/2025/07/gitops-github-pg-init_hu_da3654d96e1f48a4.png 1400w"
src="https://percona.community/blog/2025/07/gitops-github-pg-init.png" alt="GitOps - Percona Operator for Postgres and PG Cluster" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="applying-the-argocd-app-manifest"&gt;Applying the ArgoCD App Manifest&lt;/h2&gt;
&lt;p&gt;To initiate the deployment, apply the previously created manifest:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f apps/argocd-postgres.yaml -n argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After a minute or two, the ArgoCD dashboard should display the synced PostgreSQL application and the deployed cluster.&lt;/p&gt;
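If you prefer the terminal over the dashboard, sync and health status can also be checked with kubectl (the Application name is whatever `metadata.name` you set in `apps/argocd-postgres.yaml`):

```shell
# List ArgoCD Applications with their sync and health status
kubectl -n argocd get applications.argoproj.io
```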
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-pg-app-sync_hu_746089d5ff24eebb.png 480w, https://percona.community/blog/2025/07/gitops-argocd-pg-app-sync_hu_51470eb9b162b0a8.png 768w, https://percona.community/blog/2025/07/gitops-argocd-pg-app-sync_hu_1dee0f6843cceb4a.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-pg-app-sync.png" alt="GitOps - ArgoCD app - Postgres" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-pg-app-map_hu_b364f85c56a6b115.png 480w, https://percona.community/blog/2025/07/gitops-argocd-pg-app-map_hu_8161680482781052.png 768w, https://percona.community/blog/2025/07/gitops-argocd-pg-app-map_hu_9d188ec511f1c786.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-pg-app-map.png" alt="GitOps - ArgoCD app - Postgres - map" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="verifying-cluster-status"&gt;Verifying Cluster Status&lt;/h2&gt;
&lt;p&gt;Check the running pods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get pods -n postgres-operator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected results:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One PostgreSQL instance pod&lt;/li&gt;
&lt;li&gt;One pgBouncer pod&lt;/li&gt;
&lt;li&gt;The operator, backup, and repo-host pods created alongside the cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) kubectl get pods -n postgres-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-backup-5g98-5b29w 0/1 Completed 0 27m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-instance1-22vd-0 4/4 Running 0 28m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-pgbouncer-649b7cf845-fgs9l 2/2 Running 0 28m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-repo-host-0 2/2 Running 0 28m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-postgresql-operator-79f75d5f76-xjndr 1/1 Running 0 29m&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="scaling-the-cluster-via-gitops"&gt;Scaling the Cluster via GitOps&lt;/h2&gt;
&lt;p&gt;Let’s test the GitOps model by updating the cluster configuration to increase replicas to 3.&lt;/p&gt;
&lt;p&gt;Edit &lt;code&gt;postgres/cr.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; instances:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: instance1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; proxy:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pgBouncer:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Save the changes and push them:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add . &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "Postgres cluster: Horizontal scaling from 1 replica to 3" &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git push origin main&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;ArgoCD will automatically detect and apply this update.&lt;/p&gt;
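Automatic detection assumes the Application manifest enables automated sync; in ArgoCD that is the `syncPolicy` block. A typical fragment looks like this (whether `prune` and `selfHeal` are enabled depends on how you wrote `apps/argocd-postgres.yaml` earlier):

```yaml
# Fragment of an ArgoCD Application spec enabling automated sync
syncPolicy:
  automated:
    prune: true     # delete resources that were removed from Git
    selfHeal: true  # revert manual changes made directly in the cluster
```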
&lt;p&gt;Expected results:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) git status
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;On branch main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Your branch is up to date with 'origin/main'.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Changes not staged for commit:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (use "git add &lt;file&gt;..." to update what will be committed)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (use "git restore &lt;file&gt;..." to discard changes in working directory)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; modified: postgres/cr.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;no changes added to commit (use "git add" and/or "git commit -a")
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git add .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ✗ git commit -m "Postgres cluster: Horizontal scaling from 1 replica to 3"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[main 6b2dc98] Postgres cluster: Horizontal scaling from 1 replica to 3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1 file changed, 2 insertions(+), 2 deletions(-)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) git push origin main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Enumerating objects: 7, done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Counting objects: 100% (7/7), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Delta compression using up to 10 threads
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Compressing objects: 100% (4/4), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Writing objects: 100% (4/4), 435 bytes | 435.00 KiB/s, done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Total 4 (delta 2), reused 0 (delta 0), pack-reused 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;To github.com:dbazhenov/percona-argocd-pg-coroot.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 81ae9e8..6b2dc98 main -&gt; main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected result in the GitHub repository:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-github-scale_hu_8b1f2c5c1536e848.png 480w, https://percona.community/blog/2025/07/gitops-github-scale_hu_acf35b98026c2aa0.png 768w, https://percona.community/blog/2025/07/gitops-github-scale_hu_b0dc2eb9f46961e0.png 1400w"
src="https://percona.community/blog/2025/07/gitops-github-scale.png" alt="GitOps - ArgoCD app - Postgres scale - GitHub" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="confirming-the-update-in-argocd"&gt;Confirming the Update in ArgoCD&lt;/h2&gt;
&lt;p&gt;In the ArgoCD UI, you should now see the application synced to the latest commit with the updated replica count.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-github-scale-argo_hu_3a0e618bada99ba6.png 480w, https://percona.community/blog/2025/07/gitops-github-scale-argo_hu_d5618d1ba625e2d7.png 768w, https://percona.community/blog/2025/07/gitops-github-scale-argo_hu_be9d97706a13a583.png 1400w"
src="https://percona.community/blog/2025/07/gitops-github-scale-argo.png" alt="GitOps - ArgoCD apps - Postgres scale - Argo" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Verify pods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get pods -n postgres-operator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected result — 3 PostgreSQL pods and 3 pgBouncer pods.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) kubectl get pods -n postgres-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-backup-5g98-5b29w 0/1 Completed 0 38m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-instance1-22vd-0 4/4 Running 0 39m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-instance1-q2r4-0 4/4 Running 0 3m38s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-instance1-r4s2-0 4/4 Running 0 3m39s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-pgbouncer-649b7cf845-9cppx 2/2 Running 0 3m37s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-pgbouncer-649b7cf845-fgs9l 2/2 Running 0 39m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-pgbouncer-649b7cf845-tkf9z 2/2 Running 0 3m36s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster1-repo-host-0 2/2 Running 0 39m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-postgresql-operator-79f75d5f76-xjndr 1/1 Running 0 40m&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="whats-next"&gt;What’s Next&lt;/h2&gt;
&lt;p&gt;We’ve successfully installed the Percona Operator for PostgreSQL and deployed a cluster using GitHub and ArgoCD.&lt;/p&gt;
&lt;p&gt;We also verified GitOps functionality by scaling the cluster through Git-controlled configuration.&lt;br&gt;
All changes are tracked, versioned, and declarative — a solid foundation for modern infrastructure management.&lt;/p&gt;
&lt;p&gt;To continue experimenting:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/connect.html" target="_blank" rel="noopener noreferrer"&gt;Connect to the Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/users.html" target="_blank" rel="noopener noreferrer"&gt;Manage Users&lt;/a&gt;. Note: the default user does not have SUPERUSER privileges. If your app requires creating databases, you’ll need to configure appropriate roles.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/expose.html" target="_blank" rel="noopener noreferrer"&gt;Expose the Cluster&lt;/a&gt;. So you can connect from external clients or apps.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We’ll do exactly that in &lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-3-deploying-a-load-generator-and-connecting-to-postgresql/"&gt;the next part&lt;/a&gt; — by deploying a demo application and connecting it to the database using GitOps.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>PostgreSQL</category>
      <category>Opensource</category>
      <category>GitOps</category>
      <category>ArgoCD</category>
      <media:thumbnail url="https://percona.community/blog/2025/07/gitops-part-2_hu_2d6aac6b62668af3.jpg"/>
      <media:content url="https://percona.community/blog/2025/07/gitops-part-2_hu_3560831b9d7392f7.jpg" medium="image"/>
    </item>
    <item>
      <title>GitOps Journey: Part 1 – Getting Started with ArgoCD and GitHub</title>
      <link>https://percona.community/blog/2025/07/22/gitops-journey-part-1-getting-started-with-argocd-and-github/</link>
      <guid>https://percona.community/blog/2025/07/22/gitops-journey-part-1-getting-started-with-argocd-and-github/</guid>
      <pubDate>Tue, 22 Jul 2025 00:00:10 UTC</pubDate>
      <description>Welcome to GitOps Journey — a hands-on guide to setting up infrastructure in Kubernetes using Git and automation.</description>
      <content:encoded>&lt;p&gt;Welcome to &lt;strong&gt;GitOps Journey&lt;/strong&gt; — a hands-on guide to setting up infrastructure in Kubernetes using Git and automation.&lt;/p&gt;
&lt;p&gt;GitOps has gained traction alongside Kubernetes, CI/CD, and declarative provisioning.&lt;br&gt;
You’ve probably seen it mentioned in blog posts, tech talks, or conference slides — but what does it actually look like in practice?&lt;/p&gt;
&lt;p&gt;We’ll start from scratch: prepare a cluster, deploy a PostgreSQL database, run a demo app, and set up observability — all managed via Git and GitHub using ArgoCD.&lt;/p&gt;
&lt;h2 id="what-well-build"&gt;What We’ll Build&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ArgoCD&lt;/strong&gt; — syncs manifests from a GitHub repository to your cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt; — a production-ready database using Percona Operator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Demo App&lt;/strong&gt; — a real Go-based web app connected to the database&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Coroot&lt;/strong&gt; — an open-source tool for monitoring performance, logs, and service behavior&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This series is for anyone new to GitOps or Kubernetes.&lt;br&gt;
Each part includes clear steps, real-world YAML, and examples you can run yourself.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;This is Part 1 of the GitOps Journey.&lt;/strong&gt;&lt;br&gt;
If you already have ArgoCD and a working Kubernetes cluster, you can skip ahead:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-2-deploying-postgresql-with-gitops-and-argocd/"&gt;Part 2 – Deploying PostgreSQL with Percona Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-3-deploying-a-load-generator-and-connecting-to-postgresql/"&gt;Part 3 – Connecting a Real App to the Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-4-observability-and-monitoring-with-coroot-in-kubernetes/"&gt;Part 4 – Observability with Coroot&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Copilot assisted with formatting, Markdown structure, and translation.&lt;br&gt;
All ideas, architecture decisions, and hands-on implementation were created by Daniil Bazhenov.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Otherwise, let’s start by preparing the cluster and setting up ArgoCD.&lt;/p&gt;
&lt;h2 id="creating-a-kubernetes-cluster"&gt;Creating a Kubernetes Cluster&lt;/h2&gt;
&lt;p&gt;I’ll be using Google Kubernetes Engine (GKE), but you can use AWS, DigitalOcean, or even run Minikube locally.&lt;/p&gt;
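If you take the local route instead, a Minikube cluster with enough headroom for the series can be started roughly like this (resource sizes and the Kubernetes version are illustrative; adjust them to your machine):

```shell
# Local alternative to GKE; values are illustrative, not requirements
minikube start --cpus=4 --memory=8192 --kubernetes-version=v1.30.0
```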
&lt;p&gt;You’ll also need these CLI tools installed on your machine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl" target="_blank" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; - The official CLI tool for Kubernetes — used to manage clusters, view resources, apply manifests, and more.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://helm.sh/docs/intro/install/" target="_blank" rel="noopener noreferrer"&gt;helm&lt;/a&gt; - A package manager for Kubernetes — lets you install complex apps using reusable charts (like PostgreSQL, monitoring tools, etc.)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I use the following command to create a cluster in GKE:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gcloud container clusters create dbazhenov-demo \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --project percona-product \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --zone us-central1-a \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --cluster-version 1.30 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --machine-type n1-standard-8 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --num-nodes=3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To delete the cluster:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gcloud container clusters delete dbazhenov-demo --zone us-central1-a&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: This command doesn’t remove your LoadBalancers, so I prefer deleting them manually in Google Cloud’s web console to ensure no resources are left running post-experiment.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Here’s the resulting setup:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗ kubectl get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-dbazhenov-demo-default-pool-b1b48316-8nrj Ready &lt;none&gt; 6m7s v1.30.12-gke.1279000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-dbazhenov-demo-default-pool-b1b48316-8v14 Ready &lt;none&gt; 6m6s v1.30.12-gke.1279000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-dbazhenov-demo-default-pool-b1b48316-zg6z Ready &lt;none&gt; 6m7s v1.30.12-gke.1279000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="installing-argocd"&gt;Installing ArgoCD&lt;/h2&gt;
&lt;p&gt;We’ll begin with ArgoCD, the GitOps engine that will deploy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A PostgreSQL database cluster&lt;/li&gt;
&lt;li&gt;A demo app to simulate real usage&lt;/li&gt;
&lt;li&gt;Coroot for monitoring and profiling workloads&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;ArgoCD supports multiple deployment methods — we’ll experiment with different ones during this series.&lt;/p&gt;
&lt;p&gt;Install ArgoCD (&lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" target="_blank" rel="noopener noreferrer"&gt;based on official docs&lt;/a&gt;):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create namespace:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl create namespace argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Deploy ArgoCD:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Check the pods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get pods -n argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected output: ArgoCD components running (server, repo, redis, controllers, etc.)&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗ kubectl get pods -n argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-application-controller-0 1/1 Running 0 57s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-applicationset-controller-6d569f7895-89kgk 1/1 Running 0 64s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-dex-server-5b44d67df9-p42z5 1/1 Running 0 62s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-notifications-controller-5865dfbc8-gqzwt 1/1 Running 0 61s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-redis-6bb7987874-99j59 1/1 Running 0 61s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-repo-server-df8b9fd78-64czj 1/1 Running 0 60s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-server-6d896f6785-82tf2 1/1 Running 0 59s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Access the ArgoCD UI&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You have at least two options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Port forwarding (local only)&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl port-forward svc/argocd-server -n argocd 8080:443&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Internet-accessible LoadBalancer&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
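&lt;p&gt;If you prefer to keep this change declarative (in the GitOps spirit), the same patch can live in a file — a minimal sketch; the filename here is arbitrary:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;# argocd-server-lb-patch.yaml (illustrative filename)
spec:
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Apply it with &lt;code&gt;kubectl patch svc argocd-server -n argocd --patch-file argocd-server-lb-patch.yaml&lt;/code&gt;.&lt;/p&gt;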
&lt;p&gt;I’ll use the LoadBalancer option by running the command above; allow a few minutes for the external IP address to be assigned.&lt;/p&gt;
&lt;p&gt;Get the IP address of the ArgoCD service from the EXTERNAL-IP column:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get svc argocd-server -n argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗ kubectl get svc argocd-server -n argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd-server LoadBalancer 34.118.234.162 34.132.39.194 80:30549/TCP,443:32146/TCP 9m51s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Access the UI in your browser using the IP.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-login_hu_2b8e3a18d84ec798.png 480w, https://percona.community/blog/2025/07/gitops-argocd-login_hu_8c25a58ba7500700.png 768w, https://percona.community/blog/2025/07/gitops-argocd-login_hu_e3a7c94ed685b7da.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-login.png" alt="GitOps - ArgoCD UI" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="4"&gt;
&lt;li&gt;Getting Started with ArgoCD Login&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Install the ArgoCD CLI (&lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/#2-download-argo-cd-cli" target="_blank" rel="noopener noreferrer"&gt;see instructions&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Get the initial password:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd admin initial-password -n argocd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;ArgoCD recommends changing it to a new secure password, which we will do.&lt;/p&gt;
&lt;p&gt;Log in via CLI:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd login 34.132.39.194 --insecure&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Log in with the user admin and the initial password, then run the password update command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;argocd account update-password&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here are all the steps to retrieve and update the password in one session:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗ argocd admin initial-password -n argocd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;0mxV6IVcF3qZDR-O
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This password must be only used for first time login. We strongly recommend you update the password using `argocd account update-password`.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗ argocd login 34.132.39.194 --insecure
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Username: admin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Password:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'admin:login' logged in successfully
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Context '34.132.39.194' updated
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗ argocd account update-password
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*** Enter password of currently logged in user (admin):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*** Enter new password for user admin:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*** Confirm new password for user admin:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Password updated
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Context '34.132.39.194' updated
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:(blog_argocd_pg) ✗&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="5"&gt;
&lt;li&gt;Now log into the ArgoCD web UI using admin and your new password.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-argocd-dashboard_hu_23c44b0339a87aa8.png 480w, https://percona.community/blog/2025/07/gitops-argocd-dashboard_hu_654e20efe0754008.png 768w, https://percona.community/blog/2025/07/gitops-argocd-dashboard_hu_f0fa19e2c0e740a7.png 1400w"
src="https://percona.community/blog/2025/07/gitops-argocd-dashboard.png" alt="GitOps: ArgoCD web UI" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Welcome to the ArgoCD interface. We don’t have any applications yet; we’ll install them later in the series.&lt;/p&gt;
&lt;h2 id="setting-up-github-repo"&gt;Setting Up GitHub Repo&lt;/h2&gt;
&lt;p&gt;We’ll need a GitHub repo to store infrastructure manifests. ArgoCD will sync from this repo and apply changes.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install git and create a GitHub account&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add your SSH key to your GitHub profile. &lt;a href="https://github.com/settings/keys" target="_blank" rel="noopener noreferrer"&gt;GitHub SSH settings&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new GitHub repository&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I recommend a public repo for this educational project — no secrets will be committed, and it simplifies ArgoCD setup. Plus, it earns you some green squares on GitHub. If you go with a private repo, make sure it’s properly linked in ArgoCD.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-github-new-repo_hu_c1092ea410e1d482.png 480w, https://percona.community/blog/2025/07/gitops-github-new-repo_hu_2d70c7b49bae26ee.png 768w, https://percona.community/blog/2025/07/gitops-github-new-repo_hu_cb106e4582611ded.png 1400w"
src="https://percona.community/blog/2025/07/gitops-github-new-repo.png" alt="GitOps: GitHub Repo Creation" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="4"&gt;
&lt;li&gt;Clone the repo:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/07/gitops-github-clone_hu_82856ab44db43d16.png 480w, https://percona.community/blog/2025/07/gitops-github-clone_hu_7aa24a16af6a42c2.png 768w, https://percona.community/blog/2025/07/gitops-github-clone_hu_3e1e4f2077b9bd3c.png 1400w"
src="https://percona.community/blog/2025/07/gitops-github-clone.png" alt="GitOps: GitHub Clone" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Clone the repository using its SSH URL.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git clone git@github.com:dbazhenov/percona-argocd-pg-coroot.git&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Navigate to the project directory:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd percona-argocd-pg-coroot&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ gitops git clone git@github.com:dbazhenov/percona-argocd-pg-coroot.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Cloning into 'percona-argocd-pg-coroot'...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;remote: Enumerating objects: 3, done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;remote: Counting objects: 100% (3/3), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;remote: Compressing objects: 100% (2/2), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Receiving objects: 100% (3/3), done.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ gitops cd percona-argocd-pg-coroot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main) ls
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;README.md
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-argocd-pg-coroot git:(main)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
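&lt;p&gt;To preview where this repo is heading: ArgoCD watches it through an &lt;code&gt;Application&lt;/code&gt; resource. A rough sketch of what one looks like — the &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;path&lt;/code&gt;, and destination namespace below are placeholders, and Part 2 walks through the real manifests:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/dbazhenov/percona-argocd-pg-coroot.git
    targetRevision: main
    path: my-app          # placeholder path inside the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app     # placeholder target namespace
  syncPolicy:
    automated:
      prune: true         # delete resources removed from Git
      selfHeal: true      # revert manual changes in the cluster
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;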
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We’ve prepared everything to launch our GitOps-powered infrastructure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ArgoCD deployed&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GitHub repo ready&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the next posts, we’ll deploy &lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-2-deploying-postgresql-with-gitops-and-argocd/"&gt;the PostgreSQL cluster&lt;/a&gt;, &lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-3-deploying-a-load-generator-and-connecting-to-postgresql/"&gt;the demo app&lt;/a&gt;, and add &lt;a href="https://percona.community/blog/2025/07/22/gitops-journey-part-4-observability-and-monitoring-with-coroot-in-kubernetes/"&gt;Coroot monitoring&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Stay tuned!&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>PostgreSQL</category>
      <category>Opensource</category>
      <category>GitOps</category>
      <category>ArgoCD</category>
      <media:thumbnail url="https://percona.community/blog/2025/07/gitops-part-1_hu_1404141b76c22066.jpg"/>
      <media:content url="https://percona.community/blog/2025/07/gitops-part-1_hu_4e4052d3f7870720.jpg" medium="image"/>
    </item>
    <item>
      <title>Using replicaSetHorizons in MongoDB</title>
      <link>https://percona.community/blog/2025/07/22/using-replicasethorizons-in-mongodb/</link>
      <guid>https://percona.community/blog/2025/07/22/using-replicasethorizons-in-mongodb/</guid>
      <pubDate>Tue, 22 Jul 2025 00:00:00 UTC</pubDate>
      <description>When running MongoDB replica sets in containerized environments like Docker or Kubernetes, making nodes reachable from inside the cluster as well as from external clients can be a challenge. To solve this problem, this post is going to explain the horizons feature of Percona Server for MongoDB.</description>
      <content:encoded>&lt;p&gt;When running MongoDB replica sets in containerized environments like Docker or Kubernetes, making nodes reachable from inside the cluster as well as from external clients can be a challenge. To solve this problem, this post is going to explain the horizons feature of &lt;a href="https://docs.percona.com/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/07/ivan_cover.png" alt="Using_replicaSetHorizons_in_MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Let’s start by looking at what happens behind the scenes when you connect with a replica set URI.&lt;/p&gt;
&lt;h2 id="node-auto-discovery"&gt;Node auto-discovery&lt;/h2&gt;
&lt;p&gt;After connecting with a replica set URI, the driver discovers the list of actual members by running the db.hello() command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongosh "mongodb://mongo1-internal:27017/?replicaSet=rs0"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs0 [direct: primary] test&gt; db.hello()
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; topologyVersion: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; processId: ObjectId('6877b5e18a13d54b752ff25c'),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; counter: Long('6')
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; hosts: [ 'mongo1-internal:27017', 'mongo2-internal:27017', 'mongo3-internal:27017' ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The list of hosts returned contains the name of each member exactly as you provided it to the rs.initiate() command.&lt;/p&gt;
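&lt;p&gt;For reference, these member names are whatever was passed at initialization time — a minimal sketch using the internal hostnames from the example above:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1-internal:27017" },
    { _id: 1, host: "mongo2-internal:27017" },
    { _id: 2, host: "mongo3-internal:27017" }
  ]
})
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;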
&lt;h2 id="the-node-identity-crisis"&gt;The node identity crisis&lt;/h2&gt;
&lt;p&gt;The names are resolvable inside the same network, so all is well in this case. But what happens when connecting from outside?&lt;/p&gt;
&lt;p&gt;Typically you would be using names like mongo1-external.mydomain.com that correctly point to the external IP addresses of the members. The problem is that after the initial connection is made, the driver will perform auto-discovery and try to connect to the names as reported by db.hello(). These are not resolvable from outside.&lt;/p&gt;
&lt;p&gt;What if we connect by IP address directly? Again, the driver will get the names from the list above, try to reach them, and fail after the initial connection is made:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mongosh mongodb://user:pass@10.30.50.155:32768/?replicaSet=rs0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Current Mongosh Log ID: 6849eb15ba228be45a69e327
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Connecting to: mongodb://&lt;credentials&gt;@10.30.50.155:32768/?replicaSet=rs0&amp;appName=mongosh+2.5.2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MongoNetworkError: getaddrinfo ENOTFOUND mongo1-internal&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Even though mongo1-internal is not part of the connection string, the driver tries to reach it. So if the replica set members advertise their internal IPs or DNS names, clients outside can’t connect unless they can resolve that same name. We could work around that, but there’s another issue: the ports.&lt;/p&gt;
&lt;h2 id="the-port-issue"&gt;The port issue&lt;/h2&gt;
&lt;p&gt;In the containerized world, your containers likely use the default port 27017 internally. However, they might be mapped to different external ports to avoid port collisions (think of the case where containers are co-located on the same host).&lt;/p&gt;
&lt;p&gt;We need a way for replica set members to identify themselves with different names and ports depending on whether the client is on the same network or outside, a concept similar to split-horizon (also known as split-brain) DNS.&lt;/p&gt;
&lt;h2 id="what-is-horizons"&gt;What is Horizons?&lt;/h2&gt;
&lt;p&gt;Horizons is a MongoDB feature that allows replica set members to advertise different identities depending on the client’s access context, such as internal versus external networks.&lt;/p&gt;
&lt;p&gt;With this, you can make the same MongoDB replica set usable from:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Internal container network (using internal hostnames/IPs)&lt;/li&gt;
&lt;li&gt;External applications (using public IPs or DNS names)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;MongoDB’s horizons rely on Server Name Indication (SNI) during the TLS handshake to determine which hostnames and ports to advertise. At connection time, the client presents the hostname it used, and MongoDB uses that to return the matching set of endpoints. For that reason, TLS is required for horizons to work.&lt;/p&gt;
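&lt;p&gt;Conceptually, the selection works like a lookup keyed by the SNI name the client presented. The following Python sketch illustrates that idea only; it is not actual mongod logic, and the hostnames mirror the Docker example later in this post:&lt;/p&gt;

```python
# Illustrative sketch of horizon selection (simplified, not mongod code).
# Each member has a default (internal) identity plus named horizons.
REPLICA_SET = [
    {"host": "mongo1:27017", "horizons": {"external": "localhost:27017"}},
    {"host": "mongo2:27017", "horizons": {"external": "localhost:27018"}},
    {"host": "mongo3:27017", "horizons": {"external": "localhost:27019"}},
]

def hello_hosts(sni_name):
    """Return the host list a member would advertise to a client
    that presented `sni_name` during the TLS handshake."""
    for member in REPLICA_SET:
        for horizon, address in member["horizons"].items():
            if address.split(":")[0] == sni_name:
                # The client came in via this horizon: advertise every
                # member using that same horizon's addresses.
                return [m["horizons"][horizon] for m in REPLICA_SET]
    # No horizon matched: advertise the default (internal) identities.
    return [m["host"] for m in REPLICA_SET]

print(hello_hosts("localhost"))  # external view of the replica set
print(hello_hosts("mongo1"))     # internal view of the replica set
```

&lt;p&gt;The same replica set thus presents two consistent but different topologies, one per network context.&lt;/p&gt;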
&lt;p&gt;Let’s walk through an example.&lt;/p&gt;
&lt;h2 id="example-scenario-mongodb-replica-set-in-docker"&gt;Example Scenario: MongoDB Replica Set in Docker&lt;/h2&gt;
&lt;p&gt;You can run the following steps on your local machine to test the feature.&lt;/p&gt;
&lt;h3 id="get-your-certificates-ready"&gt;Get your certificates ready&lt;/h3&gt;
&lt;p&gt;Let’s start by creating the required CA and certificates using &lt;a href="https://github.com/cloudflare/cfssl" target="_blank" rel="noopener noreferrer"&gt;Cloudflare’s PKI and TLS toolkit&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="step-1-create-ca-csrjson"&gt;Step 1: Create ca-csr.json&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir certs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd certs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tee ca-csr.json &lt;&lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "CN": "MyTestCA",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "key": { "algo": "rsa", "size": 2048 },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "names": [{ "C": "US", "ST": "CA", "L": "SF", "O": "Acme", "OU": "MongoDB CA" }]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EOF&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Generate the CA:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This creates:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ca.pem — CA certificate&lt;/li&gt;
&lt;li&gt;ca-key.pem — CA private key&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="step-2-create-server-csrjson-for-each-server-specifying-both-internal-and-external-names-in-the-hosts-section-so-that-our-certificate-is-valid-for-everything"&gt;Step 2: Create server-csr.json for each server, specifying both internal and external names in the “hosts” section so that our certificate is valid for everything.&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for i in 1 2 3; do
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name="mongo$i"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tee "${name}-csr.json" &lt;&lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "CN": "${name}",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "hosts": ["${name}", "${name}.internal", "localhost", "127.0.0.1"],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "key": { "algo": "rsa", "size": 2048 },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "names": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; { "O": "MongoDB", "OU": "Database", "L": "Internal", "ST": "DC", "C": "US" }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;done&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="step-3-generate-certificates-using-cfssl"&gt;Step 3: Generate certificates using CFSSL&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for i in 1 2 3; do
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name="mongo$i"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; cfssl gencert \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -ca=ca.pem -ca-key=ca-key.pem \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -config=&lt;(cat &lt;&lt;'JSON'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "signing": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "default": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "expiry": "8760h",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "usages": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "signing",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "key encipherment",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "server auth",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "client auth"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;JSON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) "${name}-csr.json" | cfssljson -bare "${name}"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat "${name}.pem" "${name}-key.pem" &gt; "${name}-combined.pem"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;done
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd ..&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Resulting files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;mongo{1,2,3}.pem — server certificate&lt;/li&gt;
&lt;li&gt;mongo{1,2,3}-key.pem — server private key&lt;/li&gt;
&lt;li&gt;mongo{1,2,3}-combined.pem — certificate and key in a single file, as expected by mongod&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="docker-compose-setup"&gt;Docker Compose Setup&lt;/h3&gt;
&lt;p&gt;Create a file with the Docker Compose configuration:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tee test-horizons.yml &lt;&lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;name: horizons
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;services:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongo1:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; container_name: mongo1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: percona/percona-server-mongodb:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./certs:/certs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - "27017:27017"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; command: &gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongod --replSet rs0 --bind_ip_all
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsMode requireTLS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsCertificateKeyFile /certs/mongo1-combined.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsCAFile /certs/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongo2:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; container_name: mongo2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: percona/percona-server-mongodb:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./certs:/certs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - "27018:27017"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; command: &gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongod --replSet rs0 --bind_ip_all
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsMode requireTLS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsCertificateKeyFile /certs/mongo2-combined.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsCAFile /certs/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongo3:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; container_name: mongo3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: percona/percona-server-mongodb:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./certs:/certs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - "27019:27017"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; command: &gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongod --replSet rs0 --bind_ip_all
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsMode requireTLS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsCertificateKeyFile /certs/mongo3-combined.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tlsCAFile /certs/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;networks:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; default:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; driver: bridge
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EOF&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here we are mapping our containers to external ports 27017, 27018, and 27019.&lt;/p&gt;
&lt;p&gt;Now, start the services:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker-compose -f test-horizons.yml up -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="initiate-the-replica-set-with-horizons"&gt;Initiate the Replica Set with Horizons&lt;/h3&gt;
&lt;p&gt;Now let’s initiate the replica set with different host names and ports for external access.&lt;/p&gt;
&lt;p&gt;Launch a shell into one of the containers:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker exec -it mongo1 /bin/bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Connect with mongosh and initiate the replica set with this config:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mongosh --tls --tlsCertificateKeyFile /certs/mongo1-combined.pem --tlsAllowInvalidCertificates
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs.initiate({
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; _id: "rs0",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; members: [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; _id: 0,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; host: "mongo1:27017",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; horizons: { external: "localhost:27017" }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; _id: 1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; host: "mongo2:27017",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; horizons: { external: "localhost:27018" }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; _id: 2,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; host: "mongo3:27017",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; horizons: { external: "localhost:27019" }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;})&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note: The “horizons” field here maps the external context to a different address than the internal one. Since we are going to test connecting from the local machine directly to the containers, set the horizons to localhost and the mapped ports.&lt;/p&gt;
&lt;h3 id="connect-from-inside-docker"&gt;Connect from Inside Docker&lt;/h3&gt;
&lt;p&gt;Spin up a new containerized client, or use one of the existing MongoDB containers:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker exec -it mongo1 mongosh --host rs0/mongo1:27017,mongo2:27017,mongo3:27017 --tls --tlsCertificateKeyFile /certs/mongo1-combined.pem --tlsCAFile /certs/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Current Mongosh Log ID: 6877deab6568339f46dfd9c4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Connecting to: mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&amp;tls=true&amp;tlsCertificateKeyFile=%2Fcerts%2Fmongo1-combined.pem&amp;tlsCAFile=%2Fcerts%2Fca.pem&amp;appName=mongosh+2.5.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Using MongoDB: 8.0.8-3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Using Mongosh: 2.5.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs0 [primary] test&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It connects using internal Docker hostnames.&lt;/p&gt;
&lt;h3 id="connect-from-outside-docker"&gt;Connect from Outside Docker&lt;/h3&gt;
&lt;p&gt;From your local machine:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mongosh "mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0" --tls --tlsCertificateKeyFile /certs/mongo1-combined.pem --tlsCAFile /certs/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Current Mongosh Log ID: 6877defabc3f9a2d054a1296
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Connecting to: mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&amp;serverSelectionTimeoutMS=2000&amp;tls=true&amp;tlsCertificateKeyFile=certs%2Fmongo1-combined.pem&amp;tlsCAFile=certs%2Fca.pem&amp;appName=mongosh+2.3.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Using MongoDB: 8.0.8-3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Using Mongosh: 2.3.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs0 [primary] test&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="check-the-identities-returned"&gt;Check the identities returned&lt;/h3&gt;
&lt;p&gt;As we have seen, clients receive the appropriate set of member addresses and connect successfully in both cases. You can verify the hostnames and ports advertised to the external connection:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs0 [primary] test&gt; db.hello()
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; topologyVersion: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; processId: ObjectId('6877de4c632adf89fb590f38'),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; counter: Long('6')
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; hosts: [ 'localhost:27017', 'localhost:27018', 'localhost:27019' ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; setName: 'rs0',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; setVersion: 1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; isWritablePrimary: true,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; secondary: false,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; primary: 'localhost:27017',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; me: 'localhost:27017',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Versus the internal case:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs0 [primary] test&gt; db.hello()
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; topologyVersion: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; processId: ObjectId('6877de4c632adf89fb590f38'),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; counter: Long('6')
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; hosts: [ 'mongo1:27017', 'mongo2:27017', 'mongo3:27017' ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; setName: 'rs0',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; setVersion: 1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; isWritablePrimary: true,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; secondary: false,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; primary: 'mongo1:27017',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; me: 'mongo1:27017',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The horizons feature in MongoDB is a powerful tool to bridge the gap between internal and external connectivity, especially in containerized or multi-network deployments.&lt;/p&gt;
&lt;p&gt;Horizons also has the following limitations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using horizons is only possible with TLS connections&lt;/li&gt;
&lt;li&gt;Duplicating domain names in horizons is not allowed by MongoDB&lt;/li&gt;
&lt;li&gt;Using IP addresses in horizons definitions is not allowed by MongoDB&lt;/li&gt;
&lt;li&gt;Horizons should be set for all members of a replica set, or not set at all&lt;/li&gt;
&lt;/ul&gt;
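&lt;p&gt;If you generate replica set configurations programmatically, some of these restrictions can be checked up front. The following Python sketch is an illustrative pre-flight check only, not mongod’s actual validation, and it simplifies the duplicate rule to full host:port entries:&lt;/p&gt;

```python
# Illustrative pre-flight check for a horizons configuration, based on the
# restrictions listed above (simplified; mongod performs its own validation).
import ipaddress

def validate_horizons(members):
    errors = []
    # Horizons must be set on all members of the replica set, or on none.
    with_horizons = [m for m in members if m.get("horizons")]
    if with_horizons and len(with_horizons) != len(members):
        errors.append("horizons must be set on all members or none")
    seen = set()
    for member in with_horizons:
        for address in member["horizons"].values():
            host = address.rsplit(":", 1)[0]
            # IP addresses are not allowed in horizon definitions.
            try:
                ipaddress.ip_address(host)
                errors.append(f"IP address not allowed in horizon: {address}")
            except ValueError:
                pass  # a DNS name, which is what horizons require
            # Simplified duplicate check on the full host:port entry.
            if address in seen:
                errors.append(f"duplicate horizon address: {address}")
            seen.add(address)
    return errors

# The configuration used in this post passes the check:
print(validate_horizons([
    {"host": "mongo1:27017", "horizons": {"external": "localhost:27017"}},
    {"host": "mongo2:27017", "horizons": {"external": "localhost:27018"}},
    {"host": "mongo3:27017", "horizons": {"external": "localhost:27019"}},
]))  # []
```

&lt;p&gt;The TLS requirement cannot be checked offline, of course; mongod rejects horizon connections without TLS at runtime.&lt;/p&gt;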
&lt;p&gt;This feature is not listed in the official MongoDB documentation for some reason; however, it is available in both Percona Server for MongoDB and MongoDB Community Edition. Also, Kubernetes users rejoice! &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/expose.html?h=split#exposing-replica-set-with-split-horizon-dns" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB supports horizons&lt;/a&gt; since version 1.16.&lt;/p&gt;</content:encoded>
      <author>Ivan Groenewold</author>
      <category>MongoDB</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/07/ivan_cover_hu_252d99e16859146e.jpg"/>
      <media:content url="https://percona.community/blog/2025/07/ivan_cover_hu_51c601d52e2aec6.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Bug Report: June 2025</title>
      <link>https://percona.community/blog/2025/06/30/percona-bug-report-june-2025/</link>
      <guid>https://percona.community/blog/2025/06/30/percona-bug-report-june-2025/</guid>
      <pubDate>Mon, 30 Jun 2025 00:00:00 UTC</pubDate>
      <description>At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.</description>
      <content:encoded>&lt;p&gt;At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.&lt;/p&gt;
&lt;p&gt;We constantly update our &lt;a href="https://jira.percona.com/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; and monitor &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other boards&lt;/a&gt; to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This post is a central place to get information on the most noteworthy open and recently resolved bugs.&lt;/p&gt;
&lt;p&gt;In this edition of our bug report, we have the following list of bugs.&lt;/p&gt;
&lt;h3 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9823" target="_blank" rel="noopener noreferrer"&gt;PS-9823&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; &lt;a href="https://dev.mysql.com/doc/refman/8.4/en/mysql-migrate-keyring.html" target="_blank" rel="noopener noreferrer"&gt;mysql_migrate_keyring&lt;/a&gt; fails with PS Components.&lt;/p&gt;
&lt;p&gt;The failure is triggered by a missing symbol, but the underlying cause is the way keyring components are built in Percona Server. When attempting to migrate keyring data (e.g., from Vault to File), the tool fails to load the Percona Server component .so files, making the migration process unusable.&lt;/p&gt;
&lt;p&gt;Percona Server builds a reference to the my_free symbol, which is not properly resolved in the shared libraries. In contrast, upstream MySQL builds do not include this dependency.&lt;/p&gt;
&lt;p&gt;This issue blocks both component-to-component and component-to-plugin keyring migrations, affecting users who rely on secure key management transitions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.x, 8.4.x&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; Under investigation. A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9836" target="_blank" rel="noopener noreferrer"&gt;PS-9836&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; There is a regression issue with &lt;a href="https://docs.percona.com/percona-server/8.0/audit-log-filter-overview.html" target="_blank" rel="noopener noreferrer"&gt;audit_log_filter.so&lt;/a&gt; compared to &lt;a href="https://docs.percona.com/percona-server/8.0/audit-log-plugin.html" target="_blank" rel="noopener noreferrer"&gt;audit_log.so&lt;/a&gt;. The audit_log_filter, whether used as a plugin (8.0) or a component (8.0 and 8.4), shows a significant performance regression. When logging everything, QPS drops by over 70%. While configuring selective logging can reduce the impact, it still results in a 30–35% drop in QPS.&lt;/p&gt;
&lt;p&gt;For this reason, moving to audit_log_filter in 8.0 is not recommended. Additionally, this should be taken into account when planning upgrades to 8.4, as audit logging can significantly impact performance. (audit_log is not available as a component—only as a plugin.)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.42-33, 8.4.5-5&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; Under investigation. A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9837" target="_blank" rel="noopener noreferrer"&gt;PS-9837&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; A crash occurs on replica nodes during parallel replication when an INSERT is executed on a secondary index that recently had a DELETE on the same key. The issue is caused by a race condition in the secondary index reuse logic, leading to an assertion failure (row0ins.cc:268).&lt;/p&gt;
&lt;p&gt;This issue is more likely to occur under &lt;strong&gt;heavy write workloads&lt;/strong&gt;, particularly when the application frequently performs &lt;strong&gt;DELETE followed by INSERT on the same keys&lt;/strong&gt;. It only affects &lt;strong&gt;replica servers&lt;/strong&gt; where replica_parallel_workers &gt; 0 and slave_preserve_commit_order=ON.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.36-28, 8.0.42-33&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=118334" target="_blank" rel="noopener noreferrer"&gt;118334&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; The user can modify their logic to use UPDATE instead of DELETE followed by INSERT, which avoids triggering the crash path.&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; Under investigation. A fix or workaround is expected in a future release.&lt;/p&gt;
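&lt;p&gt;The workaround above can be sketched as follows (the table and column names are hypothetical; the point is replacing the DELETE-then-INSERT pair on the same key with a single UPDATE):&lt;/p&gt;

```sql
-- Pattern that can trigger the replica crash: DELETE then INSERT on the
-- same secondary-index key, replayed by parallel replication workers.
DELETE FROM orders WHERE order_ref = 'A-100';
INSERT INTO orders (order_ref, qty) VALUES ('A-100', 5);

-- Workaround: rewrite as a single UPDATE, so the row is modified in
-- place and the secondary-index reuse path is never exercised.
UPDATE orders SET qty = 5 WHERE order_ref = 'A-100';
```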
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9861" target="_blank" rel="noopener noreferrer"&gt;PS-9861&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; The audit_log_filter plugin cannot be installed when component_keyring_kmip is enabled with Fortanix DSM. While testing with component_keyring_kmip, we enabled the &lt;strong&gt;“Allow secrets with unknown operations”&lt;/strong&gt; option in Fortanix, which allowed the audit log installation to proceed one step further. At this point, a secret is successfully created for the audit log, but &lt;strong&gt;MySQL crashes upon restart&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This issue is related to &lt;strong&gt;bug&lt;/strong&gt; &lt;a href="https://perconadev.atlassian.net/browse/PS-9609" target="_blank" rel="noopener noreferrer"&gt;PS-9609&lt;/a&gt; and still persists when using &lt;strong&gt;Fortanix DSM&lt;/strong&gt; as the KMIP server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.42-33&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; The issue has been fixed, and the fix is expected in the upcoming release of Percona Server (PS).&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9914" target="_blank" rel="noopener noreferrer"&gt;PS-9914&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; After running ALTER TABLE … ENGINE=InnoDB to rebuild a large table (~10 million rows) with ROW_FORMAT=COMPRESSED, it was observed approximately a &lt;strong&gt;50% drop in write-only workload throughput&lt;/strong&gt; (measured via sysbench), despite a reduction in .ibd file size and no changes to table structure or indexes. The table had previously undergone heavy deletions (~50%), suggesting possible fragmentation prior to the rebuild.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.37-29, 8.0.42-33&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=118411" target="_blank" rel="noopener noreferrer"&gt;118411&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; Under investigation. A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9956" target="_blank" rel="noopener noreferrer"&gt;PS-9956&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; PS 8.4.4-4 with group replication crashes on Oracle Linux 9 during bootstrap or failover when the audit log filter component is enabled, but does not crash on Oracle Linux 8.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.4.4-4&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; Under investigation.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4652" target="_blank" rel="noopener noreferrer"&gt;PXC-4652&lt;/a&gt;: PXC 8.4 crashes with a SIGSEGV in unordered_map called from rpl_gtid_owned during high activity, while PXC 8.0 under the same workload and data remains stable; the crash occurs randomly during operations like COMMIT or INSERT.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.4.3, 8.4.4&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 8.4.5 – Pending Release&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4684" target="_blank" rel="noopener noreferrer"&gt;PXC-4684&lt;/a&gt;: An UPDATE query that joins two tables but modifies only one—e.g., UPDATE test.t2 JOIN test.t1 USING (i) SET t2.d = t2.d+1, t1.d = t1.d;—causes an MDL BF-BF conflict on other PXC nodes, even without triggers, as both tables are included in the Table_map_log_event.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.41&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 8.0.42 – Released | 8.4.5 – Pending Release&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="percona-toolkit"&gt;Percona Toolkit&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2418" target="_blank" rel="noopener noreferrer"&gt;PT-2418&lt;/a&gt;: In &lt;strong&gt;pt-online-schema-change 3.7.0&lt;/strong&gt;, data was lost when executing the following SQL — the value of column col_2 was unexpectedly set to NULL:&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE t RENAME COLUMN col_1 TO col_2;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL version: 8.0+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-online-schema-change --no-version-check \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; h=127.0.0.1,u=root,p=xxx,P=xxx,D=sysbench,t=sbtest1 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --alter="RENAME COLUMN col_1 TO col_2" \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --execute --statistics&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
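&lt;p&gt;Until the fix lands, a native online DDL statement sidesteps the tool entirely for this case: in MySQL 8.0+, RENAME COLUMN is a metadata-only change, so no row copy is needed (a sketch using the table from the report; verify suitability for your workload first):&lt;/p&gt;

```sql
-- ALGORITHM=INSTANT makes the rename metadata-only, avoiding the
-- pt-osc trigger-and-copy path that caused the NULL values.
ALTER TABLE t RENAME COLUMN col_1 TO col_2, ALGORITHM=INSTANT;
```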
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2419" target="_blank" rel="noopener noreferrer"&gt;PT-2419&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; pt-duplicate-key-checker Ignores DESC in Index Definitions. Users running pt-duplicate-key-checker regularly observed that a newly added composite index was being incorrectly flagged as a duplicate and removed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Before:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `idx_ts` (`ts`),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `idx_ts_id` (`ts` DESC, `id`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;After:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `idx_ts_id` (`ts`)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The tool appears to ignore the &lt;strong&gt;DESC direction&lt;/strong&gt; in index definitions, leading to incorrect de-duplication. This behaviour may affect query plans and performance in setups relying on sort order.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2425" target="_blank" rel="noopener noreferrer"&gt;PT-2425&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; Case-Sensitive MariaDB Detection Causes Sync Failure in pt-table-sync. In pt-table-sync 3.7.0, a case-sensitive check for the MariaDB flavor ($vp-&gt;flavor() =~ m/maria/) fails because flavor() returns “MariaDB Server”, causing the condition to evaluate incorrectly. As a result, the tool looks for source_host and source_port in $source, while the actual keys are master_host and master_port, leading to failures or uninitialized value warnings.&lt;/p&gt;
&lt;p&gt;The Error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Use of uninitialized value in concatenation (.) or string at /usr/bin/pt-table-sync line 7086.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Manually updating the regex to m/maria/i resolves the issue. Similar case-sensitive checks appear elsewhere in the script and may require centralizing the MariaDB detection logic.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 3.7.1 - Not Yet Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2197" target="_blank" rel="noopener noreferrer"&gt;PT-2197&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; In pt-online-schema-change (version 3.7.0), attempting to run an ALTER operation results in the following error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Use of uninitialized value in string eq at /usr/bin/pt-online-schema-change line 4321&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This occurs even when replica connectivity in both directions is fully functional and multiple replicas are connected. Notably, the issue does &lt;strong&gt;not occur in version 3.5.1&lt;/strong&gt;, where the operation succeeds as expected (with the expected increase in connections). Schema change automation breaks unexpectedly on newer versions despite a valid replication setup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.2, 3.6.0, 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2432" target="_blank" rel="noopener noreferrer"&gt;PT-2432&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; While pt-replica-find includes internal logic for handling replication channels, it currently lacks a corresponding --channel command-line option. Attempting to use it results in an error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pt-replica-find --channel=foo
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Unknown option: channel&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This prevents users from specifying a replication channel directly, limiting the tool’s usability in multi-channel replication environments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2448" target="_blank" rel="noopener noreferrer"&gt;PT-2448&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; pt-k8s-debug-collector should not collect secret details of pgbouncer&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2446" target="_blank" rel="noopener noreferrer"&gt;PT-2446&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; When attempting to run &lt;strong&gt;pt-table-checksum&lt;/strong&gt; with Group Replication enabled, and the tool returns the following error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error checksumming table schema.table: DBD::mysql::st execute failed: The table does not comply with the requirements by an external plugin. [for Statement "DELETE FROM percona.checksums WHERE db = ? AND tbl = ?" with ParamValues: 0=' ', 1=' '] at /bin/pt-table-checksum line 11323.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This is suspected to be caused by the tool attempting to set @@binlog_format := ‘STATEMENT’, which is &lt;strong&gt;not supported under Group Replication&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.7.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="pmm-percona-monitoring-and-management"&gt;PMM [Percona Monitoring and Management]&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13994" target="_blank" rel="noopener noreferrer"&gt;PMM-13994&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; pmm_agent shows disconnected status despite active metrics collection, After a temporary connectivity issue, pmm_agent continues to display a Disconnected status in pmm-admin list, even though connectivity has been restored and dashboards are populating correctly.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ pmm-admin list
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm_agent Disconnected
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.43.2, 2.44.1, 3.1.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Restarting the pmm_agent should fix the issue.&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 3.4.0 - Not Yet Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13905" target="_blank" rel="noopener noreferrer"&gt;PMM-13905&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; When adding both a MongoDB Cluster and a standalone MongoDB Replica Set (not part of the cluster) to the same PMM environment (e.g., “test”), the &lt;strong&gt;MongoDB ReplSet Summary dashboard&lt;/strong&gt; does not allow viewing the standalone RS.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;“cluster” filter cannot be unselected&lt;/strong&gt;, making it impossible to visualize replica sets that are not associated with a defined cluster. As a result, only RSs from the cluster are visible, while standalone RSs are excluded from the dashboard view.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.1.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Whenever possible, use &lt;strong&gt;separate environments&lt;/strong&gt; when adding the cluster and standalone RS nodes in PMM (e.g., use “env1” for the cluster and “env2” for the standalone RS).&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13910" target="_blank" rel="noopener noreferrer"&gt;PMM-13910&lt;/a&gt;: In the &lt;strong&gt;MongoDB Sharded Cluster Summary&lt;/strong&gt; and &lt;strong&gt;Collections&lt;/strong&gt; dashboards, several graphs fail to populate correctly. Specifically:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Top Hottest Collections by Read&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Top Hottest Collections by Write&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These graphs display only admin, config, and system collections, even when other collections are under heavy traffic. Additionally, the &lt;strong&gt;Collections&lt;/strong&gt; dashboard shows no data across all graphs—&lt;strong&gt;except for the first one&lt;/strong&gt; (Top 5 Databases By Size), which populates as expected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.1.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13950" target="_blank" rel="noopener noreferrer"&gt;PMM-13950&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; In both &lt;strong&gt;PMM 2&lt;/strong&gt; and &lt;strong&gt;PMM 3&lt;/strong&gt;, with &lt;strong&gt;MySQL 5.7&lt;/strong&gt; and &lt;strong&gt;MySQL 8.0&lt;/strong&gt;, the server_uuid is not being collected from MySQL’s global variables as expected. Despite being available via SHOW GLOBAL VARIABLES LIKE ‘server_uuid’;, the PMM agent fails to parse or capture this value.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.1.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13792" target="_blank" rel="noopener noreferrer"&gt;PMM-13792&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; In PMM 2.44.0, the Advisor Insights incorrectly reports that &lt;em&gt;journaling is not enabled&lt;/em&gt; for MongoDB 7.0.9-15, despite journaling being enabled by default in this version.&lt;/p&gt;
&lt;p&gt;Attempts to explicitly enable journaling in the MongoDB config result in a startup warning:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The storage.journal.enabled option and the corresponding --journal and --nojournal command-line options have no effect in this version... Journaling is always enabled. Please remove those options from the config.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This false alert may confuse users and lead to misconfiguration attempts that prevent MongoDB from starting.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.44, 3.1.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-772" target="_blank" rel="noopener noreferrer"&gt;K8SPG-772&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; In the Percona PostgreSQL Operator, a runtime panic occurs when CompletedAt is nil and not properly checked before dereferencing, leading to a segmentation fault:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;panic: runtime error: invalid memory address or nil pointer dereference
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[signal SIGSEGV: segmentation violation code=0x1 addr=0x0]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Stack trace:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;github.com/percona/percona-postgresql-operator/percona/watcher.getLatestBackup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; .../wal.go:123
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;github.com/percona/percona-postgresql-operator/percona/watcher.WatchCommitTimestamps
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; .../wal.go:65&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The CompletedAt field is not validated before being accessed in getLatestBackup(), which causes a crash during WAL watcher execution.&lt;/p&gt;
&lt;p&gt;This panic can crash the operator’s goroutine, interrupting WAL monitoring and potentially affecting backup or failover logic.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.5.0, 2.6.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.7.0 - Pending Release&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-792" target="_blank" rel="noopener noreferrer"&gt;K8SPG-792&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; The upstream operator includes functionality that allows cluster or operator administrators to define default PostgreSQL images for each major version using environment variables. This enables users to create clusters without explicitly specifying spec.image, as the operator will automatically apply the predefined image.&lt;/p&gt;
&lt;p&gt;However, a recently introduced Patroni version check does not align with this behavior. It introduces a hardcoded dependency on spec.image, effectively bypassing the default image mechanism and undermining the intended feature.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.6.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; A possible workaround exists by manually setting the Patroni version through annotations, but this is not ideal and diminishes the convenience and flexibility originally provided.&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; A fix or workaround is expected in a future release.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1651" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1651&lt;/a&gt;: While testing the Pod Scheduling Policy feature in &lt;a href="https://docs.percona.com/everest/index.html" target="_blank" rel="noopener noreferrer"&gt;Everest&lt;/a&gt;, we encountered a situation where a PXC database pod remained in the &lt;strong&gt;Pending&lt;/strong&gt; state. This occurred because Kubernetes was unable to schedule the pod on any available node due to an affinity configuration mismatch.&lt;/p&gt;
&lt;p&gt;However, even after updating the affinity rules in the PerconaXtraDBCluster object, the new configuration was not propagated to the pod, and it remained in the &lt;strong&gt;Pending&lt;/strong&gt; state.&lt;/p&gt;
&lt;p&gt;The fact that the Pod remains stuck in Pending &lt;strong&gt;even after affinity is changed or removed&lt;/strong&gt; — and only a manual kubectl delete pod resolves it — indicates that &lt;strong&gt;the operator fails to reconcile affinity changes properly&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This issue affects other operators as well, not just PXC.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 1.17.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 1.20.0 - Yet to be released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1648" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1648&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; The PVC size is rounded up to the nearest whole GiB value (e.g., 1.2Gi becomes 2Gi). When a storage resize operation is triggered, the operator deletes the existing StatefulSet (STS) and recreates it with the new requested PVC size.&lt;/p&gt;
&lt;p&gt;However, if the new requested size rounds up to the same value as the original, the operator does not recreate the STS. Instead, it attempts to update the existing STS, which leads to the following error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error: failed to deploy pxc: updatePod for pxc: failed to create or update sts:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;update error: StatefulSet.apps "minimal-cluster-pxc" is invalid: spec: Forbidden:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;updates to statefulset spec for fields other than 'replicas', 'ordinals',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'minReadySeconds' are forbidden&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This issue affects other operators as well, not just PXC.&lt;/p&gt;
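&lt;p&gt;The GiB rounding described above can be sketched in Python (a hypothetical illustration of the behavior, not the operator’s actual code):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import math

GIB = 1024 ** 3

def round_up_to_gib(size_bytes):
    # Round a requested PVC size up to the nearest whole GiB.
    return math.ceil(size_bytes / GIB) * GIB

# 1.2Gi and 1.7Gi both round up to 2Gi, so a resize between them
# produces the same rounded size and the STS is not recreated.
print(round_up_to_gib(int(1.2 * GIB)) == round_up_to_gib(int(1.7 * GIB)))
&lt;/code&gt;&lt;/pre&gt;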
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 1.17.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 1.20.0 - Yet to be released&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="pbm-percona-backup-for-mongodb"&gt;PBM [Percona Backup for MongoDB]&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1499" target="_blank" rel="noopener noreferrer"&gt;PBM-1499&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; Restore to Missing Backup Fails with Unclear Error in Restore Custom Resource Status.&lt;br&gt;
When attempting to restore from a backup that does not exist in the main storage, the operator logs correctly report the failure:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;define base backup: get backup metadata from storage: get from store: no such file&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;However, the status.error field in the PerconaServerMongoDBRestore custom resource only shows a generic message:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;error: 'define base backup: %v'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This results in a misleading or unclear error message being surfaced to the user through the custom resource, even though the logs contain the full and accurate description of the issue.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.8.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.10.0 - Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1502" target="_blank" rel="noopener noreferrer"&gt;PBM-1502&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; In &lt;strong&gt;PBM 2.9.0&lt;/strong&gt;, running pbm profile sync &lt;profile-name&gt; fails with the error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error: &lt;profile-name&gt; or --all must be provided&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This occurs &lt;strong&gt;even when a valid profile name is given&lt;/strong&gt;, such as:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pbm profile sync azure-blob&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The issue affects all defined profiles (azure-blob, gcp-cs, minio) and prevents syncing individual profiles. This appears to be a bug where the CLI fails to recognize the passed argument.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.9.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Use pbm profile sync --all instead&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.10.0 - Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1538" target="_blank" rel="noopener noreferrer"&gt;PBM-1538&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; Backup is marked as successful, despite the oplog not being uploaded.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.4.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.10.0 - Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1551" target="_blank" rel="noopener noreferrer"&gt;PBM-1551&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; In a single-node PSMDB replica set with one PBM agent, PBM occasionally &lt;strong&gt;re-executes the last issued command&lt;/strong&gt;, causing errors like:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;active lock is present&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This typically occurs when the database is under load.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.9.1&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.10.0 - Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1553" target="_blank" rel="noopener noreferrer"&gt;PBM-1553&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; When restoring a 33-shard physical backup from a mixed (MongoDB Enterprise + Percona) production cluster into a Percona-only test cluster, &lt;strong&gt;PBM intermittently fails during the “clean-up and reset replicaset config” stage&lt;/strong&gt;. Some shards restore successfully, while others restore only partially or fail entirely.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Both clusters run MongoDB 6, with FCV set to 5.&lt;/li&gt;
&lt;li&gt;Restore uses --replset-remapping due to different replica set names.&lt;/li&gt;
&lt;li&gt;Issue affects restores regardless of matching node count per shard.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The problem appears tied to the restore logic handling replica set configuration cleanup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.9.1&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.10.0 - Released&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1564" target="_blank" rel="noopener noreferrer"&gt;PBM-1564&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; A user environment experiences repeated failures during &lt;strong&gt;incremental backups&lt;/strong&gt; with the error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[ERROR: cannot use the configured storage: source backup is stored on a different storage]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Full, base incremental, and logical backups succeed without issues. There is no indication of recent storage or configuration changes, and pbm status shows backup attempts occur close together.&lt;/p&gt;
&lt;p&gt;The issue temporarily resolves after running pbm config --force-resync, suggesting a possible bug in storage metadata syncing or internal state handling.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.8.0&lt;br&gt;
&lt;strong&gt;Upstream Bug:&lt;/strong&gt; Not Applicable&lt;br&gt;
&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Not Available&lt;br&gt;
&lt;strong&gt;Fixed/Planned Version/s:&lt;/strong&gt; 2.10.0 - Released&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://jira.percona.com" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>PMM</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>MongoDB</category>
      <category>Percona</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/06/BugReportJune2025_hu_be31cb01591cf5d8.jpg"/>
      <media:content url="https://percona.community/blog/2025/06/BugReportJune2025_hu_257336b252c98756.jpg" medium="image"/>
    </item>
    <item>
      <title>What's new in PMM 3.2.0: Five major improvements you need to know</title>
      <link>https://percona.community/blog/2025/06/03/percona-monitoring-management-3-2-five-improvements/</link>
      <guid>https://percona.community/blog/2025/06/03/percona-monitoring-management-3-2-five-improvements/</guid>
      <pubDate>Tue, 03 Jun 2025 00:00:00 UTC</pubDate>
      <description>PMM 3.2.0 brings some long-awaited fixes and new capabilities. You can now install PMM Client on Amazon Linux 2023 with proper RPM packages, get complete MySQL 8.4 replication monitoring, and track MongoDB backups directly in PMM.</description>
      <content:encoded>&lt;p&gt;PMM 3.2.0 brings some long-awaited fixes and new capabilities. You can now install PMM Client on Amazon Linux 2023 with proper RPM packages, get complete MySQL 8.4 replication monitoring, and track MongoDB backups directly in PMM.&lt;/p&gt;
&lt;p&gt;Here’s what’s most important in this release:&lt;/p&gt;
&lt;h2 id="1-native-amazon-linux-2023-support---no-more-workarounds"&gt;1. Native Amazon Linux 2023 support - no more workarounds&lt;/h2&gt;
&lt;p&gt;What’s new: If you’ve been running PMM Client on AL2023 and dealing with complex manual installations, those days are over. You can now install PMM Client through &lt;a href="https://repo.percona.com" target="_blank" rel="noopener noreferrer"&gt;native RPM packages&lt;/a&gt; just like any other supported platform.&lt;/p&gt;
&lt;p&gt;What this means for you: Streamlined setup means you can get your Amazon Linux 2023 environments monitored faster.&lt;/p&gt;
&lt;h2 id="2-complete-mysql-84-replication-monitoring"&gt;2. Complete MySQL 8.4 replication monitoring&lt;/h2&gt;
&lt;p&gt;What’s new: PMM now fully supports replication monitoring for MySQL 8.4, including key metrics like IO Thread status, SQL Thread status, and Replication Lag. MySQL 8.4 changed how these metrics are exposed, and earlier PMM versions couldn’t track them accurately.&lt;/p&gt;
&lt;p&gt;What this means for you: With the upgraded MySQL Exporter (v0.17.2), you now get complete replication monitoring across all supported MySQL versions (5.7, 8.0, and 8.4) without any visibility gaps.&lt;/p&gt;
&lt;h2 id="3-mongodb-backup-monitoring-dashboard"&gt;3. MongoDB backup monitoring dashboard&lt;/h2&gt;
&lt;p&gt;What’s new: The new &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/reference/dashboards/dashboard-mongodb-PBM-details.html" target="_blank" rel="noopener noreferrer"&gt;PBM Details dashboard&lt;/a&gt; lets you monitor MongoDB backups directly in PMM using the PBM collector. Instead of switching between PMM and separate backup tools, you now get a real-time, unified view of backup activity across replica sets and sharded clusters.&lt;/p&gt;
&lt;p&gt;What this means for you: Easily track backup status, configuration, size, duration, PITR status, and recent successful backups—all in one place. No more tool-hopping to stay on top of your backup operations.&lt;/p&gt;
&lt;h2 id="4-grafana-116-upgrade-with-enhanced-capabilities"&gt;4. Grafana 11.6 upgrade with enhanced capabilities&lt;/h2&gt;
&lt;p&gt;What’s new: PMM now ships with Grafana 11.6, delivering enhanced visualization capabilities and improved alerting workflows.&lt;/p&gt;
&lt;p&gt;Key features include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Alert state history for reviewing historical changes in alert statuses&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improved panel features and visualization actions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Simplified alert creation with better UI workflows&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Recording rules for creating pre-computed metrics&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Navigation bookmarks for quick dashboard access&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What this means for you: These enhancements make your monitoring dashboards more interactive, your alerting more sophisticated, and your overall monitoring workflow more efficient.&lt;/p&gt;
&lt;h2 id="5-dramatically-improved-query-analytics-performance"&gt;5. Dramatically improved Query Analytics performance&lt;/h2&gt;
&lt;p&gt;What’s new: We’ve optimized QAN filter loading performance to reduce the number of processed rows by up to 95% in large environments.&lt;/p&gt;
&lt;p&gt;What this means for you: Filters on the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/use/qan/index.html?h=query+ana" target="_blank" rel="noopener noreferrer"&gt;PMM Query Analytics page&lt;/a&gt; now load much faster, making the interface more responsive and your troubleshooting more efficient.&lt;/p&gt;
&lt;h2 id="additional-improvements-worth-noting"&gt;Additional improvements worth noting&lt;/h2&gt;
&lt;p&gt;Beyond these five major enhancements, PMM 3.2.0 also introduces:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Secure ClickHouse connections with authenticated credential support&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;MongoDB Feature Compatibility Version (FCV) panels for better cluster version visibility&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nomad integration laying groundwork for future extensibility&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Numerous bug fixes improving stability across ProxySQL, PostgreSQL, and MySQL monitoring&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="getting-started-with-pmm-320"&gt;Getting started with PMM 3.2.0&lt;/h2&gt;
&lt;p&gt;Ready to experience these improvements? Set up your PMM 3.2.0 instance using our &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/quickstart/quickstart.html" target="_blank" rel="noopener noreferrer"&gt;quickstart guide&lt;/a&gt; or upgrade your existing installation following our &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/pmm-upgrade/migrating_from_pmm_2.html" target="_blank" rel="noopener noreferrer"&gt;migration documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For existing users with external PostgreSQL databases, make sure to review the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/pmm-upgrade/external_postgres_pmm_upgrade.html" target="_blank" rel="noopener noreferrer"&gt;external PostgreSQL configuration migration guide&lt;/a&gt; before upgrading.&lt;/p&gt;
&lt;p&gt;Questions or feedback? We’d love to hear from you! Connect with the Percona community through our &lt;a href="https://forums.percona.com/c/percona-monitoring-and-management-pmm/30/none" target="_blank" rel="noopener noreferrer"&gt;forums&lt;/a&gt; or join the conversation on our community channels.&lt;/p&gt;</content:encoded>
      <author>Catalina Adam</author>
      <category>PMM</category>
      <category>Monitoring</category>
      <category>Percona</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2025/06/PMM-32-five_hu_b44082c6b85bf6ef.jpg"/>
      <media:content url="https://percona.community/blog/2025/06/PMM-32-five_hu_9306982df4fa628.jpg" medium="image"/>
    </item>
    <item>
      <title>PostgreSQL 18 - Top Enterprise Features (fast read)</title>
      <link>https://percona.community/blog/2025/05/26/postgresql-18-top-enterprise-features-fast-read/</link>
      <guid>https://percona.community/blog/2025/05/26/postgresql-18-top-enterprise-features-fast-read/</guid>
      <pubDate>Mon, 26 May 2025 00:00:00 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/05/pg18img_hu_9976bc51805a49cb.png 480w, https://percona.community/blog/2025/05/pg18img_hu_31fc038739fc52b8.png 768w, https://percona.community/blog/2025/05/pg18img_hu_c86e80e79914b07a.png 1400w"
src="https://percona.community/blog/2025/05/pg18img.png" alt="Postgres 18 is coming!" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;So the &lt;a href="https://www.postgresql.org/about/news/postgresql-18-beta-1-released-3070/" target="_blank" rel="noopener noreferrer"&gt;Beta1 is available for PostgreSQL 18&lt;/a&gt; and while not all the &lt;a href="https://www.postgresql.org/docs/18/release-18.html" target="_blank" rel="noopener noreferrer"&gt;features&lt;/a&gt; are guaranteed to make it to GA, we can surely hope they do!&lt;/p&gt;
&lt;p&gt;Taking a close look at &lt;a href="https://www.postgresql.org/docs/18/release-18.html" target="_blank" rel="noopener noreferrer"&gt;what’s coming&lt;/a&gt;, here is a selection of what excites me in particular:&lt;/p&gt;
&lt;h2 id="1-oauth-20-authentication-support"&gt;1. OAuth 2.0 authentication support&lt;/h2&gt;
&lt;p&gt;→ Finally aligns with modern enterprise SSO and identity standards (e.g., Okta, Azure AD). A major win for security teams and regulatory compliance.&lt;/p&gt;
&lt;h2 id="2-logical-replication-from-standbys-now-with-conflict-logging"&gt;2. Logical replication from standbys now with conflict logging&lt;/h2&gt;
&lt;p&gt;→ You can now replicate from replicas, not only primary nodes, and thanks to conflict logging, troubleshooting moves closer to what users have been asking for. It’s a big step toward robust, native, HA-friendly logical replication. Not there yet, but on the right path!&lt;/p&gt;
&lt;h2 id="3-asynchronous-io-aio"&gt;3. Asynchronous I/O (AIO)&lt;/h2&gt;
&lt;p&gt;→ Modern async reads improve performance, especially under heavy parallel workloads. Foundation for future IO improvements, and also a feature that scratches an itch for a lot of cloud deployments.&lt;/p&gt;
&lt;h2 id="4-faster--safer-major-upgrades"&gt;4. Faster &amp; safer major upgrades&lt;/h2&gt;
&lt;p&gt;→ &lt;code&gt;pg_upgrade&lt;/code&gt; enhancements like parallel upgrade checks (&lt;code&gt;--jobs&lt;/code&gt;), safer upgrades (&lt;code&gt;--swap&lt;/code&gt;), and planner stats carried forward = faster version adoption and smoother upgrades for large clusters.&lt;/p&gt;
&lt;h2 id="5-observability"&gt;5. Observability++&lt;/h2&gt;
&lt;p&gt;→ Enhanced EXPLAIN output and pg_stat_io improvements help you understand and optimize I/O behavior across tables, indexes, and WAL. This reduces the need for external monitoring tools.&lt;/p&gt;
&lt;h2 id="other-notable-features"&gt;Other Notable Features&lt;/h2&gt;
&lt;p&gt;While these are the top mentions, the goodies are not limited to these. Some smaller improvements are just as exciting; the most interesting ones for me are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;New&lt;/strong&gt; &lt;code&gt;extension_control_path&lt;/code&gt; &lt;strong&gt;Server Variable&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Enables operators to manage PostgreSQL extensions via &lt;strong&gt;Kubernetes image volumes&lt;/strong&gt; (&lt;a href="https://kubernetes.io/blog/2025/04/29/kubernetes-v1-33-image-volume-beta/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes 1.33 image volumes&lt;/a&gt;) without modifying the base image.&lt;/p&gt;
&lt;p&gt;→ Huge win for immutable image strategies and GitOps-friendly operator design.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Easier online constraints management&lt;/p&gt;
&lt;p&gt;→ Add new &lt;code&gt;NOT NULL&lt;/code&gt; constraints without locking large tables using &lt;code&gt;NOT VALID&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;→ Use &lt;code&gt;NOT ENFORCED&lt;/code&gt; foreign keys and &lt;code&gt;CHECK&lt;/code&gt; to model relationships without runtime overhead&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Smarter index maintenance (bottom-up deletion)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Reduces bloat, lowers vacuum overhead.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SQL/JSON path support + JSON performance gains&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Enables document-style querying at scale.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Better insights into queries and vacuum&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ &lt;code&gt;EXPLAIN&lt;/code&gt; now includes buffer usage in subplans, triggers, and functions which helps spot slow parts&lt;/p&gt;
&lt;p&gt;→ &lt;code&gt;pg_stat_all_tables&lt;/code&gt; now tracks how much time vacuum and autovacuum take per table&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Indexing improvements&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Parallel GIN builds speed up index creation for full-text and vector search, key for hybrid search workloads&lt;/p&gt;
&lt;p&gt;→ B-tree skip scans make range and selective queries faster&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
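&lt;p&gt;As a rough sketch of the online-constraints pattern described above (table and constraint names are invented; the long-standing CHECK … NOT VALID form is shown, and PostgreSQL 18 extends the same idea to NOT NULL constraints):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Add the constraint without scanning existing rows (avoids a long lock):
ALTER TABLE orders
    ADD CONSTRAINT orders_total_not_null CHECK (total IS NOT NULL) NOT VALID;

-- Validate later, when convenient, with only a lightweight lock:
ALTER TABLE orders VALIDATE CONSTRAINT orders_total_not_null;
&lt;/code&gt;&lt;/pre&gt;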
&lt;h2 id="what-enterprises-would-want-in-postgresql-19"&gt;What enterprises would want in PostgreSQL 19+&lt;/h2&gt;
&lt;p&gt;There is already a lot to like in this release, but based on what we hear from users, customers, and our own teams, here is what is still high on the list.
First up, and this one’s close to home: we’d love to see the Transparent Data Encryption (TDE) patches from Percona Server for PostgreSQL make their way upstream. That would allow users to benefit from &lt;code&gt;pg_tde&lt;/code&gt; directly in Community PostgreSQL Server.
The rest of the list is a mix of long-standing asks and forward-looking ideas. It is a wishlist for sure, but one we hope to help make real over time:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Built-in Logical Conflict Resolution Algorithms&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Support for conflict-handling strategies (e.g., last-write-wins, column-level rules) would simplify bidirectional replication and eliminate the need for custom frameworks, opening the door for fully open-source multi-master setups.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Logical failover orchestration&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Seamless promotion and failover in logical topologies, with less reliance on external tooling. This would be great from the perspective of both Kubernetes deployments as well as the ease of use for HA solutions out there.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Better integration with external auth systems&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Automatic PostgreSQL user creation based on OAUTH/LDAP roles at login, reducing operational burden for large-scale identity management and central access control.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pluggable or columnar storage support&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Native support or better extension hooks for OLAP and hybrid workloads, closing the gap with cloud-native alternatives like Citus or Redshift.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Sharding to provide horizontal scaling&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;→ Transparent sharding is a highly desired capability that becomes critical as workloads scale. While not always needed on day one, having built-in sharding means teams can grow without reinventing the wheel. Lack of it makes horizontal scaling complex, requiring more expertise and introducing higher operational overhead for DBA teams.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Jan Wieremjewicz</author>
      <category>PostgreSQL</category>
      <category>Opensource</category>
      <category>pg_jan</category>
      <media:thumbnail url="https://percona.community/blog/2025/05/jan-pg-18-cover_hu_d632612eb3789ade.jpg"/>
      <media:content url="https://percona.community/blog/2025/05/jan-pg-18-cover_hu_a35314ad552524c4.jpg" medium="image"/>
    </item>
    <item>
      <title>OS Platform End of Life (EOL) Announcement for Ubuntu 20.04 LTS</title>
      <link>https://percona.community/blog/2025/05/22/os-platform-end-of-life-eol-announcement-for-ubuntu-20.04-lts/</link>
      <guid>https://percona.community/blog/2025/05/22/os-platform-end-of-life-eol-announcement-for-ubuntu-20.04-lts/</guid>
      <pubDate>Thu, 22 May 2025 00:00:00 UTC</pubDate>
      <description>Ubuntu 20.04 LTS (Focal Fossa) is scheduled to reach its official end of life on May 31, 2025. In alignment with the upstream vendor’s lifecycle, we are also ending platform support for Ubuntu 20.04 for all our MySQL related product offerings. This date and others are published in advance on our Percona Release Lifecycle Overview page.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://ubuntu.com/blog/ubuntu-20-04-lts-end-of-life-standard-support-is-coming-to-an-end-heres-how-to-prepare" target="_blank" rel="noopener noreferrer"&gt;Ubuntu 20.04 LTS&lt;/a&gt; (Focal Fossa) is scheduled to reach its official end of life on &lt;strong&gt;May 31, 2025&lt;/strong&gt;. In alignment with the upstream vendor’s lifecycle, we are also ending platform support for Ubuntu 20.04 for all our MySQL related product offerings. This date and others are published in advance on our &lt;a href="https://www.percona.com/services/policies/percona-software-support-lifecycle" target="_blank" rel="noopener noreferrer"&gt;Percona Release Lifecycle Overview&lt;/a&gt; page.&lt;/p&gt;
&lt;p&gt;As part of our support policy, &lt;strong&gt;Percona will continue to provide advisory support for databases running on EOL platforms&lt;/strong&gt;, but effective &lt;strong&gt;April 1, 2025&lt;/strong&gt;, we have discontinued:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Delivery of new packages or binary builds&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Distribution of hotfixes or bug fixes for Percona software on Ubuntu 20.04&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OS-level support for issues not related to the database itself&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, all existing packages will remain available for download.&lt;/p&gt;
&lt;p&gt;We are committed to ensuring a smooth and seamless experience for our users. Migrating to a supported operating system will ensure that you continue to receive security updates, bug fixes, and new features for our products.&lt;/p&gt;
&lt;p&gt;We encourage you to take action promptly and plan your migration from Ubuntu Focal to a supported operating system. Each operating system vendor has different supported migration or upgrade paths to their next major release. Please &lt;a href="https://www.percona.com/services" target="_blank" rel="noopener noreferrer"&gt;contact us&lt;/a&gt; if you need assistance migrating your database to a different supported OS platform – we will be happy to assist you!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If requested by a customer, we may provide updated Percona packages for up to six months beyond the Percona EOL date, as a courtesy grace period.&lt;/p&gt;</content:encoded>
      <author>Julia Vural</author>
      <category>Ubuntu</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/05/eol-ubuntu_hu_d33ab1f53ade240d.jpg"/>
      <media:content url="https://percona.community/blog/2025/05/eol-ubuntu_hu_78dbe9450f5c4a1.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Bug Report: April 2025</title>
      <link>https://percona.community/blog/2025/05/19/percona-bug-report-april-2025/</link>
      <guid>https://percona.community/blog/2025/05/19/percona-bug-report-april-2025/</guid>
      <pubDate>Mon, 19 May 2025 00:00:00 UTC</pubDate>
      <description>At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.</description>
      <content:encoded>&lt;p&gt;At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.&lt;/p&gt;
&lt;p&gt;We constantly update our &lt;a href="https://jira.percona.com/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; and monitor &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other boards&lt;/a&gt; to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This post is a central place to get information on the most noteworthy open and recently resolved bugs.&lt;/p&gt;
&lt;p&gt;In this edition of our bug report, we have the following list of bugs.&lt;/p&gt;
&lt;h2 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8846" target="_blank" rel="noopener noreferrer"&gt;PS-8846&lt;/a&gt;: The ALTER INSTANCE RELOAD TLS thread gets stuck. This happens in instances with a high new connections rate (&gt;60/s) but not in all instances. Percona and Upstream did not fix it properly because the current implementation of ALTER INSTANCE RELOAD TLS requires all existing SSL connections to be closed. “A thread that executes ALTER INSTANCE RELOAD TLS tries to acquire an RCU Lock(Read-Copy-Update), waiting for the number of readers to become 0. In other words, when the server has a constant flow of new incoming SSL connections, the chances of acquiring this lock are pretty low.” Therefore, Percona and Oracle only partially fixed this; this fix should improve this behaviour.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.32-24, 8.0.41&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9609" target="_blank" rel="noopener noreferrer"&gt;PS-9609&lt;/a&gt;: The audit_log_filter can’t be installed when the server is using component_keyring_kmip&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.39-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42-33&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9628" target="_blank" rel="noopener noreferrer"&gt;PS-9628&lt;/a&gt;: The binlog_encryption does not work with component_keyring_kmip&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.40-31&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42-33&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9664" target="_blank" rel="noopener noreferrer"&gt;PS-9664&lt;/a&gt;: With a very simple workload, MyRocks allocates a lot of memory and does not free it when the workload finishes. All instrumentation available either does not provide information about memory allocated or provides only part of it. As a result, users cannot predict how much RAM to install on the server that runs the MyRocks storage engine. With InnoDB same workload requires about 1.7G and frees about 0.5G once the job is finished.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.39-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9719" target="_blank" rel="noopener noreferrer"&gt;PS-9719&lt;/a&gt;: When changing &lt;a href="https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-options-binary-log.html#sysvar_binlog_transaction_dependency_tracking" target="_blank" rel="noopener noreferrer"&gt;binlog_transaction_dependency_tracking&lt;/a&gt; in high load workload, MySQL got a segmentation fault.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.40, 8.0.41-32, 8.0.33&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: &lt;a href="https://bugs.mysql.com/bug.php?id=117922" target="_blank" rel="noopener noreferrer"&gt;117922&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: “set global binlog_transaction_dependency_tracking = commit_order;”&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42-33, 8.4.5-5&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9768" target="_blank" rel="noopener noreferrer"&gt;PS-9768&lt;/a&gt;: An unexpected duplicate error occurs when running a select query with a group by JSON data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.41-32, 8.4.4-4&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: &lt;a href="https://bugs.mysql.com/bug.php?id=117927" target="_blank" rel="noopener noreferrer"&gt;117927&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Rebuilding the table with “ALTER TABLE db_name.table_name ENGINE = rocksdb;” can fix the issue.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4512" target="_blank" rel="noopener noreferrer"&gt;PXC-4512&lt;/a&gt;: When DDLs run against tables with foreign key references when there is a write load simultaneously. The issue is typically triggered during pt-online-schema-change execution, and after a dozen or so iterations, random PXC nodes will terminate with MDL BF-BF conflict. Sometimes, the writer fails, and sometimes, the other nodes, but it can be reproducible with just the RENAME query.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.33-25, 8.0.35-27, 8.0.36-28, 8.0.37-29, 8.0.41&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Only a full write stop would help.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42, 8.4.5&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4648" target="_blank" rel="noopener noreferrer"&gt;PXC-4648&lt;/a&gt;: After upgrading from 8.0.41 to 8.4.3, the node can’t join the group with the following error.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[ERROR] [MY-000000] [Galera] /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.4.3/percona-xtradb-cluster-galera/gcs/src/gcs_group.cpp:group_check_proto_ver():343: Group requested gcs_proto_ver: 5, max supported by this node: 4.Upgrade the node before joining this group.Need to abort.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The problem is that 8.0.41 and 8.4.4 use Galera 26.4.21, which introduced GCS protocol version 5, while 8.0.40 and 8.4.3 use Galera 26.4.20. As a result, a node that does not understand protocol 5 tries to join a cluster that already uses protocol 5.&lt;/p&gt;
&lt;p&gt;This is addressed as a documented known issue in the upgrade guide, which can be seen &lt;a href="https://docs.percona.com/percona-xtradb-cluster/8.4/upgrade-guide.html?h=upgrade+newest+8.0+version+ensure+is+newer+corresponding+8.4+plan" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.4.3&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.4.4&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4638" target="_blank" rel="noopener noreferrer"&gt;PXC-4638&lt;/a&gt;: The binlog_utils_udf plugin fails to access binlog files correctly after an SST (State Snapshot Transfer) due to inconsistencies in the mysql-bin.index file.  After an SST, the first entry in the mysql-bin.index file can be incorrectly formatted with a relative path, while subsequent entries use absolute paths. This inconsistency can prevent the binlog_utils_udf plugin from locating the correct binlog files.&lt;/p&gt;
&lt;p&gt;The resulting mysql-bin.index content after SST might appear as:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql-bin.000010
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/home/user/sandboxes/pxc_msb_8_0_40/node2/data/mysql-bin.000011
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/home/user/sandboxes/pxc_msb_8_0_40/node2/data/mysql-bin.000012 &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.40, 8.0.41&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Rewriting mysql-bin.index to add the “./” prefix to the first entry and then running FLUSH BINARY LOGS resolves the issue.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42, 8.4.5&lt;/p&gt;
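&lt;p&gt;The workaround above can be scripted. The sketch below is hypothetical (the directory and binlog file names are illustrative stand-ins, not taken from the report); it prefixes “./” to any index entry that has no directory component, after which FLUSH BINARY LOGS should be run as described:&lt;/p&gt;

```shell
# Hypothetical sketch of the PXC-4638 workaround: repair a mysql-bin.index
# whose first entry lost its path prefix after an SST.
# Paths below are illustrative; on a real node use the actual datadir.
DATADIR=$(mktemp -d)   # stand-in for the real data directory
printf '%s\n' 'mysql-bin.000010' \
  '/var/lib/mysql/mysql-bin.000011' > "$DATADIR/mysql-bin.index"

# Prefix "./" to entries that carry no directory component at all
# (lines already starting with "/" or "./" are left untouched).
sed -i 's|^\([^/.]\)|./\1|' "$DATADIR/mysql-bin.index"

cat "$DATADIR/mysql-bin.index"
# After fixing the file on a real node, run: FLUSH BINARY LOGS;
```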
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-3576" target="_blank" rel="noopener noreferrer"&gt;PXC-3576&lt;/a&gt;: Deploying a new installation using the setting lower_case_table_names=1 on the startup generates the following entry on the log:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Warning] [MY-010324] [Server] 'db' entry 'percona_schema mysql.pxc.sst.role@localhost' had database in mixed case that has been forced to lowercase because lower_case_table_names is set. It will not be possible to remove this privilege using REVOKE.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Looking at mysql.db, we can see that the deployment created the mysql.pxc.sst.role grant mapped to an uppercase database name:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; select * from mysql.db where user='mysql.pxc.sst.role';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+--------------------+--------------------+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Host | Db | User | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Create_tmp_table_priv | Lock_tables_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Execute_priv | Event_priv | Trigger_priv |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+--------------------+--------------------+-------------+-------------+-------------+-------------+-------------+-----------+------------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| localhost | PERCONA_SCHEMA | mysql.pxc.sst.role | N | N | N | N | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| localhost | percona_schema | mysql.pxc.sst.role | N | N | N | N | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N |&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This produces a duplicate entry for the percona_schema database and a warning message in the MySQL log.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.21-12.1, 8.0.41&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42, 8.4.5&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4657" target="_blank" rel="noopener noreferrer"&gt;PXC-4657&lt;/a&gt;: When executing DML and DDL on the same table, the DML will get a deadlock error. If the DML does not change the data but matches, it won’t be replicated, for example.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql &gt; UPDATE test.t SET d = d LIMIT 1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.01 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Rows matched: 1 Changed: 0 Warnings: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;If the table contains a trigger, the DML does not get a deadlock error and will be replicated to other nodes.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TRIGGER `t_on_update`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;AFTER UPDATE ON `t`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FOR EACH ROW
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;BEGIN
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; INSERT INTO t_history (`d`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; VALUES (NEW.`d`);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;END&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;When other nodes apply the DML and DDL, the applier threads will get an MDL BF-BF conflict.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.40, 8.0.41&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Using a single applier thread avoids the bug, but it may introduce performance issues such as flow control.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.42, 8.4.5&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4664" target="_blank" rel="noopener noreferrer"&gt;PXC-4664&lt;/a&gt;: Converting thd-&gt;rli_slave to target type Slave_worker* causes a segmentation fault.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.41&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-toolkit"&gt;Percona Toolkit&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2392" target="_blank" rel="noopener noreferrer"&gt;PT-2392&lt;/a&gt;: pt-online-schema-change resume functionality doesn’t work with ADD INDEX&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.6.0, 3.7.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.1&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2322" target="_blank" rel="noopener noreferrer"&gt;PT-2322&lt;/a&gt;: The issue reports that pt-mysql-summary does not correctly detect and display the jemalloc memory management library, even when it is enabled. Despite jemalloc being loaded and visible in the process memory map (/proc/&lt;mysqld_pid&gt;/maps), the output from pt-mysql-summary is missing this information in some cases, unlike version 3.2.1, which correctly identifies and reports it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.5.6, 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.1&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2422" target="_blank" rel="noopener noreferrer"&gt;PT-2422&lt;/a&gt;: When using the –history option with pt-online-schema-change (pt-osc), the query responsible for updating the history entry with the new_table_name is not appropriately constrained by a primary or unique key. As a result, this UPDATE operation can inadvertently modify all entries in the history table, rather than just the intended row.&lt;/p&gt;
&lt;p&gt;This behavior can lead to significant issues when running multiple schema change operations in parallel, as the history entries for different migrations may interfere with each other, causing data consistency problems.&lt;/p&gt;
&lt;p&gt;Additionally, if a migration is paused and later resumed, this lack of key constraint can result in only a subset of the data being correctly copied to the new table, potentially leading to partial data loss or corruption when the final table swap occurs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.5.6, 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.1&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2442" target="_blank" rel="noopener noreferrer"&gt;PT-2442&lt;/a&gt;: Multiple security vulnerabilities have been identified in the latest version of Percona Toolkit, including:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2024-56171" target="_blank" rel="noopener noreferrer"&gt;CVE-2024-56171&lt;/a&gt;: Use-After-Free Vulnerability in libxml2&lt;/p&gt;
&lt;p&gt;&lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2024-12797" target="_blank" rel="noopener noreferrer"&gt;CVE-2024-12797&lt;/a&gt;: OpenSSL Raw Public Key Authentication Vulnerability&lt;/p&gt;
&lt;p&gt;&lt;a href="https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-37967" target="_blank" rel="noopener noreferrer"&gt;CVE-2022-37967&lt;/a&gt;: Windows Kerberos Elevation of Privilege Vulnerability&lt;/p&gt;
&lt;p&gt;&lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2025-24928" target="_blank" rel="noopener noreferrer"&gt;CVE-2025-24928&lt;/a&gt;: Stack-Based Buffer Overflow in libxml2&lt;/p&gt;
&lt;p&gt;It is recommended that the associated advisories be reviewed and the necessary patches or upgrades be applied to mitigate the risk.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.7.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.7.0-1&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="pmm-percona-monitoring-and-management"&gt;PMM (Percona Monitoring and Management)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13694" target="_blank" rel="noopener noreferrer"&gt;PMM-13694&lt;/a&gt;: When using a non-default pg_stat_statements.max value, the calculated QPS displayed in QAN may be wrong.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.38.0, 2.44.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.2.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13807" target="_blank" rel="noopener noreferrer"&gt;PMM-13807&lt;/a&gt;: pmm-agent crashed at query.Fingerprint due to query including a column named “value”&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.44.0, 3.0.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.2.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13847" target="_blank" rel="noopener noreferrer"&gt;PMM-13847&lt;/a&gt;: PMM 3.0 doesn’t support running on a different uid/gid in Kubernetes&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.0.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13984" target="_blank" rel="noopener noreferrer"&gt;PMM-13984&lt;/a&gt;: Percona Monitoring and Management (PMM) version 3.1 with OVA image currently cannot be imported into VMware environments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 3.1.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12784" target="_blank" rel="noopener noreferrer"&gt;PMM-12784&lt;/a&gt;: This error invalid GetActionRequest.ActionId: value length must be at least 1 runes. occurs in the QAN (Query Analytics) dashboard, indicating that the ActionId field in the GetActionRequest is empty or improperly formatted. This typically results from a missing or incorrectly populated Action ID parameter, which is required for retrieving query data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.33.0, 2.41.0, 2.44.0, 3.0.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 3.2.0&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-xtrabackup"&gt;Percona XtraBackup&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3421" target="_blank" rel="noopener noreferrer"&gt;PXB-3421&lt;/a&gt;: XtraBackup fails when the –databases parameter contains a very long list of databases or has a large amount of whitespace before the actual database names. When the –databases parameter is provided with 1859 whitespace characters before a table name (e.g., db01.t1), XtraBackup crashes with a signal error. If the number of whitespace characters is reduced to 1858 or fewer, the backup proceeds successfully without error.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.35-31&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3426" target="_blank" rel="noopener noreferrer"&gt;PXB-3426&lt;/a&gt;: Using KMIP component causes double free of memory on error paths.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.35-31&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 8.0.35-33, 8.4.0-3&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3392" target="_blank" rel="noopener noreferrer"&gt;PXB-3392&lt;/a&gt;: xtrabackup doesn’t pick up –innodb-log-group-home-dir config parameter&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 8.0.35-30, 8.0.35-31, 8.0.35-32&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-703" target="_blank" rel="noopener noreferrer"&gt;K8SPG-703&lt;/a&gt;: When using ttlSecondsAfterFinished, there is a potential race condition where the backup jobs may be deleted before the Percona operator has had sufficient time to reconcile the perconapgbackups objects. This issue can occur even with relatively long timeouts like 1m, 5m, or 30m, not just extremely short intervals.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.5.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.7.0&lt;/p&gt;
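&lt;p&gt;Until the fix lands, one mitigation is to avoid aggressive TTLs on backup jobs. The fragment below is an illustrative sketch only: the field path follows the upstream pgBackRest jobs spec the operator inherits, and the names and values are assumptions to verify against your operator version:&lt;/p&gt;

```yaml
# Illustrative fragment (field names follow the upstream CRD; verify against
# your operator version). A generous TTL gives the operator time to
# reconcile perconapgbackups objects before the Job is garbage-collected.
apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
spec:
  backups:
    pgbackrest:
      jobs:
        ttlSecondsAfterFinished: 86400   # 24h; avoid very short TTLs
```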
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-1263" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-1263&lt;/a&gt;: While creating a 1.3TB logical backup, the replica’s state changes to “errored” after approximately 16 hours with the message:&lt;/p&gt;
&lt;p&gt;“failed to find CERTIFICATE”&lt;/p&gt;
&lt;p&gt;despite the backup continuing to run and eventually completing. This raises concerns about the validity of the backup and the ability to restore from it without manually altering its status to “success.”&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.13.0, 1.14.0, 1.15.0, 1.19.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.20.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-1292" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-1292&lt;/a&gt;: When spec.tls.mode is set to requireTLS, physical backup restores fail with a “server selection timeout” error. This occurs because the operator cannot establish a secure connection to the MongoDB server, resulting in closed socket errors and the inability to disable Point-in-Time Recovery (PiTR).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.19.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.21.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-1294" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-1294&lt;/a&gt;: When using MCS on GKE 1.30, an API version mismatch occurs, resulting in the error:&lt;/p&gt;
&lt;p&gt;“no matches for kind ‘ServiceImport’ in version ‘net.gke.io/v1alpha1’”&lt;/p&gt;
&lt;p&gt;This indicates that the expected ServiceImport kind is not available in the specified API version, preventing proper service discovery.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.19.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.20.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-1336" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-1336&lt;/a&gt;: Restoring a backup into a new Kubernetes cluster can lead to “Time monotonicity violation” errors on config servers and mongos, causing the pods to restart. This occurs when the restored chunk version timestamps are earlier than the expected timestamps in the new cluster, resulting in tripwire assertions and persistent crashes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.19.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: To Be Determined&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1548" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1548&lt;/a&gt;: Failed to delete old backups on Google Cloud Storage&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 1.14.0, 1.15.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 1.18.0&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="pbm-percona-backup-for-mongodb"&gt;PBM (Percona Backup for MongoDB)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1482" target="_blank" rel="noopener noreferrer"&gt;PBM-1482&lt;/a&gt;: Selective Restore with replset-remapping hangs on oplog replay and doesn’t finish&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.5.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.10.0&lt;/p&gt;
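&lt;p&gt;For context, the hang occurs when a selective restore is combined with replica set name remapping, invoked along these lines (the backup name, namespace, and replica set names below are placeholders):&lt;/p&gt;

```sh
# Illustrative invocation only; substitute your own backup name,
# namespace, and replica set names.
pbm restore 2026-03-01T00:00:00Z \
  --ns=mydb.mycollection \
  --replset-remapping="rsSource=rsTarget"
```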
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1487" target="_blank" rel="noopener noreferrer"&gt;PBM-1487&lt;/a&gt;: Error Location6493100 on mongos after successful logical restore or PITR&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.8.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.10.0&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PBM-1531" target="_blank" rel="noopener noreferrer"&gt;PBM-1531&lt;/a&gt;: PBM Restore getting randomly stuck&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s&lt;/strong&gt;: 2.6.0, 2.7.0, 2.8.0, 2.9.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug&lt;/strong&gt;: Not Applicable&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix&lt;/strong&gt;: Not Available&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed/Planned Version/s&lt;/strong&gt;: 2.10.0&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://jira.percona.com" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>PMM</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>MongoDB</category>
      <category>Percona</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/05/BugReportApril2025_hu_9ef0f28e35b29c61.jpg"/>
      <media:content url="https://percona.community/blog/2025/05/BugReportApril2025_hu_a174c2ff678d8a6c.jpg" medium="image"/>
    </item>
    <item>
      <title>Setting Up and Monitoring MongoDB 8 Replica Sets with PMM 3 Using Docker: A Beginner-Friendly Guide</title>
      <link>https://percona.community/blog/2025/03/18/setting-up-and-monitoring-mongodb-8-replica-sets-with-pmm-3-using-docker-a-beginner-friendly-guide/</link>
      <guid>https://percona.community/blog/2025/03/18/setting-up-and-monitoring-mongodb-8-replica-sets-with-pmm-3-using-docker-a-beginner-friendly-guide/</guid>
      <pubDate>Tue, 18 Mar 2025 00:00:00 UTC</pubDate>
      <description>This guide explains how to set up a MongoDB 8 Replica Set and monitor it using PMM 3, all within Docker. We’ll guide you through the steps to create a local environment, configure the necessary components, and connect them for effective monitoring and management.</description>
      <content:encoded>&lt;p&gt;This guide explains how to set up a MongoDB 8 Replica Set and monitor it using PMM 3, all within Docker. We’ll guide you through the steps to create a local environment, configure the necessary components, and connect them for effective monitoring and management.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The guide is written in detail for beginners; &lt;a href="#conclusion"&gt;the conclusion&lt;/a&gt; section provides ready-to-use configurations for experienced users.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The recent &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/release-notes/3.0.0.html" target="_blank" rel="noopener noreferrer"&gt;release&lt;/a&gt; of &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management 3&lt;/a&gt; introduces several new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Upgraded Grafana version for an improved user experience.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rootless containers for enhanced security.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ARM support for the pmm-client.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Monitoring capabilities for MongoDB 8, along with &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/reference/dashboards/dashboard-mongodb-router-summary.html" target="_blank" rel="noopener noreferrer"&gt;new dashboards&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This article is intended for developers and DBAs who want to experiment with these tools locally using Docker. We will cover the following steps to set everything up and test the functionality:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Launch the PMM 3 pmm-server for monitoring and open it in a browser.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install MongoDB, starting with a standalone server, and then convert it into a Replica Set with three nodes. &lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt; images are used in this article.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the pmm-client for MongoDB to send metrics to the pmm-server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Explore the PMM 3 &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/reference/dashboards/dashboard-mongodb-router-summary.html" target="_blank" rel="noopener noreferrer"&gt;dashboards&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/pmm-mongodb-rs_hu_8e877e10d43414f4.png 480w, https://percona.community/blog/2025/03/pmm-mongodb-rs_hu_4f9758ca9365ee98.png 768w, https://percona.community/blog/2025/03/pmm-mongodb-rs_hu_42d0c424fcd5e4b8.png 1400w"
src="https://percona.community/blog/2025/03/pmm-mongodb-rs.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Dashboard - MongoDB Replica Set Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We will use Docker Compose to define and run multiple containers efficiently through a single &lt;code&gt;docker-compose.yaml&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;If you’re ready to dive into the world of Dockerized database monitoring and management, let’s get started!&lt;/p&gt;
&lt;h2 id="step-zero-preparation"&gt;Step Zero: Preparation&lt;/h2&gt;
&lt;p&gt;To get started, you need a terminal to run Docker commands and a text editor to modify the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;You also need Docker installed on your system. If Docker is not installed, follow these instructions to set it up:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Docker Desktop: This application includes both Docker and Docker Compose. It is available for multiple operating systems. This guide uses Docker Desktop on macOS with an Apple Silicon ARM processor. &lt;a href="https://www.docker.com/products/docker-desktop/" target="_blank" rel="noopener noreferrer"&gt;Download Docker Desktop&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker and Docker Compose (separately): If preferred, install Docker and Docker Compose individually. Use the following links for guidance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.docker.com/get-started/" target="_blank" rel="noopener noreferrer"&gt;Download Docker for your OS&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.docker.com/compose/install/" target="_blank" rel="noopener noreferrer"&gt;Download Docker Compose&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="command-to-verify-installation"&gt;Command to Verify Installation:&lt;/h3&gt;
&lt;p&gt;If you’re using Docker Desktop, Docker Compose is already included. If you installed Docker Compose separately, you can verify the installation with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose --version&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This should return the installed Docker Compose version, for example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ community git:&lt;span class="o"&gt;(&lt;/span&gt;main&lt;span class="o"&gt;)&lt;/span&gt; ✗ docker-compose --version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Docker Compose version v2.21.0-desktop.1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once Docker and Docker Compose are installed and verified, you are ready to move on to the next steps of deploying PMM 3 and MongoDB in Docker.&lt;/p&gt;
&lt;h3 id="project-directory"&gt;Project Directory&lt;/h3&gt;
&lt;p&gt;Create a directory where we’ll store the configuration and necessary files. For example, I created a directory named &lt;code&gt;pmm-mongodb-setup&lt;/code&gt; and navigated into it.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir pmm-mongodb-setup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; pmm-mongodb-setup&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="step-one-starting-pmm-3-using-docker-compose"&gt;Step One: Starting PMM 3 Using Docker Compose&lt;/h2&gt;
&lt;p&gt;To start with, we will launch PMM 3, specifically the pmm-server, which we will later access via a browser.&lt;/p&gt;
&lt;h3 id="create-the-docker-compose-configuration-file"&gt;Create the Docker Compose Configuration File&lt;/h3&gt;
&lt;p&gt;First, create a file named &lt;code&gt;docker-compose.yaml&lt;/code&gt; in your project directory. Then, copy and paste the following configuration into the file to set up PMM 3:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'3'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pmm-server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/pmm-server:3&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linux/amd64"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Specifies that Docker should use the image for the amd64 architecture, which is necessary if the container doesn't support ARM and your host system is ARM (e.g., Mac with Apple Silicon).&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-server&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="m"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;8443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Defines a command to check the container's health and sets the timing for executions and retries.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CMD-SHELL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl -k -f -L https://pmm-server:8443 &gt; /dev/null 2&gt;&amp;1 || exit 1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;30s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;10s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;p&gt;We define the first service, &lt;code&gt;pmm-server&lt;/code&gt;, which uses the &lt;code&gt;percona/pmm-server:3&lt;/code&gt; image:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;platform&lt;/code&gt;: This parameter ensures compatibility with ARM-based processors, such as my Mac with Apple Silicon, by instructing Docker to use the image for the amd64 architecture.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;healthcheck&lt;/code&gt;: This parameter performs a check to confirm that the container has started successfully.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
&lt;h3 id="start-the-pmm-3-container"&gt;Start the PMM 3 Container&lt;/h3&gt;
&lt;p&gt;Save the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file, and then use the following command to start the PMM 3 container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This command will download the PMM 3 image if it is not already available locally and start the container in detached mode.&lt;/p&gt;
&lt;p&gt;Expected result in the terminal:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup docker-compose up -d
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;+&lt;span class="o"&gt;]&lt;/span&gt; Running 2/2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Network pmm-mongodb-setup_default Created 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-server Started 0.9s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected result in Docker Desktop:
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/docker-pmm-start_hu_3909e5a4671f6495.png 480w, https://percona.community/blog/2025/03/docker-pmm-start_hu_898ccea59f593e2b.png 768w, https://percona.community/blog/2025/03/docker-pmm-start_hu_ad0659857f814903.png 1400w"
src="https://percona.community/blog/2025/03/docker-pmm-start.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Docker Desktop Start" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="open-pmm-in-a-browser"&gt;Open PMM in a browser&lt;/h3&gt;
&lt;p&gt;Now you can open PMM in your browser at https://localhost (the compose file maps host port 443 to the container’s HTTPS port 8443; accept the self-signed certificate warning if your browser shows one). Use admin/admin as the username and password to log in. When prompted to change the password, skip this step by clicking the Skip button. Since this is a test setup, we keep the simple password so that the other containers can use it later.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/03/pmm-login.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Login" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/pmm-home_hu_6f0a507496133b77.png 480w, https://percona.community/blog/2025/03/pmm-home_hu_1406d5f6bdc98fc6.png 768w, https://percona.community/blog/2025/03/pmm-home_hu_6568e254d94ccb86.png 1400w"
src="https://percona.community/blog/2025/03/pmm-home.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Home Page" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now, PMM 3 is successfully running using Docker Compose.&lt;/p&gt;
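&lt;p&gt;If you prefer the command line, a quick sanity check (assuming the port mappings from the compose file above) confirms that the server answers over HTTPS:&lt;/p&gt;

```sh
# Host port 443 is mapped to the container's 8443. PMM serves a
# self-signed certificate, so -k skips certificate verification.
curl -k -I https://localhost/
```

&lt;p&gt;A 2xx or 3xx status line here means the pmm-server container is reachable.&lt;/p&gt;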
&lt;h2 id="step-two-starting-mongodb"&gt;Step Two: Starting MongoDB&lt;/h2&gt;
&lt;p&gt;We start by launching a standalone MongoDB service. This simple configuration allows us to quickly understand how it operates before moving on to a more advanced setup with a Replica Set.&lt;/p&gt;
&lt;p&gt;To keep the database data persistent, even after a restart, a &lt;code&gt;volume&lt;/code&gt; is used for MongoDB data storage. Add the following configuration to the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file under the &lt;code&gt;pmm-server&lt;/code&gt; service:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"percona/percona-server-mongodb:8.0-multi"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-data:/data/db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;databaseAdmin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;password&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"27017:27017"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--port"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"27017"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--profile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--slowms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--rateLimit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"100"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CMD-SHELL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mongosh --eval 'db.adminCommand(\"ping\")' --quiet"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;30s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;10s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# MongoDB data storage volume&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;environment&lt;/code&gt;: Configures the root user’s credentials (username and password).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;command&lt;/code&gt;: Sets additional parameters for MongoDB:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;bind_ip_all&lt;/code&gt;: Allows external access to the database, for instance, through MongoDB Compass.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;profile&lt;/code&gt;: Enables profiling settings to support Query Analytics (QAN) in PMM.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;healthcheck&lt;/code&gt;: Periodically pings MongoDB with &lt;code&gt;mongosh&lt;/code&gt; so Docker can mark the container as healthy; other services can wait for this state via &lt;code&gt;depends_on&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;volumes&lt;/code&gt;: Defines a named Docker volume so that MongoDB data persists across container restarts.&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
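Since the service binds to all interfaces and maps port 27017, you can sanity-check connectivity from the host once the container is running. A minimal check, assuming `mongosh` is installed locally and the credentials configured above:

```shell
# Ping the standalone instance from the host (credentials as configured in the compose file)
mongosh "mongodb://databaseAdmin:password@localhost:27017/?authSource=admin" \
  --quiet --eval 'db.runCommand({ ping: 1 })'
```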
&lt;h3 id="launching-the-configuration"&gt;Launching the Configuration&lt;/h3&gt;
&lt;p&gt;Save the updated &lt;code&gt;docker-compose.yaml&lt;/code&gt; file and launch the MongoDB service by running the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Docker Compose checks the configuration and starts the MongoDB container.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup docker-compose up -d
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;+&lt;span class="o"&gt;]&lt;/span&gt; Running 3/3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Volume &lt;span class="s2"&gt;"pmm-mongodb-setup_mongodb-data"&lt;/span&gt; Created 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-mongodb-setup-mongodb-1 Started 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-server Running 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
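If you prefer the terminal over Docker Desktop, the healthcheck result can also be read directly from Docker. A sketch, assuming the container name shown in the compose output above:

```shell
# Ask Docker for the state reported by the container's healthcheck;
# prints "healthy" once the mongosh ping succeeds
docker inspect --format '{{.State.Health.Status}}' pmm-mongodb-setup-mongodb-1
```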
&lt;p&gt;Verifying MongoDB in Docker Desktop:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/docker-pmm-mongodb_hu_bb9842f0502a5d59.png 480w, https://percona.community/blog/2025/03/docker-pmm-mongodb_hu_171053abe6d42b2.png 768w, https://percona.community/blog/2025/03/docker-pmm-mongodb_hu_33937ddce310b985.png 1400w"
src="https://percona.community/blog/2025/03/docker-pmm-mongodb.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Docker Desktop MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="step-three-pmm-client"&gt;Step Three: PMM Client&lt;/h2&gt;
&lt;p&gt;At this point, we have both the PMM Server, which is accessible in the browser, and MongoDB running. To transfer metrics from MongoDB to the PMM Server, a container with &lt;code&gt;pmm-client&lt;/code&gt; needs to be started.&lt;/p&gt;
&lt;p&gt;Add another service &lt;code&gt;pmm-client&lt;/code&gt; to the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file, right after &lt;code&gt;mongodb&lt;/code&gt;. Note that &lt;code&gt;volumes:&lt;/code&gt; must remain at the bottom of the file.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pmm-client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/pmm-client:3&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-client&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;depends_on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pmm-server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;service_healthy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;service_healthy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_ADDRESS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-server:8443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_INSECURE_TLS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_CONFIG_FILE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;config/pmm-agent.yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SETUP&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SETUP_FORCE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_PRERUN_SCRIPT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;&gt;&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pmm-admin status --wait=10s &amp;&amp;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pmm-admin add mongodb --username=databaseAdmin --password=password --host=mongodb --port=27017 --query-source=profiler&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;depends_on&lt;/code&gt;: Ensures that pmm-client starts only after pmm-server and mongodb have started successfully and passed the healthcheck.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PMM_AGENT_PRERUN_SCRIPT&lt;/code&gt;: Runs once the agent is up: it waits for the agent to connect to the server (&lt;code&gt;pmm-admin status --wait=10s&lt;/code&gt;) and then registers MongoDB for monitoring with &lt;code&gt;pmm-admin add mongodb&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PMM_AGENT_SERVER_ADDRESS&lt;/code&gt;: Specifies the PMM Server address and uses port 8443.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PMM_AGENT_SERVER_USERNAME&lt;/code&gt; and &lt;code&gt;PMM_AGENT_SERVER_PASSWORD&lt;/code&gt;: Update these values if you have changed the PMM login credentials.&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
&lt;h3 id="applying-the-updated-configuration"&gt;Applying the Updated Configuration&lt;/h3&gt;
&lt;p&gt;Run the following command to apply the updated &lt;code&gt;docker-compose.yaml&lt;/code&gt; configuration and start the &lt;code&gt;pmm-client&lt;/code&gt; service:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After running the command, you should see the &lt;code&gt;pmm-client&lt;/code&gt; container start successfully:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup docker-compose up -d
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;+&lt;span class="o"&gt;]&lt;/span&gt; Running 3/3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-mongodb-setup-mongodb-1 Healthy 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-server Healthy 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-client Started 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
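Beyond the dashboard, the agent itself can confirm that registration succeeded. A quick check from inside the pmm-client container (container name as defined in the compose file above):

```shell
# Verify agent-to-server connectivity and list the services PMM monitors;
# the list should include the mongodb service added by the prerun script
docker exec pmm-client pmm-admin status
docker exec pmm-client pmm-admin list
```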
&lt;p&gt;Open PMM in your browser. On the homepage, you should now see MongoDB listed as a monitored service:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/pmm-home-mongodb_hu_b6fcfc17c39e35b2.png 480w, https://percona.community/blog/2025/03/pmm-home-mongodb_hu_45fe2e0cc9332840.png 768w, https://percona.community/blog/2025/03/pmm-home-mongodb_hu_7c396f8ba7d7098.png 1400w"
src="https://percona.community/blog/2025/03/pmm-home-mongodb.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - PMM MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="step-four-convert-a-standalone-mongodb-to-a-replica-set"&gt;Step Four: Convert a Standalone MongoDB to a Replica Set&lt;/h2&gt;
&lt;p&gt;A single MongoDB instance works well for development and testing. At this point, you can already connect to the database from your application or tools such as MongoDB Compass and run various NoSQL queries.&lt;/p&gt;
&lt;p&gt;However, the goal is to deploy a Replica Set with three members, the topology recommended for production. For now, we run all three members on a single machine with a single &lt;code&gt;docker-compose.yaml&lt;/code&gt;; this layout is intended for testing and development only, since one host remains a single point of failure.&lt;/p&gt;
&lt;p&gt;Both the mongodb and pmm-client services need to be updated.&lt;/p&gt;
&lt;h3 id="stopping-all-services"&gt;Stopping All Services&lt;/h3&gt;
&lt;p&gt;First, stop all the currently running services:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose down &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Result:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup docker-compose down
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;+&lt;span class="o"&gt;]&lt;/span&gt; Running 4/4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-client Removed 0.3s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-mongodb-setup-mongodb-1 Removed 0.4s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-server Removed 4.5s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Network pmm-mongodb-setup_default Removed&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="generating-a-key-file"&gt;Generating a Key File&lt;/h3&gt;
&lt;p&gt;To run three MongoDB replicas that can securely communicate with each other, a key file is required.&lt;/p&gt;
&lt;p&gt;Create a &lt;code&gt;secrets&lt;/code&gt; folder next to the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file and generate the &lt;code&gt;mongodb-keyfile&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir secrets
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;openssl rand -base64 &lt;span class="m"&gt;128&lt;/span&gt; &gt; secrets/mongodb-keyfile
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;chmod &lt;span class="m"&gt;600&lt;/span&gt; secrets/mongodb-keyfile&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
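The `chmod 600` step is not optional: mongod refuses to load a key file that is readable by group or others. After generating the file, it is worth verifying the permissions; a minimal check:

```shell
# A MongoDB key file must be readable by its owner only (mode 600)
ls -l secrets/mongodb-keyfile | grep '^-rw-------' && echo "permissions OK"
```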
&lt;p&gt;In this case, the &lt;code&gt;mongodb-keyfile&lt;/code&gt; is generated inside the &lt;code&gt;secrets&lt;/code&gt; folder. If your operating system does not support these commands, manually create the &lt;code&gt;secrets&lt;/code&gt; folder and add a &lt;code&gt;mongodb-keyfile&lt;/code&gt; with the following content:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rVLhIK2PhZKGxysjwMR4t1OmNppqdAzEs408hrbzg95D146mn9YENixId6pvIGCA
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Cy9hc1k6OKKabbv7Rm347NwSFxbdPPx0/jnaO80U/a6/mv0XqSmEl8wdR91b4jIm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;d98LobplwRs4b7g9cnLMUAIULr0WG+J36NtKIA6q4eE=&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="add-three-mongodb-services-for-replica-set"&gt;Add three mongodb services for Replica Set&lt;/h3&gt;
&lt;p&gt;Remove the existing &lt;code&gt;mongodb&lt;/code&gt; service from the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file and replace it with three Replica Set services:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs101&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-server-mongodb:8.0-multi&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;mongodb-rs101&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"27017:27017"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--port"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"27017"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--replSet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--keyFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/etc/secrets/mongodb-keyfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--profile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--slowms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--rateLimit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"100"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;databaseAdmin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;password&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-data-101:/data/db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./secrets:/etc/secrets:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CMD-SHELL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mongosh --host localhost --port 27017 --username databaseAdmin --password password --authenticationDatabase admin --eval 'rs.status().ok || 1'"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;30s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;10s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs102&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-server-mongodb:8.0-multi&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;mongodb-rs102&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"28017:28017"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--port"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"28017"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--replSet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--keyFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/etc/secrets/mongodb-keyfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--profile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--slowms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--rateLimit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"100"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;databaseAdmin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;password&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-data-102:/data/db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./secrets:/etc/secrets:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CMD-SHELL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mongosh --host localhost --port 28017 --username databaseAdmin --password password --authenticationDatabase admin --eval 'rs.status().ok || 1'"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;30s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;10s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs103&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-server-mongodb:8.0-multi&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;mongodb-rs103&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"29017:29017"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--port"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"29017"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--replSet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--keyFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/etc/secrets/mongodb-keyfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--profile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--slowms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--rateLimit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"100"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;databaseAdmin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;password&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-data-103:/data/db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./secrets:/etc/secrets:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CMD-SHELL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mongosh --host localhost --port 29017 --username databaseAdmin --password password --authenticationDatabase admin --eval 'rs.status().ok || 1'"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;30s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;10s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Key Points:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;./secrets:/etc/secrets:ro&lt;/code&gt;: Mounts the key file from your host into the container, read-only.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ports&lt;/code&gt;: Each replica listens on a different port, since all three run on the same machine.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;command&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--keyFile&lt;/code&gt;: Enables internal authentication between the replica set members using the shared key file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--replSet&lt;/code&gt;: Assigns the node to the Replica Set named &lt;code&gt;rs&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;healthcheck&lt;/code&gt;: Reports each node as healthy once it responds, so that pmm-client (via &lt;code&gt;depends_on&lt;/code&gt;) starts only after all Replica Set members are up.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
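&lt;p&gt;The services above mount a key file from &lt;code&gt;./secrets&lt;/code&gt;. If you have not created it yet, one common way to generate it (a sketch, assuming the &lt;code&gt;secrets&lt;/code&gt; directory sits next to your compose file) is:&lt;/p&gt;

```shell
# Generate a random base64 key and restrict its permissions,
# as mongod requires for the file passed to --keyFile.
mkdir -p secrets
openssl rand -base64 756 > secrets/mongodb-keyfile
chmod 400 secrets/mongodb-keyfile
```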
&lt;h3 id="adding-volumes"&gt;Adding Volumes&lt;/h3&gt;
&lt;p&gt;Define three separate volumes for data storage:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-data-101&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-data-102&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;mongodb-data-103:&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="initializing-the-replica-set"&gt;Initializing the Replica Set&lt;/h3&gt;
&lt;p&gt;Add another service to initialize the Replica Set (after &lt;code&gt;mongodb-rs103&lt;/code&gt;):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs-init&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-server-mongodb:8.0-multi&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;rs-init&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;depends_on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-rs101&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-rs102&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;mongodb-rs103&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;entrypoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"until mongosh --host mongodb-rs101 --port 27017 --username databaseAdmin --password password --authenticationDatabase admin --eval 'print(\"waited for connection\")'; do sleep 5; done &amp;&amp; \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; mongosh --host mongodb-rs101 --port 27017 --username databaseAdmin --password password --authenticationDatabase admin --eval 'config={\"_id\":\"rs\",\"members\":[{\"_id\":0,\"host\":\"mongodb-rs101:27017\"},{\"_id\":1,\"host\":\"mongodb-rs102:28017\"},{\"_id\":2,\"host\":\"mongodb-rs103:29017\"}],\"settings\":{\"keyFile\":\"/etc/secrets/mongodb-keyfile\"}};rs.initiate(config);'"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./secrets:/etc/secrets:ro&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This service connects to one of the replicas and initializes the Replica Set configuration.&lt;/p&gt;
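&lt;p&gt;To confirm the initialization succeeded, you can inspect the init container's logs and query the Replica Set status from one of the nodes (a sketch, using the container names and credentials defined above):&lt;/p&gt;

```shell
# Check that the rs-init script ran to completion
docker logs rs-init

# Print each member's name and state (one should be PRIMARY, two SECONDARY)
docker exec mongodb-rs101 mongosh --port 27017 \
  --username databaseAdmin --password password --authenticationDatabase admin \
  --quiet --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
```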
&lt;h3 id="updating-pmm-client"&gt;Updating pmm-client&lt;/h3&gt;
&lt;p&gt;Finally, modify the pmm-client service to register all three Replica Set nodes:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pmm-client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/pmm-client:3&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-client&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;depends_on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pmm-server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;service_healthy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs101&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;service_healthy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs102&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;service_healthy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;mongodb-rs103&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;service_healthy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_ADDRESS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-server:8443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_INSECURE_TLS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_CONFIG_FILE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;config/pmm-agent.yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SETUP&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SETUP_FORCE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_PRERUN_SCRIPT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;&gt;&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pmm-admin status --wait=10s &amp;&amp;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pmm-admin add mongodb --service-name=mongodb-rs101 --username=databaseAdmin --password=password --host=mongodb-rs101 --port=27017 --query-source=profiler &amp;&amp;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pmm-admin add mongodb --service-name=mongodb-rs102 --username=databaseAdmin --password=password --host=mongodb-rs102 --port=28017 --query-source=profiler &amp;&amp;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pmm-admin add mongodb --service-name=mongodb-rs103 --username=databaseAdmin --password=password --host=mongodb-rs103 --port=29017 --query-source=profiler&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;depends_on&lt;/code&gt;: Ensures pmm-client starts only after pmm-server and all three replicas report healthy.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PMM_AGENT_PRERUN_SCRIPT&lt;/code&gt;: Registers all three replicas for PMM monitoring with the &lt;code&gt;pmm-admin add mongodb&lt;/code&gt; command.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h3 id="launching-the-configuration-1"&gt;Launching the Configuration&lt;/h3&gt;
&lt;p&gt;Start all services with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up -d &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected Output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup docker-compose up -d
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;+&lt;span class="o"&gt;]&lt;/span&gt; Running 10/10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Network pmm-mongodb-setup_default Created 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Volume &lt;span class="s2"&gt;"pmm-mongodb-setup_mongodb-data-101"&lt;/span&gt; Created 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Volume &lt;span class="s2"&gt;"pmm-mongodb-setup_mongodb-data-102"&lt;/span&gt; Created 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Volume &lt;span class="s2"&gt;"pmm-mongodb-setup_mongodb-data-103"&lt;/span&gt; Created 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container mongodb-rs103 Healthy 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-server Healthy 1.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container mongodb-rs101 Healthy 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container mongodb-rs102 Healthy 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container rs-init Started 0.1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ✔ Container pmm-client Started 0.0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ pmm-mongodb-setup&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Expected Output in Docker Desktop:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/docker-desktop-rs_hu_c171901a96bf081a.png 480w, https://percona.community/blog/2025/03/docker-desktop-rs_hu_efaca3ac6ebcd41b.png 768w, https://percona.community/blog/2025/03/docker-desktop-rs_hu_1b33539b6c1a7e52.png 1400w"
src="https://percona.community/blog/2025/03/docker-desktop-rs.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Docker Desktop MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="verifying-in-pmm"&gt;Verifying in PMM&lt;/h3&gt;
&lt;p&gt;Open PMM and explore dashboards such as the MongoDB Replica Set Summary, which displays information about your replicas and various metrics:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/pmm-rs-services_hu_d0ad375de0dc8708.png 480w, https://percona.community/blog/2025/03/pmm-rs-services_hu_362f4b685a1025ac.png 768w, https://percona.community/blog/2025/03/pmm-rs-services_hu_447ef5090f685c71.png 1400w"
src="https://percona.community/blog/2025/03/pmm-rs-services.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - PMM MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For example, I experimented by restarting one of the MongoDB services in Docker Desktop to simulate a failure: the Replica Set elected a new Primary, and the monitoring dashboard reflected the event:&lt;/p&gt;
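&lt;p&gt;You can also confirm which node became Primary directly from the shell (a sketch, using the container names and credentials from the compose file):&lt;/p&gt;

```shell
# Ask any surviving member which node it currently sees as Primary
docker exec mongodb-rs101 mongosh --port 27017 \
  --username databaseAdmin --password password --authenticationDatabase admin \
  --quiet --eval 'rs.hello().primary'
```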
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/pmm-rs-statuses_hu_152c5a5848d218c0.png 480w, https://percona.community/blog/2025/03/pmm-rs-statuses_hu_a41e7ee71109cd35.png 768w, https://percona.community/blog/2025/03/pmm-rs-statuses_hu_9c7cef99e95ae73e.png 1400w"
src="https://percona.community/blog/2025/03/pmm-rs-statuses.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - PMM MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="connecting-to-mongodb-and-useful-commands"&gt;Connecting to MongoDB and Useful Commands&lt;/h2&gt;
&lt;h3 id="connecting-to-mongodb"&gt;Connecting to MongoDB&lt;/h3&gt;
&lt;p&gt;There are several ways to connect to MongoDB depending on your setup and tools:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Using Docker Desktop&lt;/p&gt;
&lt;p&gt;If you are using Docker Desktop, you can select a container and open the Exec tab. This opens a terminal within the container.&lt;/p&gt;
&lt;p&gt;From there, you can connect to MongoDB using the mongosh shell with the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongosh --host localhost --port &lt;span class="m"&gt;27017&lt;/span&gt; -u databaseAdmin -p password --authenticationDatabase admin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/03/mongodb-connect_hu_1993f455a7ac9c0d.png 480w, https://percona.community/blog/2025/03/mongodb-connect_hu_d494614723a4a044.png 768w, https://percona.community/blog/2025/03/mongodb-connect_hu_c87f2510dcd2bf5e.png 1400w"
src="https://percona.community/blog/2025/03/mongodb-connect.png" alt="Percona Monitoring and Management (PMM) 3.0.0 - Docker Desktop MongoDB Connect" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Using Docker CLI&lt;/p&gt;
&lt;p&gt;If you’re running Docker without Docker Desktop, you can connect to the container using the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it &lt;container_name&gt; bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once inside the container, connect to MongoDB:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongosh --host localhost --port &lt;span class="m"&gt;27017&lt;/span&gt; -u databaseAdmin -p password --authenticationDatabase admin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connecting from Applications or Tools&lt;/p&gt;
&lt;p&gt;If you are connecting through an application or a tool like MongoDB Compass, use a connection string tailored to your setup:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Standalone MongoDB: Connects directly to a single MongoDB instance.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongodb://databaseAdmin:password@localhost:27017&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Primary Node: Forces a direct connection to the primary node in the Replica Set using the directConnection=true option.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-26" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-26"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongodb://databaseAdmin:password@localhost:27017/?directConnection&lt;span class="o"&gt;=&lt;/span&gt;true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Full Replica Set: Lists all Replica Set members and enables automatic failover using the replicaSet=rs option.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-27" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-27"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongodb://databaseAdmin:password@localhost:27017,localhost:28017,localhost:29017/?replicaSet&lt;span class="o"&gt;=&lt;/span&gt;rs&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
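&lt;p&gt;One detail worth noting: if the username or password contains characters such as @ or :, they must be percent-encoded before being placed in the URI. A minimal sketch of building the strings above programmatically (Python standard library; the p@ss:word credential is a hypothetical example):&lt;/p&gt;

```python
from urllib.parse import quote_plus

# Hypothetical credentials; quote_plus percent-encodes @ and : safely.
user = quote_plus("databaseAdmin")
password = quote_plus("p@ss:word")

# 1. Standalone instance
standalone = f"mongodb://{user}:{password}@localhost:27017"

# 2. Direct connection to the primary node
primary = f"{standalone}/?directConnection=true"

# 3. Full Replica Set with automatic failover
hosts = ",".join(["localhost:27017", "localhost:28017", "localhost:29017"])
replica_set = f"mongodb://{user}:{password}@{hosts}/?replicaSet=rs"

print(standalone)
print(primary)
print(replica_set)
```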
&lt;h3 id="useful-commands"&gt;Useful Commands&lt;/h3&gt;
&lt;p&gt;Here are some helpful commands for managing and troubleshooting your MongoDB setup:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Check the status of the Replica Set. After connecting to MongoDB, use the following command to retrieve the status of the Replica Set:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-28" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-28"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rs.status&lt;span class="o"&gt;()&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check MongoDB runtime configuration. Use this command to view the configuration options for the running MongoDB instance:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-29" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-29"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;db.adminCommand&lt;span class="o"&gt;({&lt;/span&gt; getCmdLineOpts: &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="o"&gt;})&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
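&lt;p&gt;The output of rs.status() is verbose; when scripting health checks it helps to reduce it to member/state pairs. A minimal sketch (plain Python over an already-exported status document; the document shown is an abbreviated, hypothetical example, not real server output):&lt;/p&gt;

```python
# Abbreviated, hypothetical shape of an rs.status() document.
status = {
    "set": "rs",
    "members": [
        {"name": "localhost:27017", "stateStr": "PRIMARY", "health": 1},
        {"name": "localhost:28017", "stateStr": "SECONDARY", "health": 1},
        {"name": "localhost:29017", "stateStr": "SECONDARY", "health": 1},
    ],
}

def summarize(status):
    """Return (member name, state) pairs for healthy members only."""
    return [
        (m["name"], m["stateStr"])
        for m in status["members"]
        if m.get("health") == 1
    ]

for name, state in summarize(status):
    print(f"{name}: {state}")
```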
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this article, we explored deploying MongoDB using Docker and Docker Compose. We covered both a Single Instance MongoDB for simple setups and a MongoDB Replica Set for high availability, while integrating them with Percona Monitoring and Management (PMM) for monitoring.&lt;/p&gt;
&lt;p&gt;Here are the final configurations you can download:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Single Instance MongoDB + PMM 3: &lt;a href="https://gist.github.com/dbazhenov/fc954c9bd7f21e2ad17dffb4acfd7142" target="_blank" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Replica Set MongoDB + PMM 3: &lt;a href="https://gist.github.com/dbazhenov/fd47167734230d294a4aa10da623d1f2" target="_blank" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thank you for reading! I look forward to your comments and questions.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>MongoDB</category>
      <category>Docker</category>
      <category>Opensource</category>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/blog/2025/03/pmm-mongodb-cover_hu_4a7029d1f994dce1.jpg"/>
      <media:content url="https://percona.community/blog/2025/03/pmm-mongodb-cover_hu_3b760685321d77d.jpg" medium="image"/>
    </item>
    <item>
      <title>Join Us Online: Stream About Percona Toolkit for MySQL!</title>
      <link>https://percona.community/blog/2025/03/14/join-us-online-stream-about-percona-toolkit-for-mysql/</link>
      <guid>https://percona.community/blog/2025/03/14/join-us-online-stream-about-percona-toolkit-for-mysql/</guid>
      <pubDate>Fri, 14 Mar 2025 00:00:00 UTC</pubDate>
      <description>Are you passionate about databases and want to stay on top of the latest advancements in MySQL and Percona Toolkit? You’re in luck! We are excited to invite you to our upcoming online stream, where we’ll delve into some exciting changes and updates.</description>
      <content:encoded>&lt;p&gt;Are you passionate about databases and want to stay on top of the latest advancements in MySQL and Percona Toolkit? You’re in luck! We are excited to invite you to our upcoming online stream, where we’ll delve into some exciting changes and updates.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Date:&lt;/strong&gt; March 27, 2025&lt;br&gt;
&lt;strong&gt;Time:&lt;/strong&gt; 13:30 GMT / 9:30 ET&lt;br&gt;
&lt;strong&gt;Streaming Live on:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/events/removingoffensivelanguagefrompe7307408691371077632/theater/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://www.youtube.com/live/JOEpIQL7cXM" target="_blank" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;&lt;/p&gt;
&lt;h4 id="about-the-event"&gt;About the Event&lt;/h4&gt;
&lt;p&gt;Our featured speaker, Sveta Smirnova, Principal Support Engineering Coordinator at Percona, will share her insights on the recent updates in MySQL, focusing on the removal of offensive replication statements like START/STOP SLAVE. As the maintainer of the Percona Toolkit, Sveta had to rewrite numerous tools and libraries to accommodate these changes, resulting in significant updates to 511 files.&lt;/p&gt;
&lt;p&gt;This event is perfect for developers, DBAs, and anyone interested in databases. Sveta, an expert in MySQL and Percona Toolkit, is also an accomplished author and a speaker at international conferences on development and databases. During the stream, she will discuss the challenges she faced while renewing legacy code, including supporting SSL and handling the deprecation of certain tools.&lt;/p&gt;
&lt;p&gt;We invite you to join the discussion and share your experiences, questions, and insights. Engage directly with Sveta and other participants to gain a deeper understanding of the Percona Toolkit and its latest enhancements.&lt;/p&gt;
&lt;h4 id="discussion-topics"&gt;Discussion Topics&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Introduction to Percona Toolkit&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;History and evolution of Percona Toolkit.&lt;/li&gt;
&lt;li&gt;Overview of the most popular tools and their uses.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Changes in MySQL&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Overview of new and legacy syntax.&lt;/li&gt;
&lt;li&gt;Implications of dropped offensive replication statements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adapting Percona Toolkit&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Challenges in renewing legacy code.&lt;/li&gt;
&lt;li&gt;Solutions implemented, including SSL support and deprecation of certain tools.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Practical Insights&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Detailed explanation of the changes made to 511 files.&lt;/li&gt;
&lt;li&gt;Fine-tuning and migration strategies for Percona Toolkit users.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="why-attend"&gt;Why Attend?&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Insightful Content:&lt;/strong&gt; Gain valuable knowledge on the latest changes in MySQL and how they impact Percona Toolkit.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expert Guidance:&lt;/strong&gt; Learn directly from Sveta Smirnova, an industry expert with extensive experience in database management.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Interactive Session:&lt;/strong&gt; Have the opportunity to ask questions live and engage directly with the speaker.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="how-to-join"&gt;How to Join&lt;/h4&gt;
&lt;p&gt;Tune in on March 27, 2025, at 13:30 GMT / 9:30 ET, and watch the live stream on &lt;a href="https://www.linkedin.com/events/removingoffensivelanguagefrompe7307408691371077632/theater/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://www.youtube.com/live/JOEpIQL7cXM" target="_blank" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;. Don’t miss this chance to enhance your understanding of MySQL and Percona Toolkit while gaining practical insights from one of the best in the field.&lt;/p&gt;
&lt;p&gt;Mark your calendars, spread the word, and get ready for an informative session! We look forward to seeing you there!&lt;/p&gt;
&lt;p&gt;If you have any questions or need further information, feel free to reach out to us.&lt;/p&gt;
&lt;p&gt;📅 Add to Calendar&lt;br&gt;
🔗 &lt;a href="https://www.linkedin.com/events/removingoffensivelanguagefrompe7307408691371077632/theater/" target="_blank" rel="noopener noreferrer"&gt;Join the LinkedIn Stream&lt;/a&gt;&lt;br&gt;
🔗 &lt;a href="https://www.youtube.com/live/JOEpIQL7cXM" target="_blank" rel="noopener noreferrer"&gt;Join the YouTube Stream&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We can’t wait to see you at the event!&lt;/p&gt;</content:encoded>
      <author>Sveta Smirnova</author>
      <category>MySQL</category>
      <category>Toolkit</category>
      <category>Events</category>
      <media:thumbnail url="https://percona.community/events/streams/Live-Sveta-Edith-march-27_hu_8acd1ec53a021f.jpg"/>
      <media:content url="https://percona.community/events/streams/Live-Sveta-Edith-march-27_hu_b4d5e82775302e73.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 3 and rootless containers</title>
      <link>https://percona.community/blog/2025/02/19/percona-monitoring-and-management-3-and-rootless-containers/</link>
      <guid>https://percona.community/blog/2025/02/19/percona-monitoring-and-management-3-and-rootless-containers/</guid>
      <pubDate>Wed, 19 Feb 2025 00:00:00 UTC</pubDate>
      <description>In today’s landscape, where security breaches are a constant concern, reducing potential attack vectors is a top priority for any organization. Percona Monitoring and Management (PMM) has established itself as a reliable solution for database performance monitoring. With the release of PMM version 3, Percona has significantly strengthened its security posture, notably by introducing support for rootless container deployments. This advancement directly addresses a crucial security challenge and enhances the overall robustness and reliability of PMM.</description>
      <content:encoded>&lt;p&gt;In today’s landscape, where security breaches are a constant concern, reducing potential attack vectors is a top priority for any organization. Percona Monitoring and Management (PMM) has established itself as a reliable solution for database performance monitoring. With &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/release-notes/3.0.0.html" target="_blank" rel="noopener noreferrer"&gt;the release of PMM version 3&lt;/a&gt;, Percona has significantly strengthened its security posture, notably by introducing support for rootless container deployments. This advancement directly addresses a crucial security challenge and enhances the overall robustness and reliability of PMM.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/02/pmm3-homepage_hu_ab475622fd130cae.jpg 480w, https://percona.community/blog/2025/02/pmm3-homepage_hu_b786868d12aa39e8.jpg 768w, https://percona.community/blog/2025/02/pmm3-homepage_hu_256f4d15d640605c.jpg 1400w"
src="https://percona.community/blog/2025/02/pmm3-homepage.jpg" alt="Percona Monitoring and Management (PMM) 3.0.0" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The inherent risks associated with root privileges are well-documented. While many applications, including those containerized, have historically relied on root access, this practice presents a substantial security vulnerability. In the event of a successful exploit, an attacker gains comprehensive control over the host system. This risk is further exacerbated in environments with outdated software or complex configurations. Essentially, while the root user offers extensive capabilities, it also represents a significant potential liability that should be carefully mitigated.&lt;/p&gt;
&lt;p&gt;In this blog post, we will look at how PMM versions 2 and 3 behave differently in security-restricted Kubernetes environments.&lt;/p&gt;
&lt;h2 id="enforcing-pod-security-standards"&gt;Enforcing Pod Security Standards&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/" target="_blank" rel="noopener noreferrer"&gt;The Pod Security Standards&lt;/a&gt; define three different policies to broadly cover the security spectrum. These policies are cumulative and range from highly-permissive to highly-restrictive.&lt;/p&gt;
&lt;p&gt;To enforce restrictions on a specific namespace we will create the following YAML manifest:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Namespace
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: secure-namespace
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; labels:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pod-security.kubernetes.io/enforce: restricted
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pod-security.kubernetes.io/warn: restricted
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pod-security.kubernetes.io/audit: restricted&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This YAML creates a namespace named secure-namespace. The pod-security.kubernetes.io/enforce: restricted label instructs Kubernetes to deny any pods in this namespace that violate the “restricted” PSS profile. The warn and audit labels are also very useful for monitoring and testing before fully enforcing the restricted policy.&lt;/p&gt;
&lt;h2 id="deploy-pmm"&gt;Deploy PMM&lt;/h2&gt;
&lt;p&gt;We will execute a series of deployments to demonstrate the difference between PMM2 and PMM3 behavior in insecure and secure environments. All files can be found in this GitHub repository: &lt;a href="https://github.com/spron-in/blog-data/tree/master/pmm3-rootless" target="_blank" rel="noopener noreferrer"&gt;spron-in/blog-data/pmm3-rootless&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="regular-namespace"&gt;Regular namespace&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM2&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl apply -f 01.pmm2.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-server-0 1/1 Running 0 45s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I can connect to my PMM2 server with a Service that I created.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PMM3&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl apply -f 02.pmm3.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-server-0 1/1 Running 0 20s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I can connect to my PMM3 server with a Service that I created.&lt;/p&gt;
&lt;h3 id="secure-namespace"&gt;Secure namespace&lt;/h3&gt;
&lt;p&gt;Let’s try to deploy both versions of the Percona Monitoring and Management server in the secure namespace. For both, we will see the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl apply -f 01.pmm2.yaml -n secure-namespace
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pmm-server" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pmm-server" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pmm-server" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pmm-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl -n secure-namespace get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;No resources found in secure-namespace namespace.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;There are no Pods created and if you describe the StatefulSet, you are going to see a similar error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Warning FailedCreate 5s (x15 over 87s) statefulset-controller create Pod pmm-server-0 in StatefulSet pmm-server failed error: pods "pmm-server-0" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pmm-server" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pmm-server" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pmm-server" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pmm-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The error will be the same for PMM2 and PMM3 manifests.&lt;/p&gt;
&lt;h3 id="secure-namespace-and-security-contexts"&gt;Secure namespace and security contexts&lt;/h3&gt;
&lt;p&gt;We are now going to apply Pod and Container Security Contexts to both manifests.&lt;/p&gt;
&lt;p&gt;Under spec.containers we add everything that Kubernetes suggested:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: pmm-server
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; securityContext:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; allowPrivilegeEscalation: false
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; capabilities:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; drop:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ALL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; runAsNonRoot: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; seccompProfile:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type: 'RuntimeDefault'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
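&lt;p&gt;The four requirements quoted in the PodSecurity warning can also be checked mechanically before applying a manifest. A minimal sketch (plain Python over an already-parsed container spec; an illustration of the four container-level checks, not a full admission-controller implementation):&lt;/p&gt;

```python
def restricted_violations(container):
    """Return the container-level 'restricted' profile violations
    named in the PodSecurity warning."""
    sc = container.get("securityContext", {})
    violations = []
    if sc.get("allowPrivilegeEscalation") is not False:
        violations.append("allowPrivilegeEscalation != false")
    if "ALL" not in sc.get("capabilities", {}).get("drop", []):
        violations.append('capabilities.drop must include "ALL"')
    if sc.get("runAsNonRoot") is not True:
        violations.append("runAsNonRoot != true")
    if sc.get("seccompProfile", {}).get("type") not in ("RuntimeDefault", "Localhost"):
        violations.append("seccompProfile.type must be RuntimeDefault or Localhost")
    return violations

# Container spec mirroring the securityContext shown above.
pmm_container = {
    "name": "pmm-server",
    "securityContext": {
        "allowPrivilegeEscalation": False,
        "capabilities": {"drop": ["ALL"]},
        "runAsNonRoot": True,
        "seccompProfile": {"type": "RuntimeDefault"},
    },
}

print(restricted_violations(pmm_container))  # an empty list: no violations
```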
&lt;p&gt;&lt;strong&gt;PMM2&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl -n secure-namespace apply -f 03.pmm2-secure.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl -n secure-namespace get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-server-0 0/1 Error 2 (15s ago) 28s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Even though the PMM2 server Pod can be created now, it is failing to start. If you check the logs, you are going to see the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl -n secure-namespace logs pmm-server-0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Error: Can't drop privilege as nonroot user
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;For help, use /usr/local/bin/supervisord -h&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;PMM3&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl -n secure-namespace apply -f 04.pmm3-secure.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm-server created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;% kubectl -n secure-namespace get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-server-0 1/1 Running 0 32s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;PMM3 starts just fine.&lt;/p&gt;
&lt;h3 id="helm"&gt;Helm&lt;/h3&gt;
&lt;p&gt;The recommended approach to deploy PMM3 in Kubernetes is via Helm. You can find our Helm charts in the &lt;a href="https://github.com/percona/percona-helm-charts/tree/main/charts/pmm" target="_blank" rel="noopener noreferrer"&gt;percona/helm-charts&lt;/a&gt; GitHub repository and more in &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/install-pmm/install-pmm-server/deployment-options/helm/index.html" target="_blank" rel="noopener noreferrer"&gt;our documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To deploy PMM3 in a namespace or environment with strict security (like OpenShift), you need to pass similar security context parameters. The &lt;a href="https://github.com/spron-in/blog-data/blob/master/pmm3-rootless/05.pmm3-helm.yaml" target="_blank" rel="noopener noreferrer"&gt;05.pmm3-helm.yaml&lt;/a&gt; values manifest shows how.&lt;/p&gt;
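For orientation, the kind of settings such a values manifest adjusts looks like the following. These are standard Kubernetes `securityContext` fields required by the restricted Pod Security Standard, but the top-level value keys are assumptions here; check the chart's values.yaml (or the linked 05.pmm3-helm.yaml) for the actual key names.

```yaml
# Sketch only: standard Kubernetes securityContext fields expected by the
# restricted Pod Security Standard. The top-level keys below are assumed,
# not taken from the chart.
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
  seccompProfile:
    type: RuntimeDefault
containerSecurityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
```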
&lt;p&gt;Then the deployment will look like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;helm repo update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;helm install pmm3 percona/pmm -f 05.pmm3-helm.yaml --namespace secure-namespace&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;PMM 3’s rootless design excels where PMM 2 falters in secure Kubernetes environments. With Pod Security Standards enforced, PMM 3 deployed successfully while PMM 2 failed, even with security contexts applied. This highlights PMM 3’s stronger security posture, crucial for modern, hardened deployments. Using Helm further simplifies secure PMM 3 deployments, ensuring robust database monitoring without compromising security.&lt;/p&gt;
&lt;p&gt;Try out &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/release-notes/3.0.0.html" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management version 3&lt;/a&gt;, a 100% open source database observability solution, and learn more about its enhancements.&lt;/p&gt;
&lt;p&gt;Tell us what you think in &lt;a href="https://forums.percona.com/c/percona-monitoring-and-management-pmm/pmm-3/84/l/new" target="_blank" rel="noopener noreferrer"&gt;our forum&lt;/a&gt; or let us know if you are looking for &lt;a href="https://www.percona.com/about/contact" target="_blank" rel="noopener noreferrer"&gt;commercial support&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Sergey Pronin</author>
      <category>PMM</category>
      <category>HELM</category>
      <category>Docker</category>
      <media:thumbnail url="https://percona.community/blog/2025/02/pmm3-rootless_hu_74de63da6cea5681.jpg"/>
      <media:content url="https://percona.community/blog/2025/02/pmm3-rootless_hu_f897b8c176da0f3f.jpg" medium="image"/>
    </item>
    <item>
      <title>How to Use IAM Roles for Service Accounts (IRSA) with Percona Operator for MongoDB on AWS</title>
      <link>https://percona.community/blog/2025/02/17/how-to-use-iam-roles-for-service-accounts-irsa-with-percona-operator-for-mongodb-on-aws/</link>
      <guid>https://percona.community/blog/2025/02/17/how-to-use-iam-roles-for-service-accounts-irsa-with-percona-operator-for-mongodb-on-aws/</guid>
      <pubDate>Mon, 17 Feb 2025 00:00:00 UTC</pubDate>
      <description>Introduction Percona Operator for MongoDB is an open-source solution designed to streamline and automate database operations within Kubernetes. It allows users to effortlessly deploy and manage highly available, enterprise-grade MongoDB clusters. The operator simplifies both initial deployment and setup, as well as ongoing management tasks like backups, restores, scaling, and upgrades, ensuring seamless database lifecycle management.</description>
      <content:encoded>&lt;h1 id="introduction"&gt;Introduction&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt; is an open-source solution designed to streamline and automate database operations within Kubernetes. It allows users to effortlessly deploy and manage highly available, enterprise-grade MongoDB clusters.  The operator simplifies both initial deployment and setup, as well as ongoing management tasks like backups, restores, scaling, and upgrades, ensuring seamless database lifecycle management.&lt;/p&gt;
&lt;p&gt;When running database workloads on Amazon EKS (Elastic Kubernetes Service), backup and restore processes often require access to AWS services like S3 for storage. A key challenge is ensuring these operations have secure, least-privileged access to AWS resources without relying on static credentials. Properly managing these permissions is crucial to maintaining data integrity, security, and compliance in automated backup and restore workflows.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" target="_blank" rel="noopener noreferrer"&gt;IAM Roles for Service Accounts (IRSA)&lt;/a&gt; is the recommended approach to solve this problem. IRSA allows Kubernetes pods to securely assume IAM roles, eliminating the need for hardcoded credentials, long-lived AWS keys, or excessive permissions. Instead, it leverages OpenID Connect (OIDC) authentication, ensuring that only the right workloads get access to AWS services.&lt;br&gt;
By implementing IRSA, you enhance the security posture of your Kubernetes workloads while simplifying IAM management. In this article, we’ll walk through how IRSA works, why it’s beneficial, and how to configure it properly for the Percona Operator for MongoDB in EKS clusters.&lt;/p&gt;
&lt;h1 id="irsa-installation-and-configuration-for-percona-operator-for-mongodb"&gt;IRSA Installation and Configuration for Percona Operator for MongoDB&lt;/h1&gt;
&lt;ol&gt;
&lt;li&gt;IRSA requires an OpenID Connect (OIDC) provider associated with your EKS cluster.&lt;br&gt;
So, you should &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html#:~:text=To%20create%20a%20provider%2C%20choose,com%20and%20choose%20Add%20provider." target="_blank" rel="noopener noreferrer"&gt;create an OIDC provider for your EKS cluster&lt;/a&gt;. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Creating an OIDC provider for your EKS cluster involves several steps. This setup allows your EKS cluster to use IAM roles for service accounts, which makes it possible to grant fine-grained IAM permissions to pods.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Check if OIDC is already set up:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws eks describe-cluster --name &lt;cluster_name&gt; --query &lt;span class="s2"&gt;"cluster.identity.oidc.issuer"&lt;/span&gt; --output text
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;https://oidc.eks.eu-west-3.amazonaws.com/id/7AA1C67941083331A80382E464EB2F1F
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# If it is not already set up, create an OIDC provider:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;eksctl utils associate-iam-oidc-provider --region &lt;region&gt; --cluster &lt;cluster-name&gt; --approve&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here the OIDC ID is 7AA1C67941083331A80382E464EB2F1F, the last path segment of the issuer URL. We will use it when creating the IAM role.&lt;/p&gt;
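As a quick sanity check, the ID can be peeled off the issuer URL with plain shell parameter expansion (the URL below is the example value from above):

```shell
# Example issuer URL, as printed by `aws eks describe-cluster` above
issuer="https://oidc.eks.eu-west-3.amazonaws.com/id/7AA1C67941083331A80382E464EB2F1F"

# The OIDC ID is everything after the last "/"
oidc_id="${issuer##*/}"
echo "$oidc_id"   # prints 7AA1C67941083331A80382E464EB2F1F
```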
&lt;ol start="2"&gt;
&lt;li&gt;Create an IAM policy granting access to your S3 bucket. Substitute &lt;s3_bucket&gt; with your bucket name:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Define the required permissions in an IAM policy JSON file: &lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat s3-bucket-policy.json
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"s3:*"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;]&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"arn:aws:s3:::&lt;s3_bucket&gt;"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"arn:aws:s3:::&lt;s3_bucket&gt;/*"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Create the IAM policy:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws iam create-policy --policy-name &lt;policy name&gt; --policy-document file://s3-bucket-policy.json&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
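The s3:* wildcard above keeps the example simple, but it grants every S3 action. To stay closer to least privilege, a narrower statement along these lines is usually sufficient for backup tooling; the exact action list is an assumption, so verify it against Percona Backup for MongoDB's requirements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::<s3_bucket>",
        "arn:aws:s3:::<s3_bucket>/*"
      ]
    }
  ]
}
```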
&lt;ol start="3"&gt;
&lt;li&gt;Create an IAM Role and Attach the Policy:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Role example. Replace &lt;account-id&gt; with account id and &lt;oidc-id&gt; with cluster’s OIDC ID&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat role-trust-policy.json
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Principal"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Federated"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&lt;account-id&gt;:oidc-provider/oidc.eks.&lt;region&gt;.amazonaws.com/id/&lt;oidc-id&gt;"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;}&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="s2"&gt;"sts:AssumeRoleWithWebIdentity"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"oidc.eks.&lt;region&gt;.amazonaws.com/id/&lt;oidc-id&gt;:aud"&lt;/span&gt;: &lt;span class="s2"&gt;"sts.amazonaws.com"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Create role:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws iam create-role --role-name &lt;role_name&gt; --assume-role-policy-document file://role-trust-policy.json --description &lt;span class="s2"&gt;"Allow access to s3 bucket"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
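If you prefer not to edit the placeholders by hand, the same trust policy can be generated from shell variables. The account ID, region, and OIDC ID below are hypothetical values; substitute your own:

```shell
# Hypothetical values -- replace with your account ID, region, and OIDC ID
account_id="111111111111"
region="eu-west-3"
oidc_id="7AA1C67941083331A80382E464EB2F1F"

provider="oidc.eks.${region}.amazonaws.com/id/${oidc_id}"

# Render the trust policy with the placeholders filled in
cat > role-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${account_id}:oidc-provider/${provider}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${provider}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

# Exits 0 if the provider string was substituted into the file
grep -q "${provider}" role-trust-policy.json
```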
&lt;ol start="4"&gt;
&lt;li&gt;Attach the policy to the role.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Please update &lt;role-name&gt;, &lt;account-id&gt; and &lt;policy-name&gt; with the corresponding values.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws iam attach-role-policy --role-name &lt;role-name&gt; --policy-arn arn:aws:iam::&lt;account-id&gt;:policy/&lt;policy-name&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="5"&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mongodb/eks.html#install-the-operator-and-deploy-your-mongodb-cluster" target="_blank" rel="noopener noreferrer"&gt;Install the operator and deploy Percona Server for MongoDB&lt;/a&gt; in your EKS cluster (skip this step if you already have the operator and the database cluster installed).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To ensure proper functionality, we need to annotate both the operator service account (default: percona-server-mongodb-operator) and the cluster service account (default: default).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;🔴 Warning: The cluster and operator pods won’t restart automatically; a manual restart is necessary to apply the changes.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get service accounts:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get sa -n &lt;namespace&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME SECRETS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;default &lt;span class="m"&gt;0&lt;/span&gt; 25m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-server-mongodb-operator &lt;span class="m"&gt;0&lt;/span&gt; 25m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get role_arn:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws iam get-role --role-name &lt;role-name&gt; --query &lt;span class="s2"&gt;"Role.Arn"&lt;/span&gt; --output text
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Annotate service account. Please update role_arn with appropriate value.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl annotate serviceaccount default &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; eks.amazonaws.com/role-arn&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;role_arn&gt;"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl annotate serviceaccount percona-server-mongodb-operator &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; eks.amazonaws.com/role-arn&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;role_arn&gt;"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="7"&gt;
&lt;li&gt;To verify that the settings have been applied, inspect the service accounts and the environment variables in both the operator and replica set (RS/Config) pods. The AWS_ROLE_ARN variable should be set to the role’s ARN.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Check annotation in service account&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get sa -n &lt;namespace&gt; percona-server-mongodb-operator -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get sa -n &lt;namespace&gt; default -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Check the variable inside container&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; -ti &lt;percona-server-mongodb-operator-container&gt; -n &lt;operator_namespace&gt; bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bash-5.1$ printenv &lt;span class="p"&gt;|&lt;/span&gt; grep &lt;span class="s1"&gt;'AWS_ROLE_ARN'&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;AWS_ROLE_ARN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arn:aws:iam::1111111111111:role/some-name-psmdb-access-s3-bucket
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; -ti &lt;rs0-0_pod&gt; -n &lt;namespace&gt; bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;mongodb@some-name-rs0-0 db&lt;span class="o"&gt;]&lt;/span&gt;$ printenv &lt;span class="p"&gt;|&lt;/span&gt; grep &lt;span class="s1"&gt;'AWS_ROLE_ARN'&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;AWS_ROLE_ARN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arn:aws:iam::1111111111111:role/some-name-psmdb-access-s3-bucket&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="8"&gt;
&lt;li&gt;Configure the backup/restore settings as usual, but do not provide s3.credentialsSecret for the storage in deploy/cr.yaml. For detailed instructions, please refer to &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/backups-storage.html" target="_blank" rel="noopener noreferrer"&gt;Configure storage for backups&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;shell&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# backup section in cr.yaml example &lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; storages:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; aws-s3:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type: s3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; s3:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; region: &lt;region&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; bucket: &lt;bucket&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h1 id="conclusion"&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;Using IAM Roles for Service Accounts (IRSA) in an Amazon EKS cluster is a best practice when running &lt;a href="https://docs.percona.com/percona-operators/" target="_blank" rel="noopener noreferrer"&gt;database operators&lt;/a&gt; in Kubernetes. By integrating IRSA, database operators—such as the &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB Operator&lt;/a&gt;—can securely access AWS services like S3 for backups without relying on static credentials.&lt;/p&gt;
&lt;p&gt;IRSA enhances security by enforcing the principle of least privilege, ensuring that database operators in EKS have access only to the specific AWS resources they require. This approach reduces the risk of unauthorized access while also improving manageability by eliminating the need to store and rotate AWS credentials within Kubernetes secrets. By adopting IRSA in the &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB Operator&lt;/a&gt;, organizations can create a more secure, scalable, and automated environment for managing MongoDB databases.&lt;/p&gt;</content:encoded>
      <author>Natalia Marukovich</author>
      <category>Kubernetes</category>
      <category>MongoDB</category>
      <media:thumbnail url="https://percona.community/blog/2025/02/mongo-aws-iam_hu_784c39e43abdfe63.jpg"/>
      <media:content url="https://percona.community/blog/2025/02/mongo-aws-iam_hu_2d794beb55bbadb2.jpg" medium="image"/>
    </item>
    <item>
      <title>Join Percona for Google Summer of Code 2025 – Explore, Innovate, and Contribute!</title>
      <link>https://percona.community/blog/2025/02/05/google-summer-of-code-2025/</link>
      <guid>https://percona.community/blog/2025/02/05/google-summer-of-code-2025/</guid>
      <pubDate>Wed, 05 Feb 2025 00:00:00 UTC</pubDate>
      <description>Are you passionate about open-source databases, AI/ML, and security? Do you want to work on real-world projects that impact thousands of developers and enterprises worldwide? Percona is excited to invite students to participate in Google Summer of Code 2025 (GSoC) and help advance our cutting-edge open-source database solutions!</description>
      <content:encoded>&lt;p&gt;Are you passionate about open-source databases, AI/ML, and security? Do you want to work on real-world projects that impact thousands of developers and enterprises worldwide? Percona is excited to invite students to participate in &lt;a href="https://summerofcode.withgoogle.com/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Google Summer of Code 2025 (GSoC)&lt;/strong&gt;&lt;/a&gt; and help advance our cutting-edge open-source database solutions!&lt;/p&gt;
&lt;h1 id="why-contribute-to-percona"&gt;Why Contribute to Percona?&lt;/h1&gt;
&lt;p&gt;At Percona, we believe that an &lt;strong&gt;open world is a better world&lt;/strong&gt;! GSoC is an excellent opportunity to work with seasoned developers, gain hands-on experience, and contribute to powerful database tools used by businesses globally.&lt;/p&gt;
&lt;p&gt;For 2025, we’re especially interested in projects that focus on &lt;strong&gt;AI/ML&lt;/strong&gt; and &lt;strong&gt;security&lt;/strong&gt;—two critical areas shaping the future of databases. Whether you’re passionate about &lt;strong&gt;automating database performance insights&lt;/strong&gt; with AI or &lt;strong&gt;hardening security for mission-critical data&lt;/strong&gt;, we have exciting challenges for you!&lt;/p&gt;
&lt;p&gt;Percona mentors will help you realize your own ideas or one of the ideas below. We invite you to contribute to products and projects such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/percona-server" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/postgres" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/percona-postgresql-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-backup-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Backup for MongoDB&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/pg_tde" target="_blank" rel="noopener noreferrer"&gt;pg_tde: Transparent Database Encryption for PostgreSQL&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As well as CI/CD-related projects with the Percona Build Engineering team.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id="project-ideas-for-gsoc-2025"&gt;Project Ideas for GSoC 2025&lt;/h1&gt;
&lt;p&gt;Below are some suggested project ideas categorized by Percona software:&lt;/p&gt;
&lt;h2 id="percona-distribution-for-postgresql"&gt;Percona Distribution for PostgreSQL&lt;/h2&gt;
&lt;h3 id="snapshot-based-postgresql-backups"&gt;Snapshot-based PostgreSQL backups&lt;/h3&gt;
&lt;p&gt;Database users are often very familiar with their storage provider’s snapshot capabilities. These snapshots are handy and performant, hence their popularity among users. Backups for other databases (e.g., MongoDB) are often configured via this capability, as it provides many performance benefits for large-scale data, especially on cloud deployments. Having this technology supported across the backup solutions for multiple databases makes it possible to leverage it effectively for Percona Everest via the Percona Operators.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.crunchydata.com/blog/postgresql-snapshots-and-backups-with-pgbackrest-in-kubernetes" target="_blank" rel="noopener noreferrer"&gt;Comments from Crunchy&lt;/a&gt; on what needs to be glued together to get snapshots and pgBackRest working together better. Additionally, &lt;a href="https://www.timescale.com/blog/making-postgresql-backups-100x-faster-via-ebs-snapshots-and-pgbackrest" target="_blank" rel="noopener noreferrer"&gt;Timescale on how they use snapshots and pgBackRest together in their hosted managed service&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables&lt;/strong&gt;:
Have an API available to Percona Distribution for PostgreSQL to effectively use storage snapshots to create backups and restore from storage snapshot-based backups. Preferably, have it added to/complementary to the currently recommended solution of pgBackRest.&lt;/p&gt;
&lt;p&gt;Have Percona Operator for PostgreSQL expose the storage snapshot-based backup/restore so that Percona Everest can leverage it.&lt;/p&gt;
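&lt;p&gt;To make the deliverable concrete, the backup sequence usually looks like: tell PostgreSQL a backup is starting, take the provider snapshot, then end backup mode. A minimal sketch with the storage call injected, since it is provider-specific; apart from PostgreSQL’s pg_backup_start()/pg_backup_stop(), all names here are hypothetical:&lt;/p&gt;

```python
# Sketch of a snapshot-consistent backup sequence. pg_backup_start() /
# pg_backup_stop() are the PostgreSQL 15+ backup-control functions; the
# take_snapshot callable stands in for an EBS/Ceph/CSI snapshot call.

def snapshot_backup(run_sql, take_snapshot, label):
    run_sql("SELECT pg_backup_start(%s)", label)   # enter backup mode
    try:
        snap_id = take_snapshot()                  # provider-specific snapshot
    finally:
        backup_info = run_sql("SELECT pg_backup_stop()")  # leave backup mode
    return {"snapshot": snap_id, "backup_stop": backup_info}
```

&lt;p&gt;In older PostgreSQL versions the same functions are named pg_start_backup()/pg_stop_backup().&lt;/p&gt;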
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; C++, PostgreSQL, Kubernetes
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard
&lt;strong&gt;Mentors:&lt;/strong&gt; @Andrew_Pogrebnoi, @Jan_Wieremjewicz
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-postgresql-operator" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/percona-postgresql-operator: Percona Operator for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/postgres: Percona Server for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="pgbackrest-to-barman-close-gap-improvements"&gt;pgBackRest to Barman close gap improvements&lt;/h3&gt;
&lt;p&gt;PostgreSQL has two main backup tools: Barman and pgBackRest. Both are powerful backup and restore tools, each with its own strengths. pgBackRest is generally considered more advanced in terms of parallelism, performance, and flexibility, and is maintained by the community, while Barman is maintained mainly by one company and is less widely used. Barman does, however, offer UX advantages over pgBackRest, especially for non-expert users:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;direct WAL archiving with PostgreSQL’s built-in archive_command,&lt;/li&gt;
&lt;li&gt;simpler backup and recovery process, especially for standby creation,&lt;/li&gt;
&lt;li&gt;clearer logging and monitoring for backup integrity,&lt;/li&gt;
&lt;li&gt;simpler configuration in small to medium deployments,&lt;/li&gt;
&lt;li&gt;better native tools for cloud backups&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It would be beneficial for Percona, which uses pgBackRest in the Percona Distribution for PostgreSQL, to close pgBackRest’s remaining functionality gaps relative to Barman. Percona customers sometimes use Barman and expect Percona to support it. Having a way to migrate off Barman to pgBackRest, reducing any potential friction for users, would be beneficial.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
Implement a set of improvements that closes the gaps listed in the description above&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; C++, PostgreSQL
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard
&lt;strong&gt;Mentors:&lt;/strong&gt; @Andrew_Pogrebnoi, @Jan_Wieremjewicz
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/postgres: Percona Server for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="tool-to-investigate-postgresql-locks-for-dummies"&gt;Tool to investigate PostgreSQL locks for dummies&lt;/h3&gt;
&lt;p&gt;Currently, there is no tool that allows less experienced users to detect and understand all types of locks on their PostgreSQL database, which may lead to many issues in deployments not managed by expert PostgreSQL users. As described in the blog posts below, understanding how locks work is difficult:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://xata.io/blog/anatomy-of-locks" target="_blank" rel="noopener noreferrer"&gt;Anatomy of Table-Level Locks in PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://xata.io/blog/anatomy-of-locks-reduce" target="_blank" rel="noopener noreferrer"&gt;Anatomy of table-level locks: Reducing locking impact&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Detect DDL that mixes strong locks with others, i.e., allow reviewing the blocked PIDs in cases where pg_locks alone will not work&lt;/li&gt;
&lt;li&gt;Present all locks in a GUI&lt;/li&gt;
&lt;li&gt;(Stretch) have the GUI integrated in PMM&lt;/li&gt;
&lt;/ol&gt;
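&lt;p&gt;As a starting point for the first deliverable, the analysis can be reduced to following blocker chains back to their roots. A minimal sketch assuming input shaped like the output of PostgreSQL’s pg_blocking_pids() collected per backend; the dict shape is illustrative:&lt;/p&gt;

```python
# Hypothetical sketch: given {pid: [pids blocking it]}, report each root
# blocker together with the backends it (transitively) blocks.

def blocking_chains(blocked_by):
    """Return {root_blocker_pid: [blocked pids]}."""
    def root_of(pid, seen):
        blockers = blocked_by.get(pid, [])
        if not blockers or pid in seen:   # no blocker, or cycle guard
            return pid
        seen.add(pid)
        # Follow the first blocker; a real tool would report every path.
        return root_of(blockers[0], seen)

    chains = {}
    for pid in blocked_by:
        if blocked_by.get(pid):
            root = root_of(pid, set())
            chains.setdefault(root, []).append(pid)
    return chains
```

&lt;p&gt;A GUI would then only need to render these chains, rather than raw pg_locks rows.&lt;/p&gt;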
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; C++, PostgreSQL
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium
&lt;strong&gt;Mentors:&lt;/strong&gt; Kai Wagner, @Jan_Wieremjewicz
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/postgres: Percona Server for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="session-continuity-for-pgbouncer-for-the-zero-downtime-upgrades"&gt;Session continuity for PgBouncer for the zero downtime upgrades&lt;/h3&gt;
&lt;p&gt;Percona is looking to introduce a zero-downtime upgrade capability to the Percona Operator and, later on, to Percona Everest. The plan builds on pgBouncer and our HA solution: a replica runs the new database version, and traffic is switched from the previous version to the new one.&lt;/p&gt;
&lt;p&gt;Such a solution provides both zero-downtime upgrades and rollback. To deliver truly zero-downtime major upgrades for the current Percona Distribution for PostgreSQL, pgBouncer needs an improvement that handles switching sessions between the two databases: the previous version and the new version.&lt;/p&gt;
&lt;p&gt;In the future, this tool should also make zero-downtime migrations to Everest possible.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
Extend pgBouncer so that sessions can be switched between the two databases without downtime for users, with at most a temporary performance drop.&lt;/p&gt;
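&lt;p&gt;The core of the problem is choosing a safe point to repoint sessions. One common approach, sketched here in pure Python (this is not pgBouncer code, and all names are hypothetical), is to defer the switch until no client session is mid-transaction:&lt;/p&gt;

```python
# Illustrative state machine: a pool accepts a pending backend switch and
# applies it only at a transaction boundary, so no in-flight transaction
# is cut when moving from the old PostgreSQL version to the new one.

class PoolSwitcher:
    def __init__(self, backend):
        self.backend = backend
        self.in_txn = set()    # sessions currently inside a transaction
        self.pending = None    # backend we want to move to, if any

    def txn_begin(self, session):
        self.in_txn.add(session)

    def txn_end(self, session):
        self.in_txn.discard(session)
        self._maybe_switch()

    def request_switch(self, new_backend):
        self.pending = new_backend
        self._maybe_switch()

    def _maybe_switch(self):
        # Safe point: no session is mid-transaction.
        if self.pending is not None and not self.in_txn:
            self.backend = self.pending
            self.pending = None
```

&lt;p&gt;A real implementation would also handle session-level state (prepared statements, GUCs), which is the hard part of this project.&lt;/p&gt;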
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; C++, Kubernetes
&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard
&lt;strong&gt;Mentors:&lt;/strong&gt; Kai Wagner, @Jan_Wieremjewicz
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/postgres: Percona Server for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="percona-software-for-mongodb"&gt;Percona Software for MongoDB&lt;/h2&gt;
&lt;h3 id="interactive-shell-installer-for-percona-software-for-mongodb"&gt;Interactive Shell Installer for Percona Software for MongoDB&lt;/h3&gt;
&lt;p&gt;This project aims to develop an interactive shell-based installer for &lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt; and &lt;a href="https://www.percona.com/mongodb/software/percona-backup-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Backup for MongoDB&lt;/a&gt;. The installer will simplify the installation, configuration, and initial setup process, making it easy for users to deploy these open-source enterprise solutions efficiently. The primary goal is to enhance the user experience by reducing manual setup steps and ensuring proper configuration through guided prompts and automation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A command-line-based interactive installer script.&lt;/li&gt;
&lt;li&gt;Automated dependency checks and installation.&lt;/li&gt;
&lt;li&gt;Interactive prompts for configuration choices (e.g., authentication, replication, sharding).&lt;/li&gt;
&lt;li&gt;Seamless installation of both Percona Server for MongoDB and Percona Backup for MongoDB.&lt;/li&gt;
&lt;li&gt;Integration with package managers for major Linux distributions (Debian, Ubuntu, RHEL, CentOS).&lt;/li&gt;
&lt;li&gt;Logging and validation mechanisms to ensure correct setup.&lt;/li&gt;
&lt;li&gt;Documentation and user guide for the installer.&lt;/li&gt;
&lt;/ul&gt;
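&lt;p&gt;A tiny sketch of the guided-prompt idea from the deliverables: each step validates the answer before moving on. The reader function is injected so the flow is testable; prompts and config keys are hypothetical:&lt;/p&gt;

```python
# Hypothetical installer prompt loop: re-ask until the answer validates.

def ask(prompt, validate, read=input):
    while True:
        answer = read(prompt).strip()
        ok, err = validate(answer)
        if ok:
            return answer
        print(err)

def valid_port(s):
    if s.isdigit() and int(s) in range(1, 65536):
        return True, ""
    return False, "port must be between 1 and 65535"

def collect_config(read=input):
    return {
        "port": ask("mongod port [27017]: ", valid_port, read),
        "replset": ask("replica set name: ",
                       lambda s: (bool(s), "name must not be empty"), read),
    }
```

&lt;p&gt;The same pattern extends to authentication, replication, and sharding choices, with the collected config driving the package-manager steps.&lt;/p&gt;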
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; C++ or Go&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt; &lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="percona-backup-for-mongodb-backup-speed-throttling"&gt;Percona Backup for MongoDB backup speed throttling&lt;/h3&gt;
&lt;p&gt;On large-scale deployments, backups may significantly impact network performance: if the backup storage is fast, network bandwidth may be heavily utilized, degrading the performance of the database itself. Database reliability engineers would like to reduce the network load by slowing down physical backups through Percona Backup for MongoDB (PBM) configuration. The scope of the project is to implement a network bandwidth rate limiter and perform load testing showing the impact of rate limiting on backup time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The expected outcome of this project is assurance that a backup will not saturate network bandwidth and degrade the database. As a result, the participant needs to provide the proposed code changes in the form of a fork of PBM and create a report with the load test results.&lt;/p&gt;
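&lt;p&gt;One common way to implement such a limiter is a token bucket wrapped around the backup stream. A sketch of the idea (not PBM code, which is written in Go), with the clock and sleep functions injected so the logic is testable without real time:&lt;/p&gt;

```python
# Token-bucket throttled reader: at most `rate` bytes per second pass
# through; when the bucket is empty, the reader sleeps until it refills.

class ThrottledReader:
    def __init__(self, src, rate, clock, sleep):
        self.src, self.rate = src, rate
        self.clock, self.sleep = clock, sleep
        self.allowance = float(rate)   # bucket starts full, capped at `rate`
        self.last = clock()

    def read(self, n):
        now = self.clock()
        # Refill tokens for the time elapsed since the last read.
        self.allowance = min(float(self.rate),
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        deficit = max(0.0, n - self.allowance)
        if deficit:
            self.sleep(deficit / self.rate)   # wait for enough tokens
            self.last = self.clock()
            self.allowance = 0.0
        else:
            self.allowance = self.allowance - n
        return self.src.read(n)
```

&lt;p&gt;In PBM the equivalent wrapper would sit between the data files and the storage writer, with the rate exposed as a configuration option.&lt;/p&gt;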
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Go&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Easy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-server-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="percona-backup-for-mongodb-golang-sdk"&gt;Percona Backup for MongoDB Golang SDK&lt;/h3&gt;
&lt;p&gt;The project’s purpose is to extend capabilities and reduce maintenance in monitoring, managing, and automating backups and restores of MongoDB from the &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt; tool. The scope of the project covers migrating PMM from the &lt;a href="https://www.percona.com/mongodb/software/percona-backup-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Backup for MongoDB&lt;/a&gt; CLI to a dedicated PBM Golang client library. The client library (aka SDK) has to be implemented and must map all current CLI operations to Go API functions. As a stretch goal, the project can be extended to implement backup progress reporting using the created SDK.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The expected outcome of the project is reduced maintenance of the backup integration in the PMM project and improved extensibility of backup management. As a result, a new open-source SDK should be created.&lt;/p&gt;
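&lt;p&gt;A hedged sketch of one possible first step for the SDK design: a thin layer that builds the invocations the pbm CLI accepts today, so every future Go API function has a known CLI equivalent to map from. The flag names below are illustrative only and not verified against current PBM:&lt;/p&gt;

```python
# Illustrative CLI-to-API mapping layer. Each method returns the argv of a
# hypothetical pbm invocation instead of executing it, so the mapping from
# SDK function to CLI operation stays explicit and testable.

class PBMCommands:
    def backup(self, backup_type="logical"):
        return ["pbm", "backup", "--type", backup_type]

    def restore(self, backup_name):
        return ["pbm", "restore", backup_name]

    def status(self):
        return ["pbm", "status"]
```

&lt;p&gt;The actual SDK would of course call PBM’s internal Go packages directly rather than shelling out; the table of operations is what carries over.&lt;/p&gt;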
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Go&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Easy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/pmm" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="ceph-storage-support-in-percona-backup-for-mongodb"&gt;CEPH Storage support in Percona Backup for MongoDB&lt;/h3&gt;
&lt;p&gt;Ceph is open-source Software-Defined Storage (SDS) that is massively scalable and reliable. It’s one of the most popular storage technologies in Kubernetes and OpenShift. The project aims to enable users to store their Percona Backup for MongoDB data on Ceph storage, which would be very convenient as they wouldn’t need to manage additional storage systems. The scope of the project includes building a workspace setup on Kubernetes and the &lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;, researching the current challenges of using Ceph storage, and implementing the necessary changes to make it work in a performant way. At the end, document the solution.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The project’s deliverables are technical documentation, Percona Operator for MongoDB changes, and instructions on setting up an environment with Ceph storage.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Go, Kubernetes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Easy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-server-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="boostfs-storage-support-in-percona-backup-for-mongodb"&gt;BoostFS storage support in Percona Backup for MongoDB&lt;/h3&gt;
&lt;p&gt;Dell Data Domain Boost File System (BoostFS) provides a general file system interface to the DD Boost library, allowing standard backup applications to take advantage of DD Boost features. In this project, we’d like to extend our open-source Percona Backup for MongoDB to leverage that storage technology to reduce backup and restore time, and at the same time help users reduce their Recovery Time Objective (RTO) and Recovery Point Objective (RPO). The scope of the project includes preparing a workspace setup with Percona Server for MongoDB, Percona Backup for MongoDB, and mounted BoostFS disk volumes on Google Cloud Platform, and documenting the architecture of the environment. Additionally, the project includes researching how PBM works with that storage and implementing the necessary changes to PBM to make it work. Finally, a simple benchmark should be performed that demonstrates the performance gain.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The project is expected to deliver an architecture diagram of the testing environment in Google Cloud Platform, the implementation of the changes required to support BoostFS in PBM, and a report including performance benchmark results and a comparison with other storage systems.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Go, GCP&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://infohub.delltechnologies.com/en-us/l/dell-apex-block-storage-for-aws-backup-and-recovery-using-ddve-and-dd-boost-oracle-rman-agent/backup-procedure/" target="_blank" rel="noopener noreferrer"&gt;https://infohub.delltechnologies.com/en-us/l/dell-apex-block-storage-for-aws-backup-and-recovery-using-ddve-and-dd-boost-oracle-rman-agent/backup-procedure/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dell.com/support/manuals/pl-pl/dd-virtual-edition/dd_p_ddve-gcp_ig/purpose-of-this-guide?guid=guid-015a004c-0518-4a23-a043-39c97ed165f0&amp;lang=en-us" target="_blank" rel="noopener noreferrer"&gt;https://www.dell.com/support/manuals/pl-pl/dd-virtual-edition/dd_p_ddve-gcp_ig/purpose-of-this-guide?guid=guid-015a004c-0518-4a23-a043-39c97ed165f0&amp;lang=en-us&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="openstack-swift-storage-support-in-percona-backup-for-mongodb"&gt;OpenStack Swift storage support in Percona Backup for MongoDB&lt;/h3&gt;
&lt;p&gt;The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data with a simple API. It’s built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound, and it is very convenient to use as backup storage for MongoDB workloads running on the OpenStack platform. The scope of the project includes building a workspace environment on Google Cloud Platform with OpenStack clusters running Percona Server for MongoDB, implementing the required changes in Percona Backup for MongoDB to support Swift storage, and finally performing benchmark tests and a comparison with GCP-native storage.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The project is expected to deliver an architecture diagram of the testing environment in Google Cloud Platform, the implementation of the changes required to support OpenStack Swift in PBM, and a report including performance benchmark results and a comparison with other storage systems.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Go, GCP, OpenStack&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repositories and resources:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openstack/swift" target="_blank" rel="noopener noreferrer"&gt;https://github.com/openstack/swift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ncw/swift" target="_blank" rel="noopener noreferrer"&gt;https://github.com/ncw/swift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/installing/openstack" target="_blank" rel="noopener noreferrer"&gt;https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/installing/openstack&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="percona-server-for-mysql-and-percona-xtradb-cluster"&gt;Percona Server for MySQL and Percona XtraDB Cluster&lt;/h2&gt;
&lt;h3 id="automating-code-merges-with-ai"&gt;Automating Code Merges with AI&lt;/h3&gt;
&lt;p&gt;The regular, manual process of merging from Oracle’s GitHub repository is time-consuming, complex, and prone to errors, particularly due to merge conflicts. Careful attention is required to avoid introducing regressions into Percona’s open-source products. While this project is specific to Percona’s needs, it addresses a common challenge in open-source software development, as many projects rely on upstream repositories for their code. Therefore, the solution can be generalized and could benefit other open-source projects with similar code integration needs.&lt;/p&gt;
&lt;p&gt;This GSoC project aims to develop an intelligent system using Artificial Intelligence to automate the MySQL fork merge process. Percona has been performing these merges for 18 years, accumulating a wealth of historical data (code changes, merge resolutions, conflict histories, test results) that can be leveraged to train an AI model.&lt;/p&gt;
&lt;p&gt;The core objective is to create a tool that can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Analyze upstream changes: Process and understand the changes introduced by Oracle in their MySQL repository.&lt;/li&gt;
&lt;li&gt;Identify merge conflicts: Identify conflicts between upstream changes and Percona’s modifications.&lt;/li&gt;
&lt;li&gt;Suggest merge resolutions: Propose solutions for resolving identified conflicts, drawing on patterns from historical merge data.&lt;/li&gt;
&lt;li&gt;Automate merges: Automatically apply upstream changes with the suggested merge resolutions.&lt;/li&gt;
&lt;li&gt;Learn and adapt: Continuously improve its performance and accuracy by learning from new merge data and feedback.&lt;/li&gt;
&lt;/ul&gt;
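&lt;p&gt;The conflict-identification step above can be prototyped with plain range arithmetic before any machine learning is involved: treat each side’s edits as half-open line ranges over the common ancestor and flag overlaps as candidate conflicts. A minimal sketch:&lt;/p&gt;

```python
# Flag candidate merge conflicts: pairs of edits (one from upstream, one
# from the fork) that touch the same lines of the common ancestor. Real
# merge tooling also needs content analysis; this only finds candidates.

def overlapping(upstream, fork):
    """upstream/fork: lists of (start, end) half-open ancestor line ranges."""
    conflicts = []
    for a in upstream:
        for b in fork:
            # Ranges overlap when the width of their intersection is positive.
            if max(0, min(a[1], b[1]) - max(a[0], b[0])):
                conflicts.append((a, b))
    return conflicts
```

&lt;p&gt;An AI model would then rank and resolve these candidates using the historical merge data, rather than re-discovering where conflicts live.&lt;/p&gt;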
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduced merge time and effort: Automating the merge process will free up developer time for other critical tasks.&lt;/li&gt;
&lt;li&gt;Improved merge accuracy: AI can potentially identify subtle conflicts that might be missed by manual review.&lt;/li&gt;
&lt;li&gt;Faster release cycles: Streamlining the merge process will enable quicker releases of updated Percona products.&lt;/li&gt;
&lt;li&gt;Open-source contribution: The resulting tool will be open-sourced, benefiting other projects that maintain forks of MySQL or similar databases. This problem is not unique to Percona; other open-source projects facing similar merging challenges can utilize this solution.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a result of this project, you’re expected to deliver:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A working prototype of the AI-powered merge tool.&lt;/li&gt;
&lt;li&gt;Well-documented code and training data.&lt;/li&gt;
&lt;li&gt;Comprehensive test suite and evaluation results.&lt;/li&gt;
&lt;li&gt;A report detailing the project’s methodology, findings, and future directions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Python, Machine Learning libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn), C++, Git, database systems
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard
&lt;strong&gt;Mentors:&lt;/strong&gt; Julia Vural, Oleksiy Lukin
&lt;strong&gt;Relevant repositories and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/percona-server: Percona Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/percona-xtradb-cluster: A High Scalability Solution for MySQL Clustering and High Availability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mysql/mysql-servermetal/docs/installing/openstack" target="_blank" rel="noopener noreferrer"&gt;https://github.com/mysql/mysql-servermetal/docs/installing/openstack&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="percona-everest"&gt;Percona Everest&lt;/h2&gt;
&lt;h3 id="easier-troubleshooting-on-database-clusters-in-percona-everest"&gt;Easier Troubleshooting on database clusters in Percona Everest&lt;/h3&gt;
&lt;p&gt;The main goal is to provide tools to Percona Everest users to troubleshoot database clusters. This project will require the implementation of log collection, rotation, UI, and possibly an AI helper to analyze those logs. If users have a centralized log collection implemented, this tool needs to be able to integrate with it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Full user flow to support database cluster troubleshooting process (UI, backend, API, integrations).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Log Collection &amp; Rotation System:
&lt;ul&gt;
&lt;li&gt;Implement a mechanism to collect logs from Percona Everest-managed database clusters.&lt;/li&gt;
&lt;li&gt;Ensure efficient log rotation to manage storage and performance impact.&lt;/li&gt;
&lt;li&gt;Enable compatibility with external log aggregation tools (e.g., Elasticsearch, Grafana Loki, or OpenTelemetry)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;User Interface for Log Access:
&lt;ul&gt;
&lt;li&gt;Develop a UI within Percona Everest to allow users to view and analyze logs.&lt;/li&gt;
&lt;li&gt;Include search, filtering, and visualization options for better troubleshooting.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AI-Powered Log Analysis (Stretched scope)
&lt;ul&gt;
&lt;li&gt;Explore AI-driven log analysis to provide users with insights, anomaly detection, and recommendations.&lt;/li&gt;
&lt;li&gt;Implement basic AI-assisted troubleshooting if feasible within the project timeline.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Documentation &amp; Testing:
&lt;ul&gt;
&lt;li&gt;Deliver user and developer documentation covering installation, usage, and troubleshooting.&lt;/li&gt;
&lt;li&gt;Include test cases and automation scripts to ensure system reliability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
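&lt;p&gt;The rotation part of the first deliverable is mostly bookkeeping. A sketch of a shift-and-drop rotation policy, using a dict in place of a directory so the logic is testable; the naming scheme (base, base.1, base.2, …) is the conventional one but still a design choice:&lt;/p&gt;

```python
# Rotate logs: shift base.1 .. base.(keep-1) up by one (overwriting the
# oldest copy), move the active file to base.1, and start a fresh one.

def rotate(files, base, keep):
    """`files` maps file name to content, standing in for a directory."""
    for i in reversed(range(1, keep)):
        old = "%s.%d" % (base, i)
        if old in files:
            files["%s.%d" % (base, i + 1)] = files.pop(old)
    if base in files:
        files[base + ".1"] = files.pop(base)
    files[base] = ""   # fresh active log
```

&lt;p&gt;A production version would add size/time triggers, compression, and hand-off to external aggregators such as Loki or Elasticsearch, as listed above.&lt;/p&gt;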
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Kubernetes, Go, CI/CD
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium
&lt;strong&gt;Mentors:&lt;/strong&gt; @Diogo_Recharte, @Mayank_Shah
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt; &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/everest&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="percona-everest-rbac-policies-management-ui"&gt;Percona Everest RBAC policies management UI&lt;/h3&gt;
&lt;p&gt;Create a user interface for creating and managing role-based access control (RBAC) policies.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Role-Based Access Control (RBAC) UI:
&lt;ul&gt;
&lt;li&gt;Develop a user-friendly interface in Percona Everest to create, update, and manage RBAC policies.&lt;/li&gt;
&lt;li&gt;Implement role assignment and permission configuration for database clusters.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Documentation &amp; Testing:
&lt;ul&gt;
&lt;li&gt;Deliver comprehensive user and developer documentation.&lt;/li&gt;
&lt;li&gt;Include test cases and automation scripts to ensure reliability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
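&lt;p&gt;For orientation, the policies such a UI would edit can be modeled as (role, resource, action) triples with wildcards, roughly in the spirit of the Casbin-style policies Everest uses; the exact policy syntax is defined by Everest, not by this sketch:&lt;/p&gt;

```python
# Minimal policy check over (role, resource, action) triples. "*" matches
# anything. `roles` is the set of roles assigned to the requesting user.

def allowed(policies, roles, resource, action):
    def match(pattern, value):
        return pattern == "*" or pattern == value
    return any(
        role in roles and match(res, resource) and match(act, action)
        for role, res, act in policies
    )
```

&lt;p&gt;A policy-management UI is then an editor over the `policies` table plus role assignment, with validation before saving.&lt;/p&gt;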
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Front-end, CI/CD tools
&lt;strong&gt;Duration:&lt;/strong&gt; 90 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium
&lt;strong&gt;Mentors:&lt;/strong&gt; @Diogo_Recharte, Peter Szczepaniak
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt; &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/everest&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="context-sensitive-help"&gt;Context sensitive help&lt;/h3&gt;
&lt;p&gt;The Percona Everest documentation contains valuable information, hints, and tips, but we lack a way to present relevant information to our users. This project aims to work with the UX and Docs teams to solve this problem.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Implement a mechanism to display relevant documentation, hints, and tips based on the user’s current action or screen within Percona Everest.&lt;/li&gt;
&lt;li&gt;Ensure seamless integration with the existing UI for a non-intrusive experience.&lt;/li&gt;
&lt;li&gt;Enable contextual tooltips, pop-ups, or side panels that present relevant documentation without requiring users to leave the interface.&lt;/li&gt;
&lt;li&gt;Support links to full documentation pages when needed.&lt;/li&gt;
&lt;li&gt;Optionally, explore AI-driven suggestions based on user behavior and past queries.&lt;/li&gt;
&lt;li&gt;Allow users to control the level of help they receive (e.g., enable/disable tips, adjust verbosity).&lt;/li&gt;
&lt;li&gt;Provide user and developer documentation on how the system works and how to extend it.&lt;/li&gt;
&lt;li&gt;Ensure thorough testing to validate the accuracy and relevance of displayed help content.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Front-end, CI/CD tools
&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium
&lt;strong&gt;Mentors:&lt;/strong&gt; @Diogo_Recharte, Peter Szczepaniak
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt; &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/everest&lt;/a&gt;&lt;/p&gt;
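&lt;p&gt;One possible shape for such a mechanism is a registry that maps UI screens to documentation entries, with a per-user verbosity setting. A minimal sketch in Python; all screen identifiers, URLs, and tips below are hypothetical, not part of Everest:&lt;/p&gt;

```python
# Hypothetical sketch: map Everest UI screens to relevant documentation
# entries, honoring a per-user verbosity setting.
from dataclasses import dataclass

@dataclass
class HelpEntry:
    title: str
    doc_url: str
    tip: str

# Screen identifiers, URLs, and tips below are illustrative only.
HELP_REGISTRY = {
    "cluster-create": HelpEntry(
        title="Creating a database cluster",
        doc_url="https://docs.percona.com/everest/",
        tip="Pick a storage class before sizing volumes.",
    ),
    "backup-schedule": HelpEntry(
        title="Scheduling backups",
        doc_url="https://docs.percona.com/everest/",
        tip="Cron expressions are evaluated in UTC.",
    ),
}

def contextual_help(screen_id: str, verbosity: str = "full"):
    """Return help for the current screen, or None if disabled/unknown."""
    entry = HELP_REGISTRY.get(screen_id)
    if entry is None or verbosity == "off":
        return None
    if verbosity == "links-only":
        return {"title": entry.title, "doc_url": entry.doc_url}
    return {"title": entry.title, "doc_url": entry.doc_url, "tip": entry.tip}
```

A tooltip or side panel would then render whatever this lookup returns for the active screen.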
&lt;h3 id="backups-and-restore-timeline-visualization"&gt;Backups and restore timeline visualization&lt;/h3&gt;
&lt;p&gt;Databases are usually long-lived services, and investigating issues with them is easier when you can see events like backups and restores of the service on a timeline.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Develop a visual timeline within Percona Everest to display backup and restore events for database clusters.&lt;/li&gt;
&lt;li&gt;Ensure the timeline is intuitive, zoomable, and supports different time ranges (e.g., last 24 hours, 7 days, custom range).&lt;/li&gt;
&lt;li&gt;Retrieve and display backup and restore events from Percona Everest’s database and logs.&lt;/li&gt;
&lt;li&gt;Include metadata such as timestamps, duration, status (success, failure), and associated users or processes.&lt;/li&gt;
&lt;li&gt;Allow users to filter events by type (full backup, incremental backup, restore, etc.).&lt;/li&gt;
&lt;li&gt;Enable color-coding or icons to differentiate event types at a glance.&lt;/li&gt;
&lt;li&gt;Deliver comprehensive user and developer documentation.&lt;/li&gt;
&lt;li&gt;Ensure automated tests for data accuracy, UI performance, and usability.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Front-end, CI/CD tools
&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium
&lt;strong&gt;Mentors:&lt;/strong&gt; @Diogo_Recharte, Peter Szczepaniak
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt; &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/everest&lt;/a&gt;&lt;/p&gt;
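&lt;p&gt;The time-range and type filtering described above can be sketched as follows; the event shape and field names are illustrative assumptions, not Everest’s actual schema:&lt;/p&gt;

```python
# Hypothetical sketch: filter backup/restore events for a timeline view.
# The event dictionaries and field names are illustrative only.
from datetime import datetime, timedelta

def timeline_events(events, start, end, types=None):
    """Return events within [start, end], optionally filtered by type,
    sorted chronologically for rendering on a timeline."""
    selected = [
        e for e in events
        if start <= e["timestamp"] <= end and (types is None or e["type"] in types)
    ]
    return sorted(selected, key=lambda e: e["timestamp"])

now = datetime(2025, 3, 1, 12, 0)
events = [
    {"type": "full-backup", "status": "success", "timestamp": now - timedelta(days=2)},
    {"type": "restore", "status": "failure", "timestamp": now - timedelta(hours=3)},
    {"type": "incremental-backup", "status": "success", "timestamp": now - timedelta(hours=1)},
]

last_day = timeline_events(events, now - timedelta(days=1), now)
restores = timeline_events(events, now - timedelta(days=7), now, types={"restore"})
```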
&lt;h3 id="refactor-test-automation-using-page-object-model"&gt;Refactor test automation using page object model&lt;/h3&gt;
&lt;p&gt;Our project currently has a functional end-to-end (E2E) UI test suite that ensures the stability and correctness of our application. However, the test suite does not follow the Page Object Model (POM) design pattern, making it harder to maintain, scale, and debug.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Restructure existing test automation to follow the Page Object Model (POM) design pattern.&lt;/li&gt;
&lt;li&gt;Ensure better separation of test logic and UI elements for improved maintainability.&lt;/li&gt;
&lt;li&gt;Implement modular and reusable page object classes for different UI components and workflows.&lt;/li&gt;
&lt;li&gt;Standardize naming conventions and best practices for test scripts.&lt;/li&gt;
&lt;li&gt;Improve error handling and logging to make test failures easier to diagnose.&lt;/li&gt;
&lt;li&gt;Ensure the refactored test suite runs efficiently in CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;Validate test performance improvements and maintain test coverage.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Playwright, TypeScript, Kubernetes
&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium
&lt;strong&gt;Mentors:&lt;/strong&gt; @Diogo_Recharte, Tomislav_Plavcic, Edith Puclla
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt; &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/everest&lt;/a&gt;&lt;/p&gt;
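&lt;p&gt;The Page Object Model itself is language-agnostic: selectors and workflows live in page classes, and tests interact only with those classes. A minimal sketch in Python with a stubbed driver (the real suite uses Playwright and TypeScript; all selectors and page names here are hypothetical):&lt;/p&gt;

```python
# Minimal Page Object Model sketch with a stubbed driver.
# Selectors and page names are hypothetical illustrations.
class FakeDriver:
    """Stand-in for a browser driver; records interactions for the demo."""
    def __init__(self):
        self.actions = []
    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))
    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    """Page object: encapsulates selectors and workflows for one screen,
    so test scripts never touch raw selectors directly."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).log_in("admin", "secret")
```

When a selector changes, only the page object is updated, not every test that logs in.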
&lt;h2 id="percona-monitoring-and-management-pmm"&gt;Percona Monitoring and Management (PMM)&lt;/h2&gt;
&lt;h3 id="queryable-backup-and-restore-of-percona-server-for-mongodb"&gt;Queryable backup and restore of Percona Server for MongoDB&lt;/h3&gt;
&lt;p&gt;The project aims to equip MongoDB database administrators with a &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt; extension that lets them query data directly from a backup. It addresses a real pain point: inspecting a single document in a collection buried in a backup several terabytes in size. The time spent downloading the snapshot, decompressing it, getting it running in a local MongoDB node, and finally running the query would be significant. On top of that, there are obvious nontrivial costs, both monetary and operational, associated with having to quickly spin up new environments. The scope of the project includes a backend Go application server that runs an on-demand ephemeral MongoDB instance, loads data from the backup, and enables the user to run a database query from a UI.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The student is expected to deliver source code changes to PMM, in the form of a PMM fork, that extend PMM with queryable MongoDB backup functionality. Specifically, this includes a solution design, the implementation, unit and/or integration tests based on a Docker environment, and documentation.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Go, MongoDB&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repository:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/pmm" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
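&lt;p&gt;The backend flow described above can be sketched as a small orchestration function: start an ephemeral instance, restore the backup into it, run the query, and always tear the instance down. All helper names below are hypothetical; a real implementation would drive mongod and Percona Backup for MongoDB:&lt;/p&gt;

```python
# Hypothetical orchestration sketch for querying a backup on demand.
# The runtime interface is an illustrative assumption, not PMM's API.
def query_backup(backup_id, query, runtime):
    instance = runtime.start_ephemeral_instance()
    try:
        runtime.restore(instance, backup_id)
        return runtime.run_query(instance, query)
    finally:
        runtime.stop(instance)  # always clean up the temporary instance

class FakeRuntime:
    """Stand-in runtime so the flow can be demonstrated without MongoDB."""
    def __init__(self):
        self.stopped = False
    def start_ephemeral_instance(self):
        return {"data": {}}
    def restore(self, instance, backup_id):
        instance["data"] = {"orders": [{"_id": 1, "total": 42}]}
    def run_query(self, instance, query):
        collection, predicate = query
        return [doc for doc in instance["data"].get(collection, []) if predicate(doc)]
    def stop(self, instance):
        self.stopped = True

rt = FakeRuntime()
docs = query_backup("backup-2025-02-01", ("orders", lambda d: d["_id"] == 1), rt)
```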
&lt;h3 id="design-engineering-for-pmm"&gt;Design Engineering for PMM&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt; is a long-standing open-source software, but its age also comes with some UX and UI debt. Facing new goals to help innovate PMM, the team is excited to look forward to starting “renovating the house” and swapping the GUI with a more modern one built in-house. We are looking for experts in design engineering to help with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Making/refining/cataloging UI components that we will need for QA and production;&lt;/li&gt;
&lt;li&gt;Creating functional prototypes ad hoc from written ideas or designs;&lt;/li&gt;
&lt;li&gt;Converting old PMM pages like for like into new PMM pages (with the new UI).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Contributing ideas to help make the code library easier to contribute to;&lt;/li&gt;
&lt;li&gt;Contributing at least one new component to the code library;&lt;/li&gt;
&lt;li&gt;Creating at least one code prototype for one of the team’s ongoing ideas;&lt;/li&gt;
&lt;li&gt;Converting at least one existing PMM functionality page UI into the new UI.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; CI/CD, Git, Storybook, React, MUI, Figma&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mentor&lt;/strong&gt;: @pedro.fernandes&lt;/p&gt;
&lt;h3 id="pmm-ui-for-postgresql-backups---create-restore-check-monitor"&gt;PMM UI for PostgreSQL backups - create, restore, check, monitor&lt;/h3&gt;
&lt;p&gt;Backup management without a UI is not an easy task for users. A tool of choice for backup and restore management could provide a unification layer over multiple backup/restore tools, as well as a very important capability that is often overlooked: backup monitoring.&lt;/p&gt;
&lt;p&gt;As it turns out, many DBAs worry about the state of their backups and make it a daily routine to check on the backups they have configured. As environments scale, this task becomes increasingly tiresome. Verifying that the backups they create are actually usable is yet another routine task, requiring either extra automation scripting to run a restore or manual work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Extend PMM UI to add existing backups so that they can be monitored&lt;/li&gt;
&lt;li&gt;Extend PMM UI to create backups&lt;/li&gt;
&lt;li&gt;Create a tool to automate the backup testing (check whether the backups created are usable)&lt;/li&gt;
&lt;li&gt;Extend PMM to monitor and alert on backup irregularities&lt;/li&gt;
&lt;li&gt;Integrate the backup management with external schedulers like cron.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; C++, Go, PostgreSQL
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard
&lt;strong&gt;Mentors:&lt;/strong&gt; Kai Wagner, @Jan_Wieremjewicz
&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/postgres: Percona Server for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/pmm" target="_blank" rel="noopener noreferrer"&gt;GitHub - percona/pmm: Percona Monitoring and Management: an open source database monitoring, observability and management tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
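&lt;p&gt;The monitoring and alerting deliverable could start from a simple freshness check over known backups. A minimal sketch in Python, with an illustrative data model that is not PMM’s actual schema:&lt;/p&gt;

```python
# Hypothetical sketch: flag stale or failed backups for alerting.
# Field names and the max_age policy are illustrative assumptions.
from datetime import datetime, timedelta

def backup_irregularities(backups, now, max_age=timedelta(days=1)):
    """Return human-readable alerts for failed or overdue backups."""
    alerts = []
    for b in backups:
        if b["status"] != "success":
            alerts.append(f"{b['name']}: last run {b['status']}")
        elif now - b["finished_at"] > max_age:
            alerts.append(f"{b['name']}: no successful backup within {max_age}")
    return alerts

now = datetime(2025, 3, 1)
backups = [
    {"name": "pg-daily", "status": "success", "finished_at": now - timedelta(hours=6)},
    {"name": "pg-weekly", "status": "failure", "finished_at": now - timedelta(days=2)},
]
alerts = backup_irregularities(backups, now)
```

The same loop, run on a schedule, is the seed of the “daily routine” automation the project describes.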
&lt;h3 id="llm-powered-test-scenario-generation-for-open-source-contributions"&gt;LLM-Powered Test Scenario Generation for Open Source Contributions&lt;/h3&gt;
&lt;p&gt;Open-source projects thrive on community contributions, but ensuring that each pull request (PR) has adequate test coverage is a major challenge. Many PRs introduce changes without proper regression tests, leading to bugs and unstable releases.&lt;/p&gt;
&lt;p&gt;This project aims to build an LLM-powered tool that analyzes code changes in GitHub PRs and automatically generates relevant test scenarios. Using an Open Source LLM (e.g., DeepSeek, Mistral, LLaMA), the system will identify impact areas, suggest missing test cases, and recommend regression tests based on past commits. The goal is to integrate this into GitHub workflows, enabling maintainers to quickly assess PR test coverage and guide contributors in writing better tests.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The student will work on developing a system with the following features:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;PR Analysis &amp; Impact Assessment&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Extract and analyze code diffs in pull requests.&lt;/li&gt;
&lt;li&gt;Identify affected functions, dependencies, and modules.&lt;/li&gt;
&lt;li&gt;Predict impact areas using a dependency graph.&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Test Scenario Generation using LLMs&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Use an Open Source LLM (DeepSeek, Mistral, etc.) to generate test cases.&lt;/li&gt;
&lt;li&gt;Recommend unit tests, integration tests, and regression scenarios.&lt;/li&gt;
&lt;li&gt;Compare new tests with existing ones to detect gaps in coverage.&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start="3"&gt;
&lt;li&gt;GitHub Bot for Automated Suggestions&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Implement a bot that comments on PRs with test recommendations.&lt;/li&gt;
&lt;li&gt;Provide interactive feedback to contributors and maintainers.&lt;/li&gt;
&lt;li&gt;Integrate with GitHub Actions for CI/CD automation.&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start="5"&gt;
&lt;li&gt;Regression Test Identification&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Identify existing test cases that need to be re-run.&lt;/li&gt;
&lt;li&gt;Suggest additional tests based on historical PRs and past bug reports.&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start="6"&gt;
&lt;li&gt;Evaluation Metrics &amp; Benchmarking&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Measure effectiveness by tracking missed bugs before/after integration.&lt;/li&gt;
&lt;li&gt;Collect feedback from maintainers and contributors.&lt;/li&gt;
&lt;/ul&gt;
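&lt;p&gt;The PR analysis step might begin with something as simple as pulling changed file names and touched function definitions out of a unified diff, before handing that context to an LLM. A minimal sketch in Python (the diff below is a made-up example):&lt;/p&gt;

```python
# Hypothetical sketch of the "PR analysis" step: extract changed files and
# touched Python function definitions from a unified diff. A real system
# would fetch the diff via the GitHub API and pass this context to an LLM.
import re

def analyze_diff(diff_text):
    files, functions = [], []
    for line in diff_text.splitlines():
        m = re.match(r"\+\+\+ b/(.+)", line)
        if m:
            files.append(m.group(1))
        m = re.match(r"\+\s*def (\w+)", line)
        if m:
            functions.append(m.group(1))
    return {"files": files, "changed_functions": functions}

diff = """\
+++ b/app/billing.py
@@ -10,4 +10,8 @@
+def apply_discount(total, pct):
+    return total * (1 - pct)
"""
report = analyze_diff(diff)
```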
&lt;p&gt;&lt;strong&gt;Future Scope:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Extend beyond GitHub to GitLab, Bitbucket, and other version control systems.&lt;/li&gt;
&lt;li&gt;Support additional test types, such as security and performance tests.&lt;/li&gt;
&lt;li&gt;Implement self-learning mechanisms to improve accuracy over time.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Strong programming skills in Python or JavaScript; experience with GitHub APIs and pull request workflows; understanding of machine learning / LLMs (DeepSeek, Mistral, LLaMA, etc.); familiarity with software testing and QA automation; experience with CI/CD pipelines and GitHub Actions (bonus).
&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours
&lt;strong&gt;Difficulty level:&lt;/strong&gt; Hard
&lt;strong&gt;Mentor:&lt;/strong&gt; Peter Sirotnak, @vasyl.yurkovych&lt;/p&gt;
&lt;h2 id="percona-build-engineering"&gt;Percona Build Engineering&lt;/h2&gt;
&lt;h3 id="sboms-for-percona-database-software---mysql-postgresql-and-mongodb"&gt;SBOMs for Percona database software - MySQL, PostgreSQL, and MongoDB&lt;/h3&gt;
&lt;p&gt;A “software bill of materials” (SBOM) has emerged as a key building block in software security and software supply chain risk management. An SBOM is a nested inventory, a list of ingredients that comprise software components. The project aims to adapt Percona’s build pipelines to generate SBOMs for Percona software for MySQL, PostgreSQL, and MongoDB. This will enable organizations using Percona software to be more secure and avoid software supply chain vulnerabilities like those that proved so harmful in late 2020 with the discovery of the &lt;a href="https://www.csoonline.com/article/3601508/solarwinds-supply-chain-attack-explained-why-organizations-were-not-prepared.html" target="_blank" rel="noopener noreferrer"&gt;SolarWinds&lt;/a&gt; cyberattack, or later with the &lt;a href="https://en.wikipedia.org/wiki/Log4Shell" target="_blank" rel="noopener noreferrer"&gt;Log4j&lt;/a&gt; security flaw.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
At the end of the project, a staging pipeline running in Jenkins and using Trivy should produce complete SBOMs for Percona Server for MySQL, PostgreSQL, and MongoDB; Percona Backup for MongoDB; and Percona XtraBackup for MySQL. The SBOMs should be uploaded automatically to the Percona repository and be publicly downloadable. Additionally, technical documentation describing how the process works is expected.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Jenkins, Trivy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Easy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @radoslaw.szulgo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repository and resources:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-backup-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-backup-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-server-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trivy.dev/v0.33/docs/sbom/" target="_blank" rel="noopener noreferrer"&gt;https://trivy.dev/v0.33/docs/sbom/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
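&lt;p&gt;To make the deliverable concrete: Trivy can emit SBOMs in the CycloneDX JSON format, which downstream tooling can then inventory. A minimal sketch in Python over a hand-written example SBOM (the component names and versions are illustrative):&lt;/p&gt;

```python
# Sketch: summarize components from a CycloneDX SBOM such as the ones
# Trivy can emit. The SBOM below is a hand-written minimal example.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.2.13"}
  ]
}
"""

def component_summary(raw):
    """Return a sorted name@version inventory of the SBOM's components."""
    sbom = json.loads(raw)
    return sorted(f"{c['name']}@{c['version']}" for c in sbom.get("components", []))

summary = component_summary(sbom_json)
```

An inventory like this is what lets an organization answer “are we shipping a vulnerable dependency?” in minutes rather than days.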
&lt;h3 id="evolving-cicd-automating-build-test-and-release-for-robust-software-delivery"&gt;Evolving CI/CD: Automating Build, Test, and Release for Robust Software Delivery&lt;/h3&gt;
&lt;p&gt;Continuous Integration and Continuous Deployment (CI/CD) pipelines are the backbone of modern software development, ensuring rapid, reliable, and repeatable delivery. However, many pipelines still operate in fragmented stages, where builds and tests are automated, but releases remain a manual or semi-automated process.&lt;/p&gt;
&lt;p&gt;This project aims to transform our CI/CD pipelines into a true end-to-end automated system, seamlessly integrating build, test, and release stages. By implementing best practices in CI/CD automation, we will ensure that only thoroughly tested software progresses to release, minimizing human intervention and reducing the risk of deployment failures.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;
The successful completion of this project will result in a fully automated and robust CI/CD pipeline that seamlessly integrates build, test, and release processes. The key outcomes will include:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fully Automated CI/CD Pipeline&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A redesigned pipeline where builds, testing, and releases are interconnected and automated. Code changes will automatically trigger builds, run tests, and, if successful, deploy releases without manual intervention.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Comprehensive Test Integration&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The pipeline will incorporate unit tests, integration tests, security scans, and other quality assurance mechanisms, ensuring that faulty builds do not reach production by enforcing test-driven deployment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Automated Release Process&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A mechanism that automatically releases software only if all tests pass. Versioning, tagging, and artifact management will be streamlined. The release process will be documented and configurable for different environments (e.g., staging, production).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC) &amp; Deployment Automation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Documentation &amp; Guides&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Clear technical documentation detailing the new pipeline’s workflow and configuration.
A step-by-step guide for developers and DevOps engineers on how to use and extend the pipeline.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or similar; Docker; testing frameworks and automated deployment strategies; experience with infrastructure as code (IaC) and cloud environments is a plus.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 350 hours&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @Evgeniy_Patlan , @Vadim_Yalovets&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Relevant repository:&lt;/strong&gt; &lt;a href="https://github.com/Percona-Lab/jenkins-pipelines" target="_blank" rel="noopener noreferrer"&gt;https://github.com/Percona-Lab/jenkins-pipelines&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
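&lt;p&gt;The core of an end-to-end pipeline is the gate between stages: each stage runs only if the previous one succeeded, so a release can never ship from a failing build. A minimal sketch in Python (the stage names and simulated scan failure are illustrative, not the actual Jenkins pipelines):&lt;/p&gt;

```python
# Hypothetical sketch of an end-to-end pipeline gate: stages run in order,
# and the pipeline stops at the first failure, so "release" only ever runs
# after every earlier stage has passed.
def run_pipeline(stages):
    """Run (name, callable) stages in order; report progress and outcome."""
    completed = []
    for name, stage in stages:
        if not stage():
            return {"released": False, "completed": completed, "failed": name}
        completed.append(name)
    return {"released": True, "completed": completed, "failed": None}

result = run_pipeline([
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("security-scan", lambda: False),  # simulated scan failure
    ("release", lambda: True),
])
```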
&lt;h3 id="build-automation-for-open-source-databases"&gt;Build Automation for Open-Source Databases&lt;/h3&gt;
&lt;p&gt;Building and maintaining multiple database forks—such as MySQL, MongoDB, and PostgreSQL—often involves redundant build scripts, leading to inefficiencies, inconsistencies, and maintenance overhead. Currently, each database has its own set of build scripts despite sharing many common steps.&lt;/p&gt;
&lt;p&gt;This project aims to develop a modular, extensible build system that allows for streamlined compilation and packaging of different database forks. The system will provide a flexible framework where users can select required modules, specify target OS distributions, and automate the build process with minimal configuration.&lt;/p&gt;
&lt;p&gt;By implementing a plugin-based architecture, this modular builder will simplify cross-database maintenance, reduce duplication, and improve consistency across different builds.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deliverables:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Modular Build Framework – A reusable, pluggable system that dynamically selects required modules for MySQL, MongoDB, and PostgreSQL builds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Multi-OS Support – Automated builds for multiple Linux distributions (Debian, Ubuntu, CentOS, RHEL) with configurable OS selection.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Automated Package Creation – DEB and RPM package generation with standardized versioning and tagging.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configurable &amp; Scalable Builds – Easy customization of build parameters, allowing extension to new database forks or patches.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CI/CD Integration – Optional support for Jenkins, GitHub Actions, or GitLab CI to enable fully automated builds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Comprehensive Documentation – User and developer guides with example configurations for quick adoption and extension.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
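&lt;p&gt;The plugin-based architecture could, at its simplest, be a registry in which each database fork declares its build steps while shared steps are defined once. A hypothetical sketch in Python (the step names are illustrative, not the actual build scripts):&lt;/p&gt;

```python
# Hypothetical sketch of a plugin-based build framework: shared steps are
# defined once, and each database fork registers only what differs.
SHARED_STEPS = ["fetch-sources", "resolve-deps"]

# Step names below are illustrative placeholders for real build logic.
BUILD_PLUGINS = {
    "percona-server": SHARED_STEPS + ["cmake-configure", "make", "package-deb-rpm"],
    "percona-server-mongodb": SHARED_STEPS + ["scons-build", "package-deb-rpm"],
    "percona-postgresql": SHARED_STEPS + ["configure", "make", "package-deb-rpm"],
}

def build_plan(product, target_os):
    """Return the ordered step list for a product on a given distribution."""
    if product not in BUILD_PLUGINS:
        raise ValueError(f"no build plugin registered for {product}")
    return [f"{step}:{target_os}" for step in BUILD_PLUGINS[product]]

plan = build_plan("percona-server", "ubuntu-22.04")
```

Adding a new fork or distribution then means registering one plugin entry instead of cloning a full script set.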
&lt;p&gt;&lt;strong&gt;Required/preferred skills:&lt;/strong&gt; Bash/Python, CMake, Makefiles, Autotools, Linux and packaging (DEB/RPM), dependency management; CI/CD tools are a plus&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Duration:&lt;/strong&gt; 175 hours&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Difficulty level:&lt;/strong&gt; Medium&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mentors&lt;/strong&gt;: @Evgeniy_Patlan , @Vadim_Yalovets&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Relevant repositories&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mongodb" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-server-mongodb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-xtradb-cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-server&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;More ideas are coming soon!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Suggest your ideas in the comments of the post or on &lt;a href="https://forums.percona.com/t/google-summer-of-code-2025-project-ideas/36461" target="_blank" rel="noopener noreferrer"&gt;the forum&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;GSoC isn’t just about working on predefined ideas—it’s about innovation! If you have a project idea that aligns with &lt;strong&gt;Percona software, AI/ML, security, or database performance&lt;/strong&gt;, submit your proposal, and our mentors will be happy to discuss it with you.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Do you have questions?&lt;/strong&gt; Visit our &lt;a href="https://forums.percona.com/t/google-summer-of-code-2025-project-ideas/36461" target="_blank" rel="noopener noreferrer"&gt;Community Forum&lt;/a&gt; or join our chat channels to connect with potential mentors.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ready to get started?&lt;/strong&gt; See our &lt;a href="https://forums.percona.com/t/google-summer-of-code-2025-contribution-guide/36420" target="_blank" rel="noopener noreferrer"&gt;Google Summer of Code 2025: Contribution guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;See you in GSoC 2025!&lt;/p&gt;</content:encoded>
      <author>Radoslaw Szulgo</author>
      <category>PMM</category>
      <category>Percona</category>
      <category>Opensource</category>
      <category>MongoDB</category>
      <category>GSoC</category>
      <media:thumbnail url="https://percona.community/blog/2025/02/gsoc-blog-post-cover_hu_81a494f59f256715.jpg"/>
      <media:content url="https://percona.community/blog/2025/02/gsoc-blog-post-cover_hu_5736630b427157bd.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Operator for MongoDB 1.19: Remote Backups, Auto-Generated Passwords, and More!</title>
      <link>https://percona.community/blog/2025/01/31/percona-operator-for-mongodb-1.19-remote-backups-auto-generated-passwords-and-more/</link>
      <guid>https://percona.community/blog/2025/01/31/percona-operator-for-mongodb-1.19-remote-backups-auto-generated-passwords-and-more/</guid>
      <pubDate>Fri, 31 Jan 2025 00:00:00 UTC</pubDate>
      <description>The latest release of the Percona Operator for MongoDB, version 1.19, is here. It brings a suite of enhancements designed to streamline your MongoDB deployments on Kubernetes. This release introduces a technical preview of remote file server backups, simplifies user management with auto-generated passwords, supports Percona Server for MongoDB 8.0, and includes numerous other improvements and bug fixes. Let’s dive into the details of what 1.19 has to offer.</description>
      <content:encoded>&lt;p&gt;The latest release of the &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;, &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/RN/Kubernetes-Operator-for-PSMONGODB-RN1.19.0.html" target="_blank" rel="noopener noreferrer"&gt;version 1.19&lt;/a&gt;, is here. It brings a suite of enhancements designed to streamline your MongoDB deployments on Kubernetes. This release introduces a technical preview of remote file server backups, simplifies user management with auto-generated passwords, supports Percona Server for MongoDB 8.0, and includes numerous other improvements and bug fixes. Let’s dive into the details of what 1.19 has to offer.&lt;/p&gt;
&lt;h2 id="remote-backups-with-network-file-system-technical-preview"&gt;Remote Backups with Network File System (Technical Preview)&lt;/h2&gt;
&lt;p&gt;Backing up your MongoDB data is crucial, and Percona Operator for MongoDB 1.19 introduces a powerful new option for backup storage: the filesystem type. This feature, currently in technical preview, allows you to leverage a remote file server, mounted locally as a sidecar volume, for your backups. This is particularly useful in environments with network restrictions that prevent the use of S3-compatible storage or for organizations using non-standard storage solutions that support the Network File System (NFS) protocol.&lt;/p&gt;
&lt;h3 id="setting-up-remote-backups"&gt;Setting Up Remote Backups&lt;/h3&gt;
&lt;p&gt;To use this new capability, you’ll need to add your remote storage as a sidecar volume within the &lt;code&gt;replsets&lt;/code&gt; section of your Custom Resource (and &lt;code&gt;configsvrReplSet&lt;/code&gt; for sharded clusters). Here’s how:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;replsets:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sidecarVolumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: backup-nfs-vol
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; nfs:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server: "nfs-service.storage.svc.cluster.local"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; path: "/psmdb-my-cluster-name-rs0"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then, configure the mount point and sidecar volume name in the &lt;code&gt;backup.volumeMounts&lt;/code&gt; section:&lt;/p&gt;
&lt;p&gt;YAML:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;backup:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumeMounts:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - mountPath: /mnt/nfs/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: backup-nfs-vol
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Finally, set up a &lt;code&gt;filesystem&lt;/code&gt;-type storage in the &lt;code&gt;backup.storages&lt;/code&gt; section, pointing it to the mount point:&lt;/p&gt;
&lt;p&gt;YAML:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;backup:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; enabled: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; storages:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; backup-nfs:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type: filesystem
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; filesystem:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; path: /mnt/nfs/&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;See more in our &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/backups-storage.html#remote-file-server" target="_blank" rel="noopener noreferrer"&gt;documentation about this storage type&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="simplified-user-management-with-auto-generated-passwords"&gt;Simplified User Management with Auto-Generated Passwords&lt;/h2&gt;
&lt;p&gt;Managing user credentials just got easier. Percona Operator for MongoDB 1.19 enhances declarative management of custom MongoDB users by adding the ability to generate passwords automatically. Now, when defining a new user in your &lt;code&gt;deploy/cr.yaml&lt;/code&gt; file, you can omit the reference to an existing Secret containing the password, and the Operator will handle the generation for you:&lt;/p&gt;
&lt;p&gt;YAML:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;users:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: my-user
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; db: admin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; roles:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: clusterAdmin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; db: admin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: userAdminAnyDatabase
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; db: admin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The Operator will create a Secret to store the generated password securely. Note that the Secret is created only after the cluster reaches the Ready state.&lt;/p&gt;
&lt;p&gt;To get the user credentials:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Find the Secret resource named &amp;lt;cluster-name&amp;gt;-custom-user-secret.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the user password with this one-liner:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get secret my-cluster-name-custom-user-secret -o jsonpath='{.data.my-user}' | base64 -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can find more details on this automatically created Secret in our &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/users.html#custom-mongodb-roles" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-80-support"&gt;Percona Server for MongoDB 8.0 Support&lt;/h2&gt;
&lt;p&gt;Staying up-to-date with the latest MongoDB versions is essential for performance and security. Percona Operator for MongoDB 1.19 now officially supports Percona Server for MongoDB 8.0, in addition to 6.0 and 7.0. This means you can leverage the latest features and improvements from MongoDB 8.0, combined with the enterprise-grade enhancements and open-source commitment of Percona Server for MongoDB.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2025/01/operator-mongodb-8.png" alt="Percona Server for MongoDB 8.0 Support" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Check out &lt;a href="https://www.percona.com/blog/percona-server-for-mongodb-8-0-most-performant-ever/" target="_blank" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt; to learn more about the features in MongoDB 8.0.&lt;/p&gt;
&lt;h2 id="streamlined-aws-s3-access-with-iam-roles-for-service-accounts-irsa"&gt;Streamlined AWS S3 Access with IAM Roles for Service Accounts (IRSA)&lt;/h2&gt;
&lt;p&gt;Percona Operator for MongoDB 1.19 adds support for &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" target="_blank" rel="noopener noreferrer"&gt;IAM Roles for Service Accounts (IRSA)&lt;/a&gt;, simplifying secure access to AWS S3 for backups on Amazon EKS. IRSA lets you grant granular S3 permissions to specific Pods via their associated Kubernetes service accounts. This approach ensures that only the Pods that require S3 access receive it, adhering to the principle of least privilege. Furthermore, each Pod can only access credentials linked to its service account, providing strong credential isolation. For enhanced security, all S3 access is tracked through AWS CloudTrail, enabling comprehensive auditability. All of this happens without the need to manually manage and distribute AWS credentials.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Configuration Steps&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an IAM Role: Define an IAM role with S3 access permissions. See &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" target="_blank" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Identify Service Accounts: The Operator Pod uses the &lt;code&gt;percona-server-mongodb-operator&lt;/code&gt; service account, and your cluster Pods use &lt;code&gt;default&lt;/code&gt; (customizable in deploy/cr.yaml).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Annotate Service Accounts: Link the IAM role to both service accounts:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl -n &lt;cluster namespace&gt; annotate serviceaccount default eks.amazonaws.com/role-arn: &lt;YOUR_IAM_ROLE_ARN&gt; --overwrite
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl -n &lt;operator namespace&gt; annotate serviceaccount percona-server-mongodb-operator eks.amazonaws.com/role-arn: &lt;YOUR_IAM_ROLE_ARN&gt; --overwrite&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure S3 Storage: Set up S3 storage in deploy/cr.yaml without s3.credentialsSecret. The Operator will use IRSA.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
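&lt;p&gt;For illustration, the storage definition from step 4 might look like this in deploy/cr.yaml (the bucket name and region are placeholders; the key point is the absence of &lt;code&gt;s3.credentialsSecret&lt;/code&gt;):&lt;/p&gt;

```yaml
backup:
  ...
  storages:
    s3-us-west:
      type: s3
      s3:
        # bucket and region are placeholders - use your own values
        bucket: my-backup-bucket
        region: us-west-2
        # no credentialsSecret here: the Operator falls back to IRSA
```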
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; IRSA credentials take precedence over IAM instance profiles, and S3 credentials in a Secret override both.&lt;/p&gt;
&lt;p&gt;IRSA streamlines S3 access, enhancing security and manageability for your MongoDB backups on EKS. Learn more in our &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/backups-storage.html#automating-access-to-amazon-s3-based-on-iam-roles" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Percona Operator for MongoDB 1.19 delivers a significant step forward in simplifying and automating the management of your MongoDB clusters on Kubernetes. With features like remote backups, auto-generated passwords, and support for Percona Server for MongoDB 8.0, this release empowers you to deploy, manage, and scale your databases with greater ease and efficiency.&lt;/p&gt;
&lt;p&gt;We encourage you to explore the &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/RN/Kubernetes-Operator-for-PSMONGODB-RN1.19.0.html" target="_blank" rel="noopener noreferrer"&gt;full release notes&lt;/a&gt; and try out the new features. As always, your feedback is invaluable to us. Please share your thoughts and contribute to the project on our &lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; or our &lt;a href="https://forums.percona.com/c/mongodb/percona-kubernetes-operator-for-mongodb/29" target="_blank" rel="noopener noreferrer"&gt;Community Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Sergey Pronin</author>
      <category>Kubernetes</category>
      <category>MongoDB</category>
      <category>Percona</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/01/operator-1-19_hu_6722f842e532b42a.jpg"/>
      <media:content url="https://percona.community/blog/2025/01/operator-1-19_hu_61ed34aa6c8532c6.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 3.0.0-GA</title>
      <link>https://percona.community/blog/2025/01/29/percona-monitoring-management-3-ga/</link>
      <guid>https://percona.community/blog/2025/01/29/percona-monitoring-management-3-ga/</guid>
      <pubDate>Wed, 29 Jan 2025 00:00:00 UTC</pubDate>
      <description>We’re excited to announce the release of Percona Monitoring and Management (PMM) 3.0.0 GA.</description>
      <content:encoded>&lt;p&gt;We’re excited to announce the release of &lt;strong&gt;Percona Monitoring and Management (PMM) 3.0.0 GA&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The Percona Monitoring and Management (PMM) 3.0.0 release delivers major security and stability enhancements. Notable security improvements include rootless deployments and encryption of sensitive data, along with improved API authentication using Grafana service accounts. Deployment options have expanded with official ARM support and the ability to use Podman for rootless deployments, providing flexibility and better security. Additionally, the introduction of containerized architecture has increased stability, and a streamlined upgrade process ensures reliability and ease of maintenance.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2025/01/PMM-3.0.0_hu_bab4bf52b037997b.png 480w, https://percona.community/blog/2025/01/PMM-3.0.0_hu_aad2d5b4f9ec2c44.png 768w, https://percona.community/blog/2025/01/PMM-3.0.0_hu_a72c59d901d4239b.png 1400w"
src="https://percona.community/blog/2025/01/PMM-3.0.0.png" alt="Percona Monitoring and Management (PMM) 3.0.0" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;User experience has been significantly improved with more flexible monitoring configurations and UI-based upgrades for Podman installations. This release also includes new features such as monitoring for MongoDB 8.0 and integration with Watchtower for automated container updates. These enhancements aim to provide users with a more secure, stable, and user-friendly monitoring and management experience.&lt;/p&gt;
&lt;h2 id="release-notes"&gt;Release notes&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;To see the full list of changes, check out the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/release-notes/3.0.0.html" target="_blank" rel="noopener noreferrer"&gt;3.0.0 GA Release Notes&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Highlights of this release:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Security Enhancements&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Implementation of rootless deployments to enhance security.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Encryption of sensitive data to ensure information confidentiality.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improved API authentication with Grafana service accounts, increasing access security.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deployment Options&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Official PMM Client ARM support, allowing the use of PMM on ARM architecture devices.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rootless deployments using Podman, providing flexibility and security.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Support for deployments using Helm, Docker, Virtual Appliance, and Amazon AWS for various use cases.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stability Improvements&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Increased stability through containerized architecture, providing isolation and manageability.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Streamlined upgrade process, reducing the risk of failures during updates and enhancing reliability.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;User Experience&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Flexible monitoring configurations, allowing users to tailor the system to their needs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;UI-based upgrades for Podman installations, making the update process more convenient and intuitive.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;New Features&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Monitoring for MongoDB 8.0, ensuring support for the latest database versions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Integration with Watchtower for automated container updates, simplifying management and keeping the system up-to-date.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We invite you to install and try the new PMM 3.0.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Quickstart guide&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/quickstart.html" target="_blank" rel="noopener noreferrer"&gt;Get started with PMM&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Multiple installation options&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/3/install-pmm/index.html" target="_blank" rel="noopener noreferrer"&gt;About PMM installation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;Contact us on the &lt;a href="https://forums.percona.com/c/percona-monitoring-and-management-pmm/pmm-3/84" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Ondrej Patocka</author>
      <category>PMM</category>
      <category>General Availability</category>
      <category>Monitoring</category>
      <category>Percona</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2025/01/pmm-blog-post-cover_hu_157785b07ee52466.jpg"/>
      <media:content url="https://percona.community/blog/2025/01/pmm-blog-post-cover_hu_dd1f0c6bd73fc043.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL 8.4 Support in Percona Toolkit 3.7.0</title>
      <link>https://percona.community/blog/2025/01/06/mysql-8.4-support-in-percona-toolkit-3.7.0/</link>
      <guid>https://percona.community/blog/2025/01/06/mysql-8.4-support-in-percona-toolkit-3.7.0/</guid>
      <pubDate>Mon, 06 Jan 2025 00:00:00 UTC</pubDate>
      <description>Percona Toolkit 3.7.0 has been released on Dec 23, 2024. The main feature of this release is MySQL 8.4 support.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona Toolkit 3.7.0 has been released on &lt;strong&gt;Dec 23, 2024&lt;/strong&gt;. The main feature of this release is MySQL 8.4 support.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;In this blog, I will explain what has changed. A full list of improvements and bug fixes can be found in the &lt;a href="https://docs.percona.com/percona-toolkit/release_notes.html" target="_blank" rel="noopener noreferrer"&gt;release notes&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;TL;DR:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Replication statements in MySQL 8.4 are fully supported by Percona Toolkit.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pt-slave-delay&lt;/code&gt; has been deprecated.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pt-slave-find&lt;/code&gt; has been renamed to &lt;code&gt;pt-replica-find&lt;/code&gt;. The old name has been deprecated but remains in the repository as an alias of &lt;code&gt;pt-replica-find&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pt-slave-restart&lt;/code&gt; has been renamed to &lt;code&gt;pt-replica-restart&lt;/code&gt;. The old name has been deprecated but remains in the repository as an alias of &lt;code&gt;pt-replica-restart&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Basic SSL support has been added to the tools where it was not working before (see &lt;a href="https://perconadev.atlassian.net/browse/PT-191" target="_blank" rel="noopener noreferrer"&gt;PT-191&lt;/a&gt;), and Percona Toolkit now supports the &lt;code&gt;caching_sha2_password&lt;/code&gt; and &lt;code&gt;sha256_password&lt;/code&gt; authentication plugins. Full implementation of PT-191 is planned for the next version.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="replication-statements"&gt;Replication Statements&lt;/h2&gt;
&lt;p&gt;MySQL 8.4 removed the previously deprecated offensive terminology, such as &lt;code&gt;SLAVE&lt;/code&gt; and &lt;code&gt;MASTER&lt;/code&gt;. This made tools written for earlier versions incompatible with the new version. Percona Toolkit was also affected, and I had to rewrite it.&lt;/p&gt;
&lt;p&gt;However, Percona Toolkit must run not only with MySQL 8.4 but also with older versions, so the change was not a simple search and replace. That would not even work within MySQL 8.0, because the new syntax for the &lt;code&gt;CHANGE REPLICATION SOURCE&lt;/code&gt; and &lt;code&gt;START/STOP REPLICA&lt;/code&gt; commands was first introduced in 8.0.23; earlier versions aren’t aware of this change.&lt;/p&gt;
&lt;p&gt;Another challenge: while I could replace all occurrences of the word &lt;code&gt;SLAVE&lt;/code&gt; with &lt;code&gt;REPLICA&lt;/code&gt;, I could not do the same for the &lt;code&gt;MASTER&lt;/code&gt;/&lt;code&gt;SOURCE&lt;/code&gt; pair, because replication source-related commands are mapped differently:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Legacy syntax&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Syntax without offensive words&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CHANGE MASTER&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CHANGE REPLICATION SOURCE&lt;/code&gt; (since 8.0.23)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SHOW MASTER STATUS&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;SHOW BINARY LOG STATUS&lt;/code&gt; (since 8.4.0)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RESET MASTER&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;RESET BINARY LOGS[ AND GTIDS]&lt;/code&gt; (since 8.4.0)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MASTER&lt;/code&gt; in other commands&lt;/td&gt;
&lt;td&gt;&lt;code&gt;SOURCE&lt;/code&gt; (partially since 8.0.23, fully since 8.4)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;So, I added selectors that use the correct command depending on the MySQL server version.&lt;/p&gt;
&lt;p&gt;I intentionally implemented the new syntax for version 8.4 only, so that I do not have to check every single minor version of 8.0. I also did not implement the new syntax for MariaDB; this may happen in the future.&lt;/p&gt;
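&lt;p&gt;The idea of such a selector can be sketched as follows (a minimal Python illustration with hypothetical helper names; Percona Toolkit implements this logic in Perl):&lt;/p&gt;

```python
# Minimal sketch of version-dependent command selection (hypothetical
# helper names; Percona Toolkit implements this logic in Perl).
# The new syntax is used from 8.4 only, matching the choice described
# above, even where 8.0.23 already accepts some of the new commands.

NEW_SYNTAX_SINCE = (8, 4, 0)

def binlog_status_command(version):
    """Pick the binary log status statement for a (major, minor, patch) tuple."""
    if version >= NEW_SYNTAX_SINCE:
        return "SHOW BINARY LOG STATUS"
    return "SHOW MASTER STATUS"

def reset_binlogs_command(version):
    """Pick the statement that resets the binary logs."""
    if version >= NEW_SYNTAX_SINCE:
        return "RESET BINARY LOGS AND GTIDS"
    return "RESET MASTER"
```

An 8.0 server thus still receives the legacy statement, while 8.4 and later get the new one.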
&lt;p&gt;&lt;em&gt;&lt;strong&gt;However, all messages displayed to the user use the new syntax. If you rely on old syntax somewhere in your scripts, adjust them.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Internally, most of the functions were renamed to use the new syntax, but the important module &lt;code&gt;lib/MasterSlave.pm&lt;/code&gt; kept its name.&lt;/p&gt;
&lt;h2 id="deprecated-and-outdated-tools"&gt;Deprecated and Outdated Tools&lt;/h2&gt;
&lt;p&gt;As a result of this change, &lt;code&gt;pt-slave-delay&lt;/code&gt; has been deprecated. The tool stays in the repository and works as before when connected to MySQL 8.0 or an earlier version. However, it refuses to work with MySQL 8.4. The tool will be removed in one of the future versions.&lt;/p&gt;
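&lt;p&gt;MySQL’s built-in delayed replication covers the same use case as &lt;code&gt;pt-slave-delay&lt;/code&gt;. For example, to apply changes one hour behind the source (syntax for MySQL 8.0.23 and later; the delay value is illustrative):&lt;/p&gt;

```sql
-- Run on the replica; 3600 seconds is an example delay
STOP REPLICA SQL_THREAD;
CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600;
START REPLICA SQL_THREAD;
```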
&lt;p&gt;The tools &lt;code&gt;pt-slave-find&lt;/code&gt; and &lt;code&gt;pt-slave-restart&lt;/code&gt; were renamed to &lt;code&gt;pt-replica-find&lt;/code&gt; and &lt;code&gt;pt-replica-restart&lt;/code&gt;. Aliases with the old names still exist, so you have time to update your scripts. However, expect these aliases to be removed in a future version as well.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;pt-variable-advisor&lt;/code&gt; tool has been updated to reflect current default values.&lt;/p&gt;
&lt;h2 id="basic-ssl-support"&gt;Basic SSL Support&lt;/h2&gt;
&lt;p&gt;Percona Toolkit did not have consistent SSL support: some of the tools were able to connect using SSL, and others were not. This was reported in &lt;a href="https://perconadev.atlassian.net/browse/PT-191" target="_blank" rel="noopener noreferrer"&gt;PT-191&lt;/a&gt;. In this version, I added the option &lt;code&gt;s&lt;/code&gt; to the &lt;code&gt;DSN&lt;/code&gt; that instructs &lt;code&gt;DBD::mysql&lt;/code&gt; to open a secure connection to the database. As a result, Percona Toolkit now supports the &lt;code&gt;caching_sha2_password&lt;/code&gt; and &lt;code&gt;sha256_password&lt;/code&gt; authentication plugins. However, other SSL options are still missing; full SSL support will be added in the next version.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Percona Toolkit fully supports MySQL 8.4. If you use &lt;code&gt;pt-slave-find&lt;/code&gt; and &lt;code&gt;pt-slave-restart&lt;/code&gt;, consider calling them by their new names, &lt;code&gt;pt-replica-find&lt;/code&gt; and &lt;code&gt;pt-replica-restart&lt;/code&gt;. The &lt;code&gt;pt-slave-delay&lt;/code&gt; tool has been deprecated and will be removed in a future version; use the built-in &lt;a href="https://dev.mysql.com/doc/refman/8.4/en/replication-delayed.html" target="_blank" rel="noopener noreferrer"&gt;delayed replication&lt;/a&gt; feature instead.&lt;/p&gt;</content:encoded>
      <author>Sveta Smirnova</author>
      <category>Toolkit</category>
      <category>MySQL</category>
      <category>Percona</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2025/01/toolkit-370_hu_5b639c8b155c6c50.jpg"/>
      <media:content url="https://percona.community/blog/2025/01/toolkit-370_hu_a2b76fa1e2f67fb9.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 3.0.0-Beta - Tech Preview</title>
      <link>https://percona.community/blog/2024/12/02/percona-monitoring-management-technical-preview/</link>
      <guid>https://percona.community/blog/2024/12/02/percona-monitoring-management-technical-preview/</guid>
      <pubDate>Mon, 02 Dec 2024 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 3.0.0 Beta - Tech Preview We’re excited to announce the Tech Preview (Beta) release of Percona Monitoring and Management (PMM) 3.0.0-Beta.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-300-beta---tech-preview"&gt;Percona Monitoring and Management 3.0.0 Beta - Tech Preview&lt;/h2&gt;
&lt;p&gt;We’re excited to announce the Tech Preview (Beta) release of &lt;strong&gt;Percona Monitoring and Management (PMM) 3.0.0-Beta&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This release is intended for testing environments only, as it’s not yet production-ready. The GA (General Availability) release will be available through standard channels in the upcoming months.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id="release-notes"&gt;Release notes&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;To see the full list of changes, check out the &lt;a href="https://pmm-doc-3.onrender.com/release-notes/3.0.0_Beta.html" target="_blank" rel="noopener noreferrer"&gt;3.0.0-Beta - Tech Preview Release Notes&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="installation-options"&gt;Installation options&lt;/h2&gt;
&lt;h3 id="pmm-server"&gt;PMM Server&lt;/h3&gt;
&lt;h4 id="docker"&gt;Docker&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://hubgw.docker.com/r/perconalab/pmm-server/tags?name=3.0.0-beta" target="_blank" rel="noopener noreferrer"&gt;Server&lt;/a&gt;: &lt;code&gt;docker pull perconalab/pmm-server:3.0.0-beta&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pmm-doc-3.onrender.com/install-pmm/install-pmm-server/baremetal/docker/easy-install.html" target="_blank" rel="noopener noreferrer"&gt;Docker installation guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="vm"&gt;VM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM3-Server-2024-11-26-1307.ova" target="_blank" rel="noopener noreferrer"&gt;Download OVA file&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pmm-doc-3.onrender.com/install-pmm/install-pmm-server/baremetal/virtual/index.html" target="_blank" rel="noopener noreferrer"&gt;VM Installation guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="pmm-client"&gt;PMM Client&lt;/h3&gt;
&lt;h4 id="docker-images"&gt;Docker images&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://hubgw.docker.com/r/perconalab/pmm-client/tags?name=3.0.0-beta" target="_blank" rel="noopener noreferrer"&gt;AMD 64 + ARM 64&lt;/a&gt;: &lt;code&gt;docker pull perconalab/pmm-client:3.0.0-beta&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="binary-packages"&gt;Binary packages&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://downloads.percona.com/downloads/TESTING/pmm-client-3.0.0beta/pmm-client-3.0.0beta.AMD64.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download AMD 64 tarball&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://downloads.percona.com/downloads/TESTING/pmm-client-3.0.0beta/pmm-client-3.0.0beta.ARM64.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download ARM 64 tarball&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="package-manager-installation"&gt;Package Manager installation&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;Enable testing repository via Percona-release: &lt;code&gt;percona-release enable pmm3-client testing&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Install relevant pmm-client package using your system’s package manager&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;p&gt;Contact us on the &lt;a href="https://forums.percona.com/c/percona-monitoring-and-management-pmm" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Ondrej Patocka</author>
      <category>PMM</category>
      <category>Technical Preview</category>
      <category>Monitoring</category>
      <category>Percona</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/images/pmm/pmm-blog-post-cover_hu_d535f2202891bf3f.jpg"/>
      <media:content url="https://percona.community/images/pmm/pmm-blog-post-cover_hu_ab7fc16f44593397.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Bug Report: October 2024</title>
      <link>https://percona.community/blog/2024/11/25/percona-bug-report-october-2024/</link>
      <guid>https://percona.community/blog/2024/11/25/percona-bug-report-october-2024/</guid>
      <pubDate>Mon, 25 Nov 2024 00:00:00 UTC</pubDate>
      <description>At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.</description>
      <content:encoded>&lt;p&gt;At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.&lt;/p&gt;
&lt;p&gt;We constantly update our &lt;a href="https://jira.percona.com/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; and monitor &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other boards&lt;/a&gt; to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This post is a central place to get information on the most noteworthy open and recently resolved bugs.&lt;/p&gt;
&lt;p&gt;In this edition of our bug report, we cover the following bugs:&lt;/p&gt;
&lt;h2 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8057" target="_blank" rel="noopener noreferrer"&gt;PS-8057&lt;/a&gt;: When max_slowlog_size is set to above 4096, then it gets reset to 1073741824. This overwrites the slow log file path with a different file name, which becomes like node_name.log.000001. Due to this issue, your path defined at slow_query_log_file won`t be useful. This issue has started happening since MySQL Version 8.0.32.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;p&gt;MySQL 8.0.36 is running with the following set of configurations:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log = ON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log_file = /home/user/sandboxes/msb_ps8_0_36/data/slow
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;long_query_time = 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_slowlog_files = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_slowlog_size = 510000000&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Check the slow_query_log_file path:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql [localhost:8036] {msandbox} ((none)) &gt; show global variables like "%slow_query_log_file%";
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------+------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------+------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| slow_query_log_file | /home/adi/sandboxes/msb_ps8_0_36/data/localhost.log.000001 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------+------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Interestingly, you will see that this file, /home/user/sandboxes/msb_ps8_0_36/data/localhost.log.000001, was not even created.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;user@localhost:~/sandboxes/msb_ps8_0_36/data$ ll | grep slow
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r----- 1 adi adi 355518891 Aug 13 16:45 localhost-slow.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r----- 1 adi adi 1079653421 Jul 30 18:13 localhost-slow.log.old
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r----- 1 adi adi 255 Aug 13 16:48 slow
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r----- 1 adi adi 255 Aug 13 16:45 slow.000001&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After removing max_slowlog_size = 510000000 from the configuration, the variable reports the correct file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql [localhost:8036] {msandbox} ((none)) &gt; show global variables like "%slow_query_log_file%";
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------+--------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------+--------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| slow_query_log_file | /home/adi/sandboxes/msb_ps8_0_36/data/slow |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------+--------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 5.7.36-39, 8.0.35-27, 8.0.36-28&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PS 8.0.39-30, 8.4.2-2&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Use "SET GLOBAL slow_query_log_file = '&lt;correct slow query log file&gt;';"&lt;/p&gt;
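&lt;p&gt;As a sketch of the workaround (the path below is taken from the sandbox output above; substitute your own slow log path):&lt;/p&gt;

```sql
-- Hedged sketch: point the server back at the intended slow query log file.
-- The path is illustrative, from the sandbox in this example.
SET GLOBAL slow_query_log_file = '/home/adi/sandboxes/msb_ps8_0_36/data/localhost-slow.log';

-- Verify the change took effect.
SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';
```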
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9214" target="_blank" rel="noopener noreferrer"&gt;PS-9214&lt;/a&gt;: INPLACE ALTER TABLE might fail with a duplicate key error if concurrent insertions occur; there have been many bugs reported here and in MySQL bugs regarding duplicate key errors while doing an online alter table operation on tables with primary and unique keys indexes. The bug is not as easy to reproduce but involves ONLY the primary key and includes an atomic sequence that cannot create a duplicate.  It seems to be related to page splits/merges.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-27, 8.0.36-28&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PS 8.0.39-30, 8.4.2-2&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=115511" target="_blank" rel="noopener noreferrer"&gt;115511&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Use ALTER TABLE … ALGORITHM=COPY instead.&lt;/p&gt;
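&lt;p&gt;A minimal sketch of the workaround (the table and index names are hypothetical):&lt;/p&gt;

```sql
-- Hedged sketch: ALGORITHM=COPY rebuilds the table with the copy algorithm,
-- which blocks concurrent writes for the duration (LOCK=SHARED still permits
-- reads), avoiding the spurious duplicate-key failure of the INPLACE path.
-- `t1` and `idx_created_at` are illustrative names.
ALTER TABLE t1
  ADD INDEX idx_created_at (created_at),
  ALGORITHM=COPY, LOCK=SHARED;
```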
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9275" target="_blank" rel="noopener noreferrer"&gt;PS-9275&lt;/a&gt;: When querying based on a function, MySQL does not use the available functional index when using the LIKE operator, which results inconsistent query plans when functional Indexes are used.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE `test` (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `id` int NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `a` varchar(200) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `test02` varchar(9) GENERATED ALWAYS AS (monthname(from_unixtime(`a`))) VIRTUAL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `!hidden!test01!0!0` varchar(9) GENERATED ALWAYS AS (monthname(from_unixtime(`a`))) VIRTUAL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PRIMARY KEY (`id`),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; KEY `test01` ((monthname(from_unixtime(`a`)))),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; KEY `test02` (`test02`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) ENGINE=InnoDB AUTO_INCREMENT=14 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; explain select MONTHNAME(FROM_UNIXTIME(a)) from test WHERE MONTHNAME(FROM_UNIXTIME(a)) Like 'April%';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 | SIMPLE | test | NULL | ALL | NULL | NULL | NULL | NULL | 13 | 100.00 | Using where |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set, 1 warning (0,00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; explain select MONTHNAME(FROM_UNIXTIME(a)) from test WHERE `!hidden!test01!0!0` Like 'April%';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------+------------+-------+---------------+--------+---------+------+------+----------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------+------------+-------+---------------+--------+---------+------+------+----------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 | SIMPLE | test | NULL | range | test01 | test01 | 39 | NULL | 2 | 100.00 | Using where |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------+------------+-------+---------------+--------+---------+------+------+----------+-------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set, 1 warning (0,00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.36-28, 8.4.X&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=104713" target="_blank" rel="noopener noreferrer"&gt;104713&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Use the indexes created on virtual fields explicitly.&lt;/p&gt;
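&lt;p&gt;Using the schema from the example above, this can mean referencing the visible generated column directly instead of repeating the expression; a sketch:&lt;/p&gt;

```sql
-- Hedged sketch: query the indexed generated column `test02` (defined in the
-- CREATE TABLE above as MONTHNAME(FROM_UNIXTIME(a))) so the optimizer can use
-- the KEY `test02` for the LIKE predicate, rather than the raw expression.
EXPLAIN SELECT test02 FROM test WHERE test02 LIKE 'April%';
```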
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9286" target="_blank" rel="noopener noreferrer"&gt;PS-9286:&lt;/a&gt; &lt;a href="https://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.html#:~:text=Limits%20Attribute%20Rules-,3.22%20State,-This%20attribute%20is" target="_blank" rel="noopener noreferrer"&gt;KMIP&lt;/a&gt; Component leaves keys in a pre-active state.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.X, 8.3.0-1, 8.4.0-1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PS 8.0.39-30, 8.4.2-2&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9314" target="_blank" rel="noopener noreferrer"&gt;PS-9314:&lt;/a&gt; The database crashed due to the SELECT statement. Since the JSON is invalid, the command should return ERROR 3146, an Invalid data type for JSON, but unfortunately, it crashed the instance with Signal 11 using JSON_TABLE.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.36-28, 8.0.37-29, 8.0.39-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PS 8.0.39-30, 8.4.2-2&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; show global variables like 'version%';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+-----------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+-----------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version | 8.0.36-28 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version_comment | Percona Server (GPL), Release 28, Revision 47601f19 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version_compile_machine | x86_64 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version_compile_os | Linux |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version_compile_zlib | 1.2.13 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version_suffix | |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+-----------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;6 rows in set (0.01 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT ele AS domain FROM JSON_TABLE('["TEST'+(select load_file('test'))+'"]', "$[*]" COLUMNS (ele VARCHAR(70) PATH "$" )) AS json_elements ;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 2013 (HY000): Lost connection to MySQL server during query
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;No connection. Trying to reconnect...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Can't connect to the server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9369" target="_blank" rel="noopener noreferrer"&gt;PS-9369:&lt;/a&gt; The audit plugin causes memory exhaustion after a few days; disconnecting threads and disabling the audit plugin is undesirable. This workaround can not be used since it requires scheduling an application outage. Even when small, it’s a recurrent event.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.37-29&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PS 8.0.40-31 (not yet released)&lt;/p&gt;
&lt;h2 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4453" target="_blank" rel="noopener noreferrer"&gt;PXC-4453:&lt;/a&gt; In 3 Node PXC cluster, node01 has active flow control(FC). Active FC blocks user sessions to insert a message into the channel queue (session waits on send monitor (conn-&gt;sm)); send monitor is blocked because FC is active. The idea behind the logic is that applier threads, when consuming messages from the queue conn-&gt;recv_q, should check if FC is active, and if the queue level is below conn-&gt;lower_limit, FC should be disabled, and the user connection thread waiting on the sending monitor should be woken up. In other words, disabling the FC signal is driven by the consumption of events from recv_q by applier threads.&lt;/p&gt;
&lt;p&gt;In this case, it seems that recv_q is empty, but FC is active, so nothing can be added to recv_q. We have a vicious circle of some kind of deadlock, and due to this race condition, we are seeing cluster hangs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 5.7.25, 5.7.44, 8.0.36-28&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXC 8.0.37-29, 8.4.0&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4404" target="_blank" rel="noopener noreferrer"&gt;PXC-4404:&lt;/a&gt; wsrep_preordered=ON causes protocol violations, which cause a node to crash when the group view changes on a cluster with a node acting as an async replica.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 5.7.44-31&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Set wsrep_preordered=OFF; however, you may experience a delay in async replication.&lt;/p&gt;
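&lt;p&gt;A minimal sketch of applying the workaround at runtime:&lt;/p&gt;

```sql
-- Hedged sketch: disable preordered replication on the node acting as an
-- async replica. Note this may delay async replication, and the option is
-- deprecated (see the note below).
SET GLOBAL wsrep_preordered = OFF;
```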
&lt;p&gt;Note: Option &lt;a href="https://galeracluster.com/library/documentation/mysql-wsrep-options.html#wsrep-preordered" target="_blank" rel="noopener noreferrer"&gt;wsrep_preordered&lt;/a&gt; is deprecated in MySQL-wsrep: 8.0.19-26.3, MariaDB: 10.1.1&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4362" target="_blank" rel="noopener noreferrer"&gt;PXC-4362:&lt;/a&gt; The PXC node evicted when creating a function by the user doesn`t have the super privilege, and binary logging is enabled.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.34-26&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXC 8.0.36-28, 8.4.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Setting log_bin_trust_function_creators is the workaround. Note that log_bin_trust_function_creators has been deprecated since MySQL 8.0.34 and will be removed in a future release.&lt;/p&gt;
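&lt;p&gt;A sketch of the workaround:&lt;/p&gt;

```sql
-- Hedged sketch: allow non-SUPER users to create stored functions while
-- binary logging is enabled. Deprecated as of MySQL 8.0.34; prefer granting
-- the appropriate privileges once on a fixed version.
SET GLOBAL log_bin_trust_function_creators = 1;
```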
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4365" target="_blank" rel="noopener noreferrer"&gt;PXC-4365&lt;/a&gt;: PXC nodes leave clusters when the row size is too large and have more than 3 nvarchar columns.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXC 8.0.36-28, 8.3.0&lt;/p&gt;
&lt;h2 id="percona-toolkit"&gt;Percona Toolkit&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2325" target="_blank" rel="noopener noreferrer"&gt;PT-2325&lt;/a&gt;: pt-table-sync does not produce the correct SQL statements to sync tables containing JSON columns properly.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;p&gt;pt-table-sync emits the following SQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DELETE FROM `test`.`test_to` WHERE `id`='2' AND `data`='{"baz": "quux"}' LIMIT 1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO `test`.`test_to`(`id`, `data`) VALUES ('1', '{"foo": "bar"}');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The INSERT statement works fine, but the DELETE fails to delete the row with &lt;code&gt;id&lt;/code&gt;='2', because the AND &lt;code&gt;data&lt;/code&gt;='{"baz": "quux"}' portion of the WHERE clause matches zero rows.&lt;/p&gt;
&lt;p&gt;Verify the incorrect contents of the test_to table with the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Examine the state of our test tables.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker exec -it mysql_5_7_12_test mysql -utest -ptest -e "
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; use test;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; select * from test_to;"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;That should return the following output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+-----------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id | data |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+-----------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2 | {"baz": "quux"} |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 | {"foo": "bar"} |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+-----------------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Witness that the row with id=2 still exists in the table and was not deleted as it should have been. With JSON columns, the DELETE statement would need to look like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DELETE FROM `test`.`test_to`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE `id`='2'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;AND `data`=CAST('{"baz": "quux"}' AS JSON)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;LIMIT 1;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2329" target="_blank" rel="noopener noreferrer"&gt;PT-2329&lt;/a&gt;: During the run, pt-archiver will ignore columns that are camelCase during the insert, but it will get all the columns during select.&lt;/p&gt;
&lt;p&gt;This can be confirmed with a dry run:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pt-archiver --source [...] --dest [...] --where "1=1" --statistics --progress=10000 --limit=1000 --no-delete --no-safe-auto-increment --no-check-columns --columns=addressLine1,addressLine2,city,state,postalCode,country,customerNumber --why-quit --skip-foreign-key-checks --dry-run&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here are the results:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT /*!40001 SQL_NO_CACHE */ `addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`customerNumber`,`customernumber` FROM `classicmodels`.`customers` FORCE INDEX(`PRIMARY`) WHERE (1=1) ORDER BY `customernumber` LIMIT 1000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT /*!40001 SQL_NO_CACHE */ `addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`customerNumber`,`customernumber` FROM `classicmodels`.`customers` FORCE INDEX(`PRIMARY`) WHERE (1=1) AND ((`customernumber` &gt; ?)) ORDER BY `customernumber` LIMIT 1000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO `classicmodels`.`addresses`(`city`,`state`,`country`) VALUES (?,?,?)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Include all columns in lowercase in the --columns parameter until the bug is fixed.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2344" target="_blank" rel="noopener noreferrer"&gt;PT-2344&lt;/a&gt;: pt-config-diff compares mysqld options, but it fails if the [mysqld] section is in uppercase, even though that is a valid way of setting mysqld variables. Since [MYSQLD] is acceptable for MySQL, pt-config-diff should compare the options under that section.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Use a lowercase [mysqld] section header until the bug is fixed.&lt;/p&gt;
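&lt;p&gt;For illustration, both section headers below are accepted by mysqld, but only the lowercase form is currently compared by pt-config-diff (the variable shown is an arbitrary example):&lt;/p&gt;

```ini
# Valid for mysqld, but currently skipped by pt-config-diff:
[MYSQLD]
max_connections = 500

# Equivalent form that pt-config-diff does compare; use this until the fix:
[mysqld]
max_connections = 500
```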
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2355" target="_blank" rel="noopener noreferrer"&gt;PT-2355&lt;/a&gt;: Table data is lost if we accidentally resume a previously failed job that has null boundaries. pt-online-schema-change should not resume a job with empty boundaries.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.6.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PT 3.7.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Do not run pt-online-schema-change with job id having null boundaries.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2356" target="_blank" rel="noopener noreferrer"&gt;PT-2356&lt;/a&gt;: If you run pt-online-schema-change, which results in an error, then subsequent runs will create new tables that won`t be cleaned up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.6.0&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2349" target="_blank" rel="noopener noreferrer"&gt;PT-2349&lt;/a&gt;: pt-table-sync is failing to sync data from PXC to the async environment, and trigger errors include “WSREP detected deadlock/conflict and aborted the transaction.”&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.3.1, 3.5.2, 3.6.0&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-1726" target="_blank" rel="noopener noreferrer"&gt;PT-1726&lt;/a&gt;: pt-query-digest is not distinguishing queries when an alias is used&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.6.0&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;p&gt;Queries from slow query log:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Time: 2019-01-31T11:00:00.728957Z
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# User@Host: sageone_ext_uk[sageone_ext_uk] @ [10.181.130.22] Id: 18714290
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Query_time: 2.709699 Lock_time: 0.000402 Rows_sent: 19 Rows_examined: 51011
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;use sageone_ext_uk;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SET timestamp=1548932400;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT a,b,c from table1 as t1 where t1.a=3 and t1.b=5;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Time: 2019-01-31T11:00:00.728957Z
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# User@Host: sageone_ext_uk[sageone_ext_uk] @ [10.181.130.22] Id: 18714290
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Query_time: 2.709699 Lock_time: 0.000402 Rows_sent: 19 Rows_examined: 51011
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;use sageone_ext_uk;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SET timestamp=1548932400;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT a,b,c from table1 as t1 where t1.a=3 and t1.c=5;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The fingerprints for the above queries are the same, which is incorrect behaviour:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;select a,b,c from table? as t? where t?=? and t?=?&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2374" target="_blank" rel="noopener noreferrer"&gt;PT-2374&lt;/a&gt;: If we say –ignore=bob, every combination of the bob user will be ignored. This includes bob@localhost, bob@::1, bob@foobar, etc. But this is not the case. Only bob@% is ignored; pt-show-grants –ignore does not ignore all accounts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.6.0&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2375" target="_blank" rel="noopener noreferrer"&gt;PT-2375&lt;/a&gt;: When pt-table-sync is used on a table with a GENERATED AS column, it fails because we cannot REPLACE/INSERT values into a GENERATED column.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`requestStatus` tinyint(1) GENERATED ALWAYS AS (if((`provRequired` = 0),(`httpSyncStatus` not between 200 and 299),(`httpAsyncStatus` not between 200 and 299))) VIRTUAL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 3105 (HY000): The value specified for generated column 'requestStatus' in table 'qqq' is not allowed.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The documentation for the --ignore-columns parameter specifically states that if a REPLACE/INSERT is needed, all columns will be used. As a result, pt-table-sync does not work with generated columns.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.6.0&lt;/p&gt;
&lt;h2 id="pmm-percona-monitoring-and-management"&gt;PMM [Percona Monitoring and Management]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12013" target="_blank" rel="noopener noreferrer"&gt;PMM-12013:&lt;/a&gt; If we add many RDS instances to the PMM server, say 200+, and Change the prom scrape.maxScrapeSize to the value that allows the VM to parse the reply from the exporter, then the metrics are gathered unreliably, there are gaps, and the exporter`s RSS feed goes to like 5GB for instance. This concludes that rds_exporter is unreliable for large deployments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.35.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PMM 3.0.0-Beta [available as Tech Preview]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12161" target="_blank" rel="noopener noreferrer"&gt;PMM-12161:&lt;/a&gt; In the Mongodb cluster summary page, Under QPS of Config Services dashboard, it is being clubbed configRS, mongoS and mongod servers. This results in too many configuration services under the QPS of the Config Services dashboard.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.42.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PMM 3.1 [Yet to Release]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12993" target="_blank" rel="noopener noreferrer"&gt;PMM-12993:&lt;/a&gt; In PMM, CPU metrics have a label “mode” to identify between CPU info: sys, iowait, nice, user, idle, etc. With 1 rds instance, the metric is perfectly fine. However, after adding more instances, the CPU metric is still collected, but the “mode” label is empty, which breaks the graphs in the Advanced Data Exploration dashboard.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.41.1, 2.41.2&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13148" target="_blank" rel="noopener noreferrer"&gt;PMM-13148&lt;/a&gt;: If we run the queries without using the schema name, then we don`t see such queries in the QAN.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql [localhost:8036] {msandbox} ((none)) &gt; update test.joinit set g=100,t="06:44:50" where i=1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 1 row affected (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Rows matched: 1 Changed: 1 Warnings: 0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here, the database was not selected explicitly with the USE &lt;database&gt; command; the query was executed directly with a schema-qualified table name. As a result, QAN cannot capture such queries for analytics.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.41.2&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Run queries with USE &lt;dbname&gt;; &lt;Query&gt;;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13252" target="_blank" rel="noopener noreferrer"&gt;PMM-13252:&lt;/a&gt; A 500 error message is returned while creating the role with the existing name.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Enable Access roles in PMM Settings&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open the Access role page and create a role with the name “Test”.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Try to create a new role with the same name “Test”.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It returns an internal server error (500):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;in logs: msg="RPC /accesscontrol.v1beta1.AccessControlService/CreateRole done in 1.409839ms with unexpected error: pq: duplicate key value violates unique constraint "roles_title_key""&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.42.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; It is expected to be fixed in PMM 3&lt;/p&gt;
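&lt;p&gt;The underlying problem is that a database uniqueness violation surfaces as a generic 500 instead of a conflict response. The intended mapping can be sketched in shell (illustrative only, not PMM's actual code; the classifier and status codes are assumptions):&lt;/p&gt;

```shell
# Illustrative sketch (not PMM's code): a duplicate-key error from Postgres
# should map to HTTP 409 Conflict rather than a generic 500.
classify_error() {
  case "$1" in
    *'duplicate key value violates unique constraint'*) echo 409 ;;
    *) echo 500 ;;
  esac
}

classify_error 'pq: duplicate key value violates unique constraint "roles_title_key"'
```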
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13277" target="_blank" rel="noopener noreferrer"&gt;PMM-13277&lt;/a&gt;: When we try to launch PMM using AWS AMI as mentioned in our docs. However, the AWS webpage works fine, and it logins, but every graph and details are blank with “Server error 502” The same can be seen in the log for Victoria metrics:&lt;/p&gt;
&lt;p&gt;The following error will be seen:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2024-07-27T06:40:22.848Z panic /home/builder/rpm/BUILD/VictoriaMetrics-pmm-6401-v1.93.4/lib/mergeset/part_header.go:88 FATAL: cannot read "/srv/victoriametrics/data/indexdb/17D6772949F4A234/17D6772B9FDF298D/metadata.json": open /srv/victoriametrics/data/indexdb/17D6772949F4A234/17D6772B9FDF298D/metadata.json: no such file or directory
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;panic: FATAL: cannot read "/srv/victoriametrics/data/indexdb/17D6772949F4A234/17D6772B9FDF298D/metadata.json": open /srv/victoriametrics/data/indexdb/17D6772949F4A234/17D6772B9FDF298D/metadata.json: no such file or directory&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.42.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PMM 2.43.0&lt;/p&gt;
&lt;h2 id="percona-xtrabackup"&gt;Percona XtraBackup&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3302" target="_blank" rel="noopener noreferrer"&gt;PXB-3302&lt;/a&gt;: If the number of GTID sets is absolutely large on a MySQL instance, the output “GTID of the last change” in the Xtrabackup log is truncated compared to the full output in xtrabackup_binlog_info and xtrabackup_info. This can be an issue for external tools obtaining the GTID coordinates from the log as it would be impractical to get the coordinates from  xtrabackup_binlog_info or xtrabackup_info on a large, compressed xbstream file.&lt;/p&gt;
&lt;p&gt;Here is a snippet of a backup log:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2024-06-03T10:18:59.678581+08:00 0 [Note] [MY-011825] [Xtrabackup] MySQL binlog position: filename 'mysql-bin.000002', position '10197', GTID of the last change ** REDACTED **,9f18624b-214f-11ef-871f-b445068273a0:1,9f1bf5ae-214f-11ef-871f-b445068273a0:1,9f1f6fac-214f-11ef-871f-b445068273a0:1,9f231076-214f-11ef-871f-b445068273a0:1,9f26d153-214f-11ef-871f-b445068273a0:1,9f2a5fdd-214f-11ef-871f-b445068273a0:1,9f2df6e8-214f-11ef-871f-b445068273a0:1,9f318143-214f-11ef-871f-b445068273a0:1,9f353351-214f-11ef-871f-b445068273a0:1,9f38f96c-214f-11ef-871f-b445068273a0:1,9f3cdc53-214f-11ef-871f-b445068273a0:1,9f40fcc0-214f-11ef-871f-b445068273a0:1,9f44955a-214f-11ef-871f-b445068273a0:1,9f481188-214f-11ef-871f-b445068&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Snippet of xtrabackup_binlog_info:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql-bin.000002 10197 ** REDACTED **,9fdd9048-214f-11ef-871f-b445068273a0:1,9fe138a3-214f-11ef-871f-b445068273a0:1,9fe4c3d8-214f-11ef-871f-b445068273a0:1,9fe82e39-214f-11ef-871f-b445068273a0:1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXB 8.4.0-1, 8.0.35-32&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3283" target="_blank" rel="noopener noreferrer"&gt;PXB-3283&lt;/a&gt;: When xtrabackup takes a backup and exports a tablespace,  xtrabackup gets the wrong table definition from the ibd for tables that have changed the charset-collation in MySQL before backup.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE test.a (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; a datetime DEFAULT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_as_ci;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The collation_id is 8 (latin1_swedish_ci):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; ibd2sdi /var/lib/mysql/test/a.ibd | jq '.[1].object.dd_object.columns[0]' | grep collation_id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "collation_id": 8&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;When MySQL converts the charset of a table, it converts the date and time type columns in the ibd file but not in the data dictionary cache, so the collation in the ibd no longer matches that of the data dictionary.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE test.a CONVERT TO CHARACTER SET utf8mb4 collate utf8mb4_unicode_ci;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The collation_id becomes 224 (utf8mb4_unicode_ci):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; ibd2sdi /var/lib/mysql/test/a.ibd | jq '.[1].object.dd_object.columns[0]' | grep collation_id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "collation_id": 224&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The collation_id of the copied table is 8 (latin1_swedish_ci):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create table xb.a like test.a;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; ibd2sdi /var/lib/mysql/xb/a.ibd | jq '.[1].object.dd_object.columns[0]' | grep collation_id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "collation_id": 8&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;When xtrabackup exports the tablespace, the collation_id in the ibd is 224, and xtrabackup writes that value to the .cfg metadata file.&lt;/p&gt;
&lt;p&gt;When MySQL imports the tablespace, it fails with the error “Column %s precise type mismatch” because the collation_id in MySQL's data dictionary does not match the one recorded by xtrabackup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXB 8.4.0-1, 8.0.35-31&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-2797" target="_blank" rel="noopener noreferrer"&gt;PXB-2797&lt;/a&gt;: When importing a single table (IMPORT TABLESPACE) from a backup made using xtrabackup and the table contains a full-text index, the import process will error out with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-26" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-26"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 1808 (HY000) at line 132: Schema mismatch (Index xxxxxx field xxxxxx is ascending which does not match metadata file which is descending)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.28-20&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXB 8.0.35-31&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3210" target="_blank" rel="noopener noreferrer"&gt;PXB-3210&lt;/a&gt;: PXB fails to build on macOS since 8.0.33-28 due to FIND_PROCPS()&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-27" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-27"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CMake Error at cmake/procps.cmake:29 (MESSAGE):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Cannot find proc/sysinfo.h or libproc2/meminfo.h in . You can pass it to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; CMake with -DPROCPS_INCLUDE_PATH=&lt;path&gt; or install
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; procps-devel/procps-ng-devel/libproc2-dev package
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Call Stack (most recent call first):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; storage/innobase/xtrabackup/src/CMakeLists.txt:24 (FIND_PROCPS)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.33-28, 8.0.34-29, 8.0.35-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXB 8.0.35-31&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3130" target="_blank" rel="noopener noreferrer"&gt;PXB-3130&lt;/a&gt;: Performing upgrade from PS 8.0.30 -&gt; PS 8.0.33 using PXB&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use PXB 8.0.30 on PS 8.0.30&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy to new host&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Prepare using PXB 8.0.33&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start PS 8.0.33&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This results in the following assertion failure:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-28" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-28"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;I0825 22:33:01.738917 05155 ???:1] xtrabackup80-apply-log(stderr) - InnoDB: Assertion failure: log0recv.cc:4353:log.m_files.find(recovered_lsn) != log.m_files.end()&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXB 8.0.35-31&lt;/p&gt;
&lt;h2 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1398" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1398:&lt;/a&gt; Scheduled PXC backup job pod fails to complete the process successfully in a random/sporadic fashion.&lt;/p&gt;
&lt;p&gt;The error is reported as:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-29" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-29"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ EXID_CODE=4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ '[' -f /tmp/backup-is-completed ']'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ log ERROR 'Backup was finished unsuccessfull'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Terminating processProcess completed with error: /usr/bin/run_backup.sh: 4 (Interrupted system call)2024-05-03 09:39:08 [ERROR] Backup was finished unsuccessfull
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ exit 4&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 1.13.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXCO 1.16.0 [Yet to Release]&lt;/p&gt;
&lt;p&gt;Note: Since we don't have steps to reproduce the issue, it is hard to confirm whether the fix works as expected. Please feel free to provide feedback or create a Jira issue if required.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1397" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1397:&lt;/a&gt; The operator`s default configuration makes the cluster unusable if TDE (Transparent data encryption) is used; the entry point of the PXC container configures the parameter binlog_rotate_encryption_master_key_at_startup. As a workaround, binlog_rotate_encryption_master_key_at_startup should be disabled. However, it has security implications.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 1.12.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXCO 1.16.0 [Yet to Release]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-1222" target="_blank" rel="noopener noreferrer"&gt;K8SPXC-1222:&lt;/a&gt; Upgrading Cluster Fails When Dataset Has Large Number Of Tables. When the operator replaces the first pod with one with the new version, it fails to start up and gets stuck in a loop that restarts every 120 seconds.&lt;/p&gt;
&lt;p&gt;The problem appears to stem from this wait loop in pxc-entrypoint.sh:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-30" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-30"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for i in {120..0}; do
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if echo 'SELECT 1' | "${mysql[@]}" &amp;&gt;/dev/null; then
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; break
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fi
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; echo 'MySQL init process in progress...'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sleep 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; done&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 1.11.0, 1.12.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PXCO 1.16.0 [Yet to Release]&lt;/p&gt;
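&lt;p&gt;The hard-coded 120-second limit in that loop is what bites large data dictionaries. One way to sketch a more tolerant version (a hypothetical illustration under assumed names, not the operator's actual fix):&lt;/p&gt;

```shell
# Hypothetical rewrite of the entrypoint wait loop (illustration only):
# make the 120-second limit a parameter so nodes with a very large data
# dictionary get enough time to come up instead of being restarted.
wait_for_mysql() {
  local timeout="${1:-120}"
  local i
  for ((i = timeout; i > 0; i--)); do
    # 'mysql' stands in for the entrypoint's "${mysql[@]}" client call
    if mysql -e 'SELECT 1' 1>/dev/null 2>/dev/null; then
      return 0
    fi
    echo 'MySQL init process in progress...'
    sleep 1
  done
  return 1   # caller decides whether to restart or keep waiting
}
```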
&lt;h2 id="orchestrator"&gt;Orchestrator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/DISTMYSQL-406" target="_blank" rel="noopener noreferrer"&gt;DISTMYSQL-406&lt;/a&gt;: Orchestrator 3.2.6-11 shows the MySQLOrchestratorPassword variable value in the error log and when accessing the web interface.&lt;/p&gt;
&lt;p&gt;E.g.:&lt;/p&gt;
&lt;p&gt;Create a MySQL node and an Orchestrator node, and create the Orchestrator backend user:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-31" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-31"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -e "CREATE USER 'orchestrator_srv'@'%' IDENTIFIED BY 'orc_server_password'; GRANT ALL ON orchestrator.* TO 'orchestrator_srv'@'%';"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Configure Orchestrator to use node0 as its MySQL backend database:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-32" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-32"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;vi /usr/local/orchestrator/orchestrator.conf.json &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add the following lines and remove the SQLite options:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-33" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-33"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "MySQLOrchestratorHost": "node_0_IP",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "MySQLOrchestratorPort": 3306,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "MySQLOrchestratorDatabase": "orchestrator",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "MySQLOrchestratorUser": "orchestrator_srv",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "MySQLOrchestratorPassword": "orc_server_password",&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;On node1, there are several messages showing the backend password:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-34" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-34"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Feb 28 23:26:03 XX-XX-node1 orchestrator[4262]: 2024-02-28 23:26:03 ERROR 2024-02-28 23:26:03 ERROR QueryRowsMap(orchestrator_srv:orc_server_password@tcp(10.124.33.138:3306)/orchestrator?timeout=1s&amp;readTimeout=30s&amp;rejectReadOnly=false&amp;interpolateParams=true) select hostname, token, first_seen_active, last_seen_Active from active_node where anchor = 1: dial tcp 10.124.33.138:3306: connect: connection refused&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
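&lt;p&gt;Until an upgrade to the fixed version, log lines like the one above can be scrubbed before shipping logs elsewhere. A minimal sed sketch (illustrative workaround only, not Orchestrator's actual fix; it assumes the standard Go MySQL DSN shape user:password@tcp(host:port)/db):&lt;/p&gt;

```shell
# Hedged sketch: mask the password field of a Go MySQL DSN in a log line.
# This is a log-scrubbing workaround, not Orchestrator's actual fix.
dsn_line='QueryRowsMap(orchestrator_srv:orc_server_password@tcp(10.124.33.138:3306)/orchestrator)'
echo "$dsn_line" | sed -E 's|([A-Za-z0-9_]+):[^@]+@tcp|\1:***@tcp|'
# prints: QueryRowsMap(orchestrator_srv:***@tcp(10.124.33.138:3306)/orchestrator)
```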
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.36(PS)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 8.4.0(PS)&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://jira.percona.com" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;About Percona:&lt;/p&gt;
&lt;p&gt;As the only provider of distributions for all three of the most popular open source databases—PostgreSQL, MySQL, and MongoDB—Percona provides &lt;a href="https://www.percona.com/services/consulting" target="_blank" rel="noopener noreferrer"&gt;expertise&lt;/a&gt;, &lt;a href="https://www.percona.com/software" target="_blank" rel="noopener noreferrer"&gt;software&lt;/a&gt;, &lt;a href="https://www.percona.com/services/support/mysql-support" target="_blank" rel="noopener noreferrer"&gt;support&lt;/a&gt;, and &lt;a href="https://www.percona.com/services/managed-services" target="_blank" rel="noopener noreferrer"&gt;services&lt;/a&gt; no matter the technology.&lt;/p&gt;
&lt;p&gt;Whether it’s enabling developers or DBAs to realize value faster with tools, advice, and guidance, or making sure applications can scale and handle peak loads, Percona is here to help.&lt;/p&gt;
&lt;p&gt;Percona is committed to being open source and preventing vendor lock-in. Percona contributes all changes to the upstream community for possible inclusion in future product releases.&lt;/p&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>PMM</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <category>Percona</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2024/11/BugReportOctober2024_hu_236f422c0e93c589.jpg"/>
      <media:content url="https://percona.community/blog/2024/11/BugReportOctober2024_hu_ee8f7570141a1ec5.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.43.0 Preview Release</title>
      <link>https://percona.community/blog/2024/09/12/preview-release/</link>
      <guid>https://percona.community/blog/2024/09/12/preview-release/</guid>
      <pubDate>Thu, 12 Sep 2024 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.43.0 Tech Preview Release Hello everyone! Percona Monitoring and Management (PMM) 2.43.0 is now available as a Tech Preview Release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-2430-tech-preview-release"&gt;Percona Monitoring and Management 2.43.0 Tech Preview Release&lt;/h2&gt;
&lt;p&gt;Hello everyone! Percona Monitoring and Management (PMM) 2.43.0 is now available as a Tech Preview Release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;To see the full list of changes, check out the &lt;a href="https://pmm-doc-pr-1271.onrender.com/release-notes/2.43.0.html" target="_blank" rel="noopener noreferrer"&gt;PMM 2.43.0 Tech Preview Release Notes&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker-installation"&gt;PMM server Docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server with Docker instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.43.0-rc&lt;/code&gt;&lt;/p&gt;
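&lt;p&gt;For reference, a minimal way to pull and start the preview server with this tag might look like the following. This is only a sketch: the volume name and port mapping are assumptions, so follow the linked Docker instructions for the exact steps.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;docker pull perconalab/pmm-server:2.43.0-rc
docker volume create pmm-data
docker run -d -p 443:443 -v pmm-data:/srv --name pmm-server perconalab/pmm-server:2.43.0-rc&lt;/code&gt;&lt;/pre&gt;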
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-29.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download AMD64&lt;/a&gt; or &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client-arm/pmm2-client-latest-49.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download ARM64&lt;/a&gt; pmm2-client tarball for 2.43.0.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To install pmm2-client package, enable testing repository via Percona-release:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Install pmm2-client package for your OS via Package Manager.&lt;/li&gt;
&lt;/ol&gt;
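&lt;p&gt;Taken together, steps 2 and 3 might look like this on an RPM-based system (the package-manager command is illustrative; use your own OS’s package manager):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;percona-release enable percona testing
yum install pmm2-client   # on Debian/Ubuntu: apt install pmm2-client&lt;/code&gt;&lt;/pre&gt;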
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server as a VM instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.43.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.43.0.ova file&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server hosted at AWS Marketplace instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-0db618c7da6e202f4&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Ondrej Patocka</author>
      <category>PMM</category>
      <category>Release</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Timeline for Database on Kubernetes</title>
      <link>https://percona.community/blog/2024/07/16/timeline-for-database-on-kubernetes/</link>
      <guid>https://percona.community/blog/2024/07/16/timeline-for-database-on-kubernetes/</guid>
      <pubDate>Tue, 16 Jul 2024 00:00:00 UTC</pubDate>
      <description>The Evolution Since its inception in June 2014, Kubernetes has dramatically transformed container orchestration, revolutionizing the management and scaling of applications. To mark its tenth anniversary, the Data on Kubernetes Community (DoKC) released an infographic showcasing key milestones and community contributions to the evolution of operators for managing stateful applications. This project was made possible by the collaboration of DoKC members Edith Puclla, Sergey Pronin, Robert Hodges, Gabriele Bartolini, Chris Malarky, Mark Kember, Paul Au, and Luciano Stabel.</description>
      <content:encoded>&lt;h2 id="the-evolution"&gt;The Evolution&lt;/h2&gt;
&lt;p&gt;Since its inception in June 2014, &lt;strong&gt;Kubernetes&lt;/strong&gt; has dramatically transformed container orchestration, revolutionizing the management and scaling of applications. To mark its tenth anniversary, the &lt;a href="https://dok.community/" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes Community (DoKC)&lt;/a&gt; released an infographic showcasing key milestones and community contributions to the evolution of operators for managing stateful applications. This project was made possible by the collaboration of DoKC members &lt;strong&gt;Edith Puclla, Sergey Pronin, Robert Hodges, Gabriele Bartolini, Chris Malarky, Mark Kember, Paul Au, and Luciano Stabel&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Explore the infographic to see how Kubernetes has shaped the future of database management on Kubernetes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/07/databases-kubernetes-timeline.png" alt="Databases Kubernetes Timeline" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="adoption-and-impact"&gt;Adoption and Impact&lt;/h2&gt;
&lt;p&gt;The &lt;a href="https://www.cncf.io/" target="_blank" rel="noopener noreferrer"&gt;CNCF&lt;/a&gt; says that 84% of organizations are using or considering Kubernetes, with 70% running stateful applications on it in production. The number of users and containers has grown, showing that more people are contributing, adopting cloud-native technologies, and finding new ways to use Kubernetes to manage stateful applications.&lt;/p&gt;
&lt;h2 id="looking-ahead"&gt;Looking Ahead&lt;/h2&gt;
&lt;p&gt;As we celebrate ten years of Kubernetes, the way databases are integrated keeps improving, thanks to community efforts and new technology. &lt;strong&gt;Percona Everest is an excellent example of this progress&lt;/strong&gt;. It’s more than just a tool for databases; it represents the future of running databases on Kubernetes. It’s open-source and makes running any database on cloud-based Kubernetes clusters easy. If you want to try it, visit our &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;Percona Everest GitHub Repository&lt;/a&gt; and give us a star if you like it. For feedback or comments, join the &lt;a href="https://forums.percona.com/c/percona-everest/81" target="_blank" rel="noopener noreferrer"&gt;Percona Forum&lt;/a&gt; for Percona Everest discussion.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/07/percona-everest.png" alt="Percona Everest" /&gt;&lt;/figure&gt;&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>CNCF</category>
      <category>Percona Everest</category>
      <category>Kubernetes</category>
      <category>DoK</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2024/07/databases-kubernetes-timeline_hu_567dc635c8d5b739.jpg"/>
      <media:content url="https://percona.community/blog/2024/07/databases-kubernetes-timeline_hu_1372998829c52fee.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Joins Community Over Code 2024 in Bratislava, Slovakia</title>
      <link>https://percona.community/blog/2024/06/21/percona-joins-community-over-code-2024-in-bratislava-slovakia/</link>
      <guid>https://percona.community/blog/2024/06/21/percona-joins-community-over-code-2024-in-bratislava-slovakia/</guid>
      <pubDate>Fri, 21 Jun 2024 00:00:00 UTC</pubDate>
      <description>Last week, I participated as a speaker for the first time at Community Over Code 2024.</description>
      <content:encoded>&lt;p&gt;Last week, I participated as a speaker for the first time at Community Over Code 2024.&lt;/p&gt;
&lt;h2 id="community-over-code"&gt;Community Over Code&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://communityovercode.org/" target="_blank" rel="noopener noreferrer"&gt;Community Over Code&lt;/a&gt; is a key principle at Apache, highlighting the importance of having a solid and collaborative community rather than just focusing on the code. While good code is essential, the community’s strength and resilience keep a project going and growing. I love how this is expressed in the “Apache Way.”&lt;/p&gt;
&lt;p&gt;Renaming ApacheCon to “Community Over Code” reflects this idea, emphasizing the central role of community in Apache’s approach. This year, &lt;a href="https://eu.communityovercode.org/" target="_blank" rel="noopener noreferrer"&gt;Community Over Code was in Bratislava, Slovakia.&lt;/a&gt;
My talk was in the Community track, where I spoke about &lt;a href="https://www.outreachy.org/" target="_blank" rel="noopener noreferrer"&gt;Outreachy&lt;/a&gt; Internship, &lt;a href="https://apache.org/" target="_blank" rel="noopener noreferrer"&gt;the Apache Software Foundation&lt;/a&gt;, and &lt;a href="https://airflow.apache.org/" target="_blank" rel="noopener noreferrer"&gt;Apache Airflow&lt;/a&gt;. My goal was to celebrate the success of the Outreachy program, which has surpassed 1,000 internships, and I am proud to have been one of those interns.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/06/coc-talk.png" alt="Community Over Code Talk" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I talked about the community, shared success stories, and provided clear examples, such as &lt;a href="https://www.linkedin.com/in/bowrna/" target="_blank" rel="noopener noreferrer"&gt;Bowrna Prabhakaran&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/ephraimanierobi/" target="_blank" rel="noopener noreferrer"&gt;Ephraim Anierobi&lt;/a&gt;, and shared my personal experience. I also explained how Outreachy, along with my mentors from Apache Airflow, &lt;a href="https://www.linkedin.com/in/jarekpotiuk/" target="_blank" rel="noopener noreferrer"&gt;Jarek Potiuk&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/elad-kalif-811b4887/" target="_blank" rel="noopener noreferrer"&gt;Elad Kalif&lt;/a&gt;, boosted my professional career in open source three years ago.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/coc-community_hu_40334573b85eb2a0.png 480w, https://percona.community/blog/2024/06/coc-community_hu_5799465dd1dcf162.png 768w, https://percona.community/blog/2024/06/coc-community_hu_82a3f21768d271f6.png 1400w"
src="https://percona.community/blog/2024/06/coc-community.png" alt="Community Over Code Slide" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="working-at-percona"&gt;Working at Percona&lt;/h2&gt;
&lt;p&gt;Now I work for Percona, which has been recognized as one of &lt;a href="https://www.inc.com/profile/percona" target="_blank" rel="noopener noreferrer"&gt;Inc. Magazine’s 2024 Best Workplaces&lt;/a&gt;! I love my job and the amazing things we are building at Percona.
&lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; is an open-source software company that fully believes that open source should remain open throughout. This belief inspires me to work at Percona.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/06/coc-percona.png" alt="Work At Percona" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="what-were-focused-on-now"&gt;What We’re Focused on Now&lt;/h2&gt;
&lt;p&gt;If you’re interested in improving database performance on Kubernetes, you will love &lt;a href="https://docs.percona.com/everest/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;. It is our cloud-native solution for efficiently managing and running databases on Kubernetes. With a very user-friendly interface, you can run any database on any cloud provider or on-premises.&lt;/p&gt;
&lt;p&gt;Here are some use cases where you might use Percona Everest.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Seeking ways to make the development of internal platforms easier and faster.&lt;/li&gt;
&lt;li&gt;Looking for affordable alternatives to public Database-as-a-Service (DBaaS) offerings.&lt;/li&gt;
&lt;li&gt;Building multi-cloud or hybrid cloud setups to meet data compliance needs for multi-regional businesses.&lt;/li&gt;
&lt;li&gt;Wishing to leverage Kubernetes for its scalability and stability to run databases efficiently.&lt;/li&gt;
&lt;li&gt;Transitioning from monolithic to microservices architecture to modernize legacy database infrastructure.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/coc-percona-everest_hu_340fb379522d6ef1.png 480w, https://percona.community/blog/2024/06/coc-percona-everest_hu_f443fb3b5db8747d.png 768w, https://percona.community/blog/2024/06/coc-percona-everest_hu_e397a6e56e8bcf48.png 1400w"
src="https://percona.community/blog/2024/06/coc-percona-everest.png" alt="Percona Everest" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Any feedback is welcome on our &lt;a href="https://forums.percona.com/c/percona-everest/81" target="_blank" rel="noopener noreferrer"&gt;Percona Everest forum&lt;/a&gt;, and if you like it, give us a star on GitHub: github.com/percona/everest.&lt;/p&gt;
      <author>Edith Puclla</author>
      <category>Percona Everest</category>
      <category>Kubernetes</category>
      <category>Databases</category>
      <category>Events</category>
      <media:thumbnail url="https://percona.community/blog/2024/06/coc-talk_hu_951765de4549236a.jpg"/>
      <media:content url="https://percona.community/blog/2024/06/coc-talk_hu_4dc8c3844cc16894.jpg" medium="image"/>
    </item>
    <item>
      <title>Let's take a look at Percona Everest 1.0.0 RC</title>
      <link>https://percona.community/blog/2024/06/14/lets-take-a-look-at-percona-everest-1.0.0-rc/</link>
      <guid>https://percona.community/blog/2024/06/14/lets-take-a-look-at-percona-everest-1.0.0-rc/</guid>
      <pubDate>Fri, 14 Jun 2024 00:00:00 UTC</pubDate>
      <description>Hi, the Percona Everest 1.0.0-rc1 release was published on GitHub.</description>
      <content:encoded>&lt;p&gt;Hi, the Percona Everest 1.0.0-rc1 release was published on &lt;a href="https://github.com/percona/everest/releases" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona.community/projects/everest/"&gt;Percona Everest&lt;/a&gt; is the first open source cloud-native platform for provisioning and managing PostgreSQL, MongoDB and MySQL database clusters.&lt;/p&gt;
&lt;p&gt;I want to tell you how to install it so you can try it out.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RC builds aren’t meant for the general public, and we don’t support upgrading from RC to stable versions. This build is only for testing and familiarizing yourself with the features. RC builds are not stable and are often buggy. There will be no upgrade. :)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To get started, you will need a Kubernetes cluster. Right now, &lt;a href="https://docs.percona.com/everest/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt; is in Beta. Don’t use production clusters; use test clusters in the cloud like GKE or local in Minikube, k3d, or Kind.&lt;/p&gt;
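&lt;p&gt;If you’d rather test locally, a throwaway cluster with kind is a quick alternative (assuming kind is installed; the cluster name is arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kind create cluster --name everest-test
kind delete cluster --name everest-test   # clean up when done&lt;/code&gt;&lt;/pre&gt;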
&lt;p&gt;I created a test cluster in GKE with the command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;gcloud container clusters create test-everest-rc --project percona-product --zone us-central1-a --cluster-version 1.27 --machine-type n1-standard-4 --num-nodes=3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Delete it after the test with the command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;gcloud container clusters delete test-everest-rc --zone us-central1-a&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now, we need &lt;a href="https://docs.percona.com/everest/install/installEverestCLI.html" target="_blank" rel="noopener noreferrer"&gt;Everest CLI&lt;/a&gt; for the RC version; &lt;a href="https://github.com/percona/everest/releases" target="_blank" rel="noopener noreferrer"&gt;download&lt;/a&gt; it from GitHub for your operating system.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-github_hu_6ae705806e7101b3.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-github_hu_e59dcb21cd3e3c50.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-github_hu_7738417e3944ae3d.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-github.png" alt="Percona Everest 1.0.0-RC1 GitHub" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I downloaded it, renamed it to &lt;code&gt;everestctl&lt;/code&gt;, and copied it to a folder for experimentation.&lt;/p&gt;
&lt;p&gt;Now, we need to make it executable&lt;/p&gt;
&lt;p&gt;&lt;code&gt;chmod +x ./everestctl&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Let’s check that everestctl works and that we have the correct version.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;./everestctl version&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We can now install Percona Everest.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;./everestctl install --version-metadata-url https://check-dev.percona.com&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Note that I use the &lt;code&gt;--version-metadata-url&lt;/code&gt; parameter with &lt;code&gt;https://check-dev.percona.com&lt;/code&gt;; this is required for RC builds.&lt;/p&gt;
&lt;p&gt;During the installation process, you must set one or more namespaces and databases.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-install_hu_43e3ad7581b7e60c.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-install_hu_71237ed3cca19982.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-install_hu_f3f8e58de434f9bf.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-install.png" alt="Percona Everest 1.0.0-RC1 Install" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Once the installation is complete, you’ll encounter the first significant change: the new user authentication feature. You will be offered two commands.&lt;/p&gt;
&lt;p&gt;Command to retrieve the admin user password that was generated automatically during installation:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;./everestctl accounts initial-admin-password&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Command to set a new password:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;./everestctl accounts set-password --username admin&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-admin-pass_hu_281d9fa40252f230.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-admin-pass_hu_c239019808a71a10.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-admin-pass_hu_a97f68e66de27083.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-admin-pass.png" alt="Percona Everest 1.0.0-RC1 Admin Password" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now that we know the admin user password, we can open Percona Everest in a browser. Run the following command to use kubectl port forwarding to connect to Percona Everest without exposing the service:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubectl port-forward svc/everest 8080:8080 -n everest-system&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;More information in &lt;a href="https://docs.percona.com/everest/install/installEverest.html" target="_blank" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-port_hu_2afb08041fc4dd1f.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-port_hu_27c8e86961a74ed5.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-port_hu_eb8c9c3656902b45.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-port.png" alt="Percona Everest 1.0.0-RC1 Port Forward" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now you can open localhost:8080 in your browser and log in with the &lt;code&gt;admin&lt;/code&gt; username and the password you retrieved.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-login_hu_a4abed4e8073e91c.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-login_hu_dcbee8c9f8d91988.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-login_hu_4dcb2cb7c2e81ee9.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-login.png" alt="Percona Everest 1.0.0-RC1 User Authentication" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Create a PostgreSQL cluster to test how it works.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-db_hu_e06f9c0209f0d0e2.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-db_hu_68e03db4623dd0cf.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-db_hu_fc772e769205e9c2.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-db.png" alt="Percona Everest 1.0.0-RC1 Create PostgreSQL" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/percona-everest-1-rc-postgres_hu_ff54650567559751.png 480w, https://percona.community/blog/2024/06/percona-everest-1-rc-postgres_hu_945cf22260bc62cb.png 768w, https://percona.community/blog/2024/06/percona-everest-1-rc-postgres_hu_5cb256eb92f073f5.png 1400w"
src="https://percona.community/blog/2024/06/percona-everest-1-rc-postgres.png" alt="Percona Everest 1.0.0-RC1 PostgreSQL" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;You can also create other databases, set up backups, and configure monitoring with &lt;a href="https://www.percona.com/open-source-database-monitoring-tools-for-mysql-mongodb-postgresql-more-percona" target="_blank" rel="noopener noreferrer"&gt;PMM&lt;/a&gt;. By the way, PMM has some cool &lt;a href="https://www.percona.com/blog/postgresql-monitoring-with-percona-monitoring-and-management-a-redefined-summary/" target="_blank" rel="noopener noreferrer"&gt;new dashboards&lt;/a&gt; in the Experimental section.&lt;/p&gt;
&lt;p&gt;Your feedback would be greatly appreciated. Create a new topic on &lt;a href="https://forums.percona.com/c/percona-everest/81" target="_blank" rel="noopener noreferrer"&gt;the forum&lt;/a&gt; or issue on &lt;a href="https://github.com/percona/everest/issues" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Don’t forget to delete the test cluster to save your budget.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Thank you very much.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>Percona Everest</category>
      <category>Opensource</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <category>MongoDB</category>
      <media:thumbnail url="https://percona.community/blog/2024/06/percona-everest-1-rc-cover_hu_269c13c75db4af18.jpg"/>
      <media:content url="https://percona.community/blog/2024/06/percona-everest-1-rc-cover_hu_26541ea0da955d49.jpg" medium="image"/>
    </item>
    <item>
      <title>Take a Clone it will last longer</title>
      <link>https://percona.community/blog/2024/06/02/take-a-clone-it-will-last-longer/</link>
      <guid>https://percona.community/blog/2024/06/02/take-a-clone-it-will-last-longer/</guid>
      <pubDate>Sun, 02 Jun 2024 00:00:00 UTC</pubDate>
      <description>So cloning is a great subject. I mean, we clone sheep, we can clone human organs, and in time we might be able to clone humans, but that’s a topic for scientists and philosophers.</description>
      <content:encoded>&lt;p&gt;So cloning is a great subject. I mean, we clone sheep, we can clone human organs, and in time we might be able to clone humans, but that’s a topic for scientists and philosophers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is MySQL Replication:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The MySQL clone plugin can be used to replicate data from a MySQL server to another MySQL server, and it supports replication. The cloning process creates a physical snapshot of the data, including tables, schemas, tablespaces, and data dictionary metadata. It tracks replication coordinates from the source server and transfers them to the replica, which allows replication to begin at a consistent position in the replication stream. This data includes the binary log position (filename, offset) and the gtid_executed GTID set. The replication metadata repositories are also copied during the cloning operation.&lt;/p&gt;
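&lt;p&gt;As a side note, once a clone finishes, you can inspect the transferred coordinates described above on the recipient via performance_schema (a sketch; run it on the cloned server):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;clone &gt; SELECT STATE, SOURCE, BINLOG_FILE, BINLOG_POSITION, GTID_EXECUTED FROM performance_schema.clone_status;&lt;/code&gt;&lt;/pre&gt;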
&lt;p&gt;The clone plugin was released with MySQL version 8.0.17. It’s quick and very easy to set up. You can use it for so many different solutions. I’ve listed some common ones below, but I know that there are many more use cases.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a new replica.&lt;/li&gt;
&lt;li&gt;Recover a replica which is out of sync with the primary.&lt;/li&gt;
&lt;li&gt;Quickly deploy MySQL servers with data set already in place.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In this article I will cover the basics of setting up and running a clone from an existing MySQL server. I will be using MySQL version 8.0.36.&lt;/p&gt;
&lt;h2 id="prepare-the-source-server"&gt;Prepare the Source Server&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Install the clone plugin.&lt;/li&gt;
&lt;li&gt;Create a clone user.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Install Clone plugin&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source &gt; INSTALL PLUGIN clone SONAME 'mysql_clone.so';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify the clone plugin was installed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source &gt; show plugins;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/clone-plugin-img1_hu_92b7fba6627fff9c.png 480w, https://percona.community/blog/2024/06/clone-plugin-img1_hu_c39e39890752fd7d.png 768w, https://percona.community/blog/2024/06/clone-plugin-img1_hu_fb34b4287d629b8d.png 1400w"
src="https://percona.community/blog/2024/06/clone-plugin-img1.png" alt="clone image 1" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Create Clone User&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now we need to create a user to run the clone process. I highly suggest you create a dedicated user for the cloning; please don’t use an ID with full admin rights. Create a user with the least amount of privileges needed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source &gt; CREATE USER 'clone_user'@'%' IDENTIFIED BY 'S3k3rtPassWd';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source &gt; GRANT BACKUP_ADMIN ON *.* TO 'clone_user'@'%';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify your new clone user.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;source &gt; show grants for clone_user;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/06/clone-plugin-img2.png" alt="clone image 2" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Preparation on the source server is complete. Let’s move on to the clone server.&lt;/p&gt;
&lt;h2 id="prepare-the-clone-server"&gt;Prepare the Clone Server&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;We need to start by installing the clone plugin.&lt;/li&gt;
&lt;li&gt;Then we need to define the source server that the clone will be based on.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Install plugin&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clone &gt; INSTALL PLUGIN clone SONAME 'mysql_clone.so';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify the clone plugin was installed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clone &gt; show plugins;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/06/clone-plugin-img1_hu_92b7fba6627fff9c.png 480w, https://percona.community/blog/2024/06/clone-plugin-img1_hu_c39e39890752fd7d.png 768w, https://percona.community/blog/2024/06/clone-plugin-img1_hu_fb34b4287d629b8d.png 1400w"
src="https://percona.community/blog/2024/06/clone-plugin-img1.png" alt="clone image 1" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Define Source Server&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can use the source host name or host IP address.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clone &gt; SET GLOBAL clone_valid_donor_list='SOURCE_HOSTNAME:3306';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="start-and-monitor-the-cloning-process"&gt;Start and monitor the cloning process&lt;/h2&gt;
&lt;p&gt;We are now ready to kick off the cloning.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clone &gt; CLONE INSTANCE FROM clone_user@SOURCE_HOSTNAME:3306 IDENTIFIED BY 'S3k3rtPassWd';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Cloning should start now. If you have any issues, check your log files and verify the steps above. Now that the cloning is running, we can monitor its progress using this command.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clone &gt; SELECT * FROM performance_schema.clone_progress\G&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
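The progress table reports one row per stage (DROP DATA, FILE COPY, PAGE COPY, REDO COPY, FILE SYNC, RESTART, RECOVERY), each with a state. As a rough illustration, a hypothetical helper like the one below can condense (stage, state) pairs already fetched from performance_schema.clone_progress into a one-line status:

```python
# Hypothetical helper: summarize (stage, state) pairs fetched from
# performance_schema.clone_progress into a one-line status.
def summarize_clone_progress(rows):
    done = [stage for stage, state in rows if state == "Completed"]
    active = [stage for stage, state in rows if state == "In Progress"]
    if active:
        return f"completed: {', '.join(done)}; working on: {', '.join(active)}"
    if len(done) == len(rows):
        return "clone finished"
    return f"completed: {', '.join(done)}"

# Made-up rows mirroring the kind of output shown in the screenshot:
sample = [("DROP DATA", "Completed"),
          ("FILE COPY", "In Progress"),
          ("PAGE COPY", "Not Started")]
print(summarize_clone_progress(sample))
# -> completed: DROP DATA; working on: FILE COPY
```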
&lt;p&gt;As you can see in the output below, the cloning has completed DROP DATA and is now working on FILE COPY.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/06/clone-plugin-img3.png" alt="clone image 3" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="observations"&gt;Observations&lt;/h2&gt;
&lt;p&gt;Just recently I worked with a customer who moved a 5.6TB database from a Galera Cluster to standard MySQL replication. The data was moved from the source to replicas in two different locations. Timings are detailed below.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Source to Replica in the same location: 5.6TB in approximately 50 minutes.
&lt;ul&gt;
&lt;li&gt;112GB per minute.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Source to Replica in different geographic locations: 5.6TB in approximately 1 hour and 45 minutes.
&lt;ul&gt;
&lt;li&gt;53.33GB per minute.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
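The per-minute figures follow directly from the totals; a quick arithmetic check (treating 5.6 TB as 5600 GB for simplicity):

```python
# Sanity-check the throughput figures above (5.6 TB treated as 5600 GB).
def gb_per_minute(total_gb, minutes):
    return total_gb / minutes

print(gb_per_minute(5600, 50))             # same location: 112.0 GB/min
print(round(gb_per_minute(5600, 105), 2))  # cross-location (105 min): 53.33 GB/min
```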
&lt;p&gt;These results demonstrate the MySQL Clone Plugin’s robust performance and efficiency in managing large data transfers, making it an excellent tool for environments requiring quick and reliable data replication.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;The MySQL Clone Plugin offers several benefits, including:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Fast Data Copying:
&lt;ul&gt;
&lt;li&gt;Enables rapid cloning of MySQL instances, facilitating quick data replication and environment setup.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Consistent Data State:
&lt;ul&gt;
&lt;li&gt;Ensures data consistency during cloning, avoiding issues that can arise from manual copying or inconsistent states.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Reduced Downtime:
&lt;ul&gt;
&lt;li&gt;Minimizes downtime during cloning operations, crucial for maintaining service availability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Ease of Use:
&lt;ul&gt;
&lt;li&gt;Simplifies the cloning process through straightforward commands, reducing the complexity for database administrators.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Automated Cloning Process:
&lt;ul&gt;
&lt;li&gt;Automates many steps involved in the cloning process, reducing the potential for human error and increasing efficiency.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="reference"&gt;Reference&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/clone-plugin.html" target="_blank" rel="noopener noreferrer"&gt;The Clone Plugin&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.freepik.com/free-photo/dna-microscopic-view_854596.htm#fromView=search&amp;page=1&amp;position=1&amp;uuid=b58a4350-e1ba-44f8-9c0a-0c4498e84ac5" target="_blank" rel="noopener noreferrer"&gt;Image by kjpargeter on Freepik&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Opensource</category>
      <category>Replication</category>
      <category>MySQL</category>
      <category>Community</category>
      <media:thumbnail url="https://percona.community/blog/2024/06/dna-microscopic-view_hu_67a13e60dded1a02.jpg"/>
      <media:content url="https://percona.community/blog/2024/06/dna-microscopic-view_hu_fe8dd06a71438439.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup May 15, 2024</title>
      <link>https://percona.community/blog/2024/05/15/release-roundup-may-15-2024/</link>
      <guid>https://percona.community/blog/2024/05/15/release-roundup-may-15-2024/</guid>
      <pubDate>Wed, 15 May 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates April 30 - May 15, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates April 30 - May 15, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes releases and updates that have been released since April 29, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-6015"&gt;Percona Distribution for MongoDB 6.0.15&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/6.0/release-notes-v6.0.15.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 6.0.15&lt;/a&gt; was released on April 30, 2024. It is a freely available MongoDB database alternative that gives you a single solution that combines enterprise components from the open source community, designed and tested to work together. Please see the release notes for a full list of improvements and bug fixes.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 6.0.15&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-6015-12"&gt;Percona Server for MongoDB 6.0.15-12&lt;/h2&gt;
&lt;p&gt;On April 30, 2024, we released &lt;a href="https://docs.percona.com/percona-server-for-mongodb/6.0/release_notes/6.0.15-12.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 6.0.15-12&lt;/a&gt;. It is an enhanced, source-available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition 6.0.15. It is based on MongoDB 6.0.15 Community Edition and supports the upstream protocols and drivers. Please see the release notes for a full list of improvements and bug fixes.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 6.0.15-12&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-xtradb-cluster-5744-31652"&gt;Percona XtraDB Cluster 5.7.44-31.65.2&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-xtradb-cluster/5.7/release-notes/5.7.44-31.65.2.html" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster 5.7.44-31.65.2&lt;/a&gt; was released on May 2, 2024. This release is part of MySQL 5.7 Post-EOL Support from Percona, and the fixes are available to &lt;a href="https://www.percona.com/post-mysql-5-7-eol-support" target="_blank" rel="noopener noreferrer"&gt;MySQL 5.7 Post-EOL Support from Percona customers&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Opensource</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>XtraDB</category>
      <category>Releases</category>
      <category>Percona</category>
      <media:thumbnail url="https://percona.community/blog/2024/05/Roundup-May-15_hu_17a4b7960167bb8d.jpg"/>
      <media:content url="https://percona.community/blog/2024/05/Roundup-May-15_hu_238202edaac71d25.jpg" medium="image"/>
    </item>
    <item>
      <title>How to Provision a MongoDB Cluster in Kubernetes with Percona Everest Summary</title>
      <link>https://percona.community/blog/2024/05/02/how-to-provision-a-mongodb-cluster-in-kubernetes-with-percona-everest-summary/</link>
      <guid>https://percona.community/blog/2024/05/02/how-to-provision-a-mongodb-cluster-in-kubernetes-with-percona-everest-summary/</guid>
      <pubDate>Thu, 02 May 2024 00:00:00 UTC</pubDate>
      <description>Kubernetes continues evolving, and the complexity of deploying and managing databases within the ecosystem is a topic of considerable discussion and importance these days. This article summarizes a detailed discussion between Piotr Szczepaniak and Diogo Recharte, who offer insights and live demonstrations to simplify database operations on Kubernetes with a new technology for cloud-native applications: Percona Everest. If you want to watch the full video, check out How to Provision a MongoDB Cluster in Kubernetes Webinar.</description>
      <content:encoded>&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; continues evolving, and the complexity of deploying and managing databases within the ecosystem is a topic of considerable discussion and importance these days. This article summarizes a detailed discussion between &lt;a href="https://www.linkedin.com/in/petersgd/" target="_blank" rel="noopener noreferrer"&gt;Piotr Szczepaniak&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/diogo-recharte/" target="_blank" rel="noopener noreferrer"&gt;Diogo Recharte&lt;/a&gt;, who offer insights and live demonstrations to simplify database operations on Kubernetes with a new technology for cloud-native applications: Percona Everest. If you want to watch the full video, check out &lt;a href="https://www.youtube.com/live/ITeM7Pdp4oc?si=XAeL_4myDdhyq38h" target="_blank" rel="noopener noreferrer"&gt;How to Provision a MongoDB Cluster in Kubernetes Webinar&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/05/peterdiogo.png" alt="Percona Everest Webinar" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Peter mentions that, initially, people were doubtful about using virtual machines for databases, just as they were skeptical about Kubernetes. However, the topic brings together many people who run databases on containers to share their use cases and spark new discussions at events like &lt;a href="https://www.youtube.com/playlist?list=PLHgdNuGxrJt1eqQeSHJ4J-RydHO6-LTeW" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes Day at KubeCon&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The introduction of &lt;strong&gt;StatefulSets&lt;/strong&gt; and &lt;strong&gt;Persistent Volumes&lt;/strong&gt; has altered the perception of Kubernetes from being purely ephemeral to being capable of handling persistent data. This change is important for database applications that require data retention over time.&lt;/p&gt;
&lt;p&gt;The Kubernetes ecosystem is rapidly expanding. This growth is thanks to its open-source nature and the continuous addition of new functionalities, such as support for specialized hardware like GPUs, which are crucial for AI and machine learning applications.&lt;/p&gt;
&lt;p&gt;Peter also mentioned that its complexity is the main barrier to Kubernetes adoption for databases. Organizations often need help with the layer added by Kubernetes on top of database management. Also, failure in initial attempts to integrate Kubernetes can discourage organizations from further attempts, primarily due to a lack of internal expertise.&lt;/p&gt;
&lt;h4 id="benefits-of-database-as-a-service-dbaas"&gt;Benefits of Database as a Service (DBaaS)&lt;/h4&gt;
&lt;p&gt;DBaaS significantly reduces the time required for database provisioning, which is particularly useful in organizations needing rapid deployment. Public and private DBaaS solutions offer scalability, which is crucial for handling varying workloads and organizational growth without compromising performance.&lt;/p&gt;
&lt;h4 id="private-vs-public-dbaas"&gt;Private vs. Public DBaaS&lt;/h4&gt;
&lt;p&gt;Private DBaaS offers more extensive customization options and control over databases, which is essential for companies with specific needs that public solutions cannot meet.&lt;/p&gt;
&lt;p&gt;Data security and compliance with regulations are more manageable in a private DBaaS because it operates within the company’s internal infrastructure.&lt;/p&gt;
&lt;h4 id="demo-to-deploying-mongodb-on-kubernetes"&gt;Demo  to Deploying MongoDB on Kubernetes&lt;/h4&gt;
&lt;p&gt;Diogo presented a demo of deploying a MongoDB database using Percona’s Everest platform on Kubernetes, where he showed how to handle daily operations and disaster recovery scenarios efficiently. Watch the &lt;a href="https://youtu.be/ITeM7Pdp4oc?t=1039" target="_blank" rel="noopener noreferrer"&gt;Percona Everest Demo on YouTube&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/05/percona-everest-mongodb.png" alt="Percona Everest Draw" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The session explained how Kubernetes operators and custom resources help manage databases more easily. They do this by simplifying complex processes and automating regular tasks.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/05/everest-gui_hu_b50d53946abc8f4e.png 480w, https://percona.community/blog/2024/05/everest-gui_hu_84f30b9b15bb341f.png 768w, https://percona.community/blog/2024/05/everest-gui_hu_7b6b85bd53e63366.png 1400w"
src="https://percona.community/blog/2024/05/everest-gui.png" alt="Percona Everest GUI" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Some questions that users asked in this presentation are:&lt;/p&gt;
&lt;h3 id="how-do-we-handle-the-pv-when-the-pods-go-down"&gt;How do we handle the PV when the Pods go down?&lt;/h3&gt;
&lt;p&gt;The PV will remain in place; after the Pod goes down, the replacement Pod will attach to the existing PVC. This is standard behavior for a StatefulSet.&lt;/p&gt;
&lt;h3 id="what-happens-if-the-node-in-kubernetes-goes-down"&gt;What happens if the node in Kubernetes goes down?&lt;/h3&gt;
&lt;p&gt;It depends on the storage layer that you have configured in your cluster. If the storage class you are using is tied to that node, then placing it on a new node will provision a new one, and some reconciliation will occur within the database itself.&lt;/p&gt;
&lt;h3 id="what-is-the-current-state-of-percona-everest"&gt;What is the current state of Percona Everest?&lt;/h3&gt;
&lt;p&gt;Percona Everest is currently in a Beta stage, and Percona aims to release a GA version. The project is fully open source, and anyone can join our project on GitHub. We appreciate feedback from the community.&lt;/p&gt;
&lt;p&gt;Do you want to send us feedback or contribute to this project? It is completely open source; you can visit &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;Percona Everest on GitHub&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Percona Everest</category>
      <category>MongoDB</category>
      <category>Kubernetes</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2024/05/percona-everest-mongodb_hu_6b827eddfe206ace.jpg"/>
      <media:content url="https://percona.community/blog/2024/05/percona-everest-mongodb_hu_4dc3ff5621cad4a0.jpg" medium="image"/>
    </item>
    <item>
      <title>Using ProxySQL Query Mirroring to test query performance on a new cluster</title>
      <link>https://percona.community/blog/2024/05/01/using-proxysql-query-mirroring-to-test-query-peromance-on-a-new-cluster/</link>
      <guid>https://percona.community/blog/2024/05/01/using-proxysql-query-mirroring-to-test-query-peromance-on-a-new-cluster/</guid>
      <pubDate>Wed, 01 May 2024 00:00:00 UTC</pubDate>
      <description>ProxySQL is an SQL-aware proxy that gives DBAs fine-grained control over clients’ access to the MySQL cluster. A key part of our DBA team’s process in testing and preparing for major MySQL version upgrades is comparing query plans using ProxySQL query mirroring. This feature allows us to mirror queries to another cluster/host by configuring query rules. What makes mirroring particularly useful is the ability to selectively mirror queries based on the query digest or client user. Results from mirrored queries are not returned to the client; they are sent to /dev/null.</description>
      <content:encoded>&lt;p&gt;ProxySQL is an SQL-aware proxy that gives DBAs fine-grained control over clients’ access to the MySQL cluster. A key part of our DBA team’s process in testing and preparing for major MySQL version upgrades is comparing query plans using &lt;a href="https://proxysql.com/documentation/mirroring/" target="_blank" rel="noopener noreferrer"&gt;ProxySQL query mirroring&lt;/a&gt;. This feature allows us to mirror queries to another cluster/host by configuring query rules. What makes mirroring particularly useful is the ability to selectively mirror queries based on the query digest or client user. Results from mirrored queries are not returned to the client; they are sent to /dev/null.&lt;/p&gt;
&lt;p&gt;Before configuring ProxySQL for query mirroring, ensure that the clients whose queries you want to mirror are able to connect to both the current and the new cluster. You should also ensure that the ProxySQL monitor can connect to the new cluster; otherwise, ProxySQL will mark the new hosts as offline, and the queries will not be mirrored there.&lt;/p&gt;
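One way to confirm monitor connectivity is to check ProxySQL's monitor log tables; a sketch (the hostname is a placeholder for one of your new nodes):

```sql
-- Recent monitor connection attempts to a new host; a non-NULL
-- connect_error means the monitor cannot reach it.
SELECT hostname, port, connect_error
FROM monitor.mysql_server_connect_log
WHERE hostname = 'NEW_HOST'
ORDER BY time_start_us DESC
LIMIT 5;
```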
&lt;h2 id="to-set-up-query-mirroring-in-proxysql"&gt;To set up query mirroring in ProxySQL:&lt;/h2&gt;
&lt;p&gt;In order to set up query mirroring, you need to add the new hosts into the &lt;code&gt;mysql_servers&lt;/code&gt; table in ProxySQL. This is how the current &lt;code&gt;mysql_servers&lt;/code&gt; table looks before we add the new host that we want to mirror the queries to:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; SELECT hostgroup_id, hostname FROM mysql_servers;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| hostgroup_id | hostname |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 10 | 10.12.0.123 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 20 | 10.12.0.123 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 20 | 10.16.0.456 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 20 | 10.16.0.789 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;4 rows in set (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It is important to choose a &lt;code&gt;hostgroup_id&lt;/code&gt; that is not yet in use. You can double-check the currently configured hostgroups in the relevant hostgroups table, as you do not want to inadvertently add the mirror hosts into production traffic!&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; select * from mysql_replication_hostgroups;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------+------------------+------------+----------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| writer_hostgroup | reader_hostgroup | check_type | comment |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------+------------------+------------+----------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 10 | 20 | read_only | Async Cluster |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------+------------------+------------+----------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Please note, in our example, we are using async replication, so we check the &lt;code&gt;mysql_replication_hostgroups&lt;/code&gt; table, but the hostgroups table you need to check depends on the cluster architecture you are using:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Async replica clusters: check the &lt;code&gt;mysql_replication_hostgroups&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;Galera clusters: check the &lt;code&gt;mysql_galera_hostgroups&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;Group replication clusters: check the &lt;code&gt;mysql_group_replication_hostgroups&lt;/code&gt; table.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We are using hostgroup 10 for the writer hostgroup, and hostgroup 20 for the reader. For this example, we will choose 100 for the mirror &lt;code&gt;hostgroup_id&lt;/code&gt;. Once you have decided on an unused hostgroup ID, add the new clusters’ nodes to the &lt;code&gt;mysql_servers&lt;/code&gt; table in ProxySQL.&lt;/p&gt;
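Picking an unused ID can be mechanized; a small sketch (a hypothetical helper, fed the hostgroup IDs already present in your `mysql_servers` and hostgroups tables):

```python
# Hypothetical helper: given hostgroup ids already in use, return the
# first free id at or above a chosen starting point (100 by default).
def first_unused_hostgroup(used_ids, start=100):
    used = set(used_ids)
    hg = start
    while hg in used:
        hg += 1
    return hg

# IDs from the example above: writer hostgroup 10, reader hostgroup 20.
print(first_unused_hostgroup([10, 20]))  # -> 100
```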
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; INSERT INTO mysql_servers(host, hostgroup, comment) VALUES ("10.12.0.987", 100, "mirror_cluster");
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;LOAD MYSQL SERVERS TO RUN;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SAVE MYSQL SERVERS TO DISK;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;mysql_servers&lt;/code&gt; table will now include the new host:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; SELECT hostgroup_id, hostname FROM mysql_servers;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| hostgroup_id | hostname |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 10 | 10.12.0.123 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 20 | 10.12.0.123 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 20 | 10.16.0.456 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 20 | 10.16.0.789 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 100 | 10.12.0.987 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------+--------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;4 rows in set (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In order to enable query mirroring, you need to update the &lt;code&gt;mirror_hostgroup&lt;/code&gt; column in the &lt;code&gt;mysql_query_rules&lt;/code&gt; table. When mirroring is not enabled, the value of the &lt;code&gt;mirror_hostgroup&lt;/code&gt; column is &lt;code&gt;NULL&lt;/code&gt;.
Our query rules before enabling query mirroring are defined as:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; select rule_id, username, match_digest, destination_hostgroup, mirror_hostgroup from mysql_query_rules;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------+------------------------+---------------------+-----------------------+------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| rule_id | username               | match_digest        | destination_hostgroup | mirror_hostgroup |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------+------------------------+---------------------+-----------------------+------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1       | myApplicationUser      | ^SELECT.*FOR UPDATE | 10                    | NULL             |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2       | myApplicationUser      | ^SELECT             | 20                    | NULL             |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------+------------------------+---------------------+-----------------------+------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To enable mirroring, we just need to update the &lt;code&gt;mirror_hostgroup&lt;/code&gt;. For this example, we will mirror all the &lt;code&gt;SELECT&lt;/code&gt; queries made by &lt;code&gt;myApplicationUser&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE mysql_query_rules SET mirror_hostgroup = 100 where rule_id=2;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;LOAD mysql query rules TO RUN;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SAVE mysql query rules TO DISK;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The rules should now be updated:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; select rule_id, username, match_digest, destination_hostgroup, mirror_hostgroup from mysql_query_rules;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------+-----------------------+---------------------+-----------------------+------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| rule_id | username              | match_digest        | destination_hostgroup | mirror_hostgroup |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------+-----------------------+---------------------+-----------------------+------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1       | myApplicationUser     | ^SELECT.*FOR UPDATE | 10                    | NULL             |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2       | myApplicationUser     | ^SELECT             | 20                    | 100              |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------+-----------------------+---------------------+-----------------------+------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Incoming queries that match the query rule will now be mirrored to the new cluster. In our example, that is every query from &lt;code&gt;myApplicationUser&lt;/code&gt; matching the regular expression ‘^SELECT’, excluding queries matching ‘^SELECT.*FOR UPDATE’, which are caught by the earlier rule. You can verify this by checking the MySQL processlist on the new cluster.&lt;/p&gt;
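&lt;p&gt;As a quick check, you can watch the traffic from either side. A minimal sketch, assuming the mirror cluster is in hostgroup 100 as in this example (the &lt;code&gt;stats_mysql_processlist&lt;/code&gt; column names follow the ProxySQL documentation):&lt;/p&gt;

```sql
-- On a node of the mirror cluster:
SHOW FULL PROCESSLIST;

-- Or, from the ProxySQL admin interface, list active sessions per hostgroup:
SELECT hostgroup, srv_host, command, time_ms, info
FROM stats_mysql_processlist
WHERE hostgroup = 100;
```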
&lt;p&gt;The &lt;code&gt;stats_mysql_query_digest&lt;/code&gt; table on ProxySQL holds statistics for the queries that are being processed by ProxySQL. To use the &lt;code&gt;stats_mysql_query_digest&lt;/code&gt; table, the global variables &lt;code&gt;mysql-commands_stats&lt;/code&gt; and &lt;code&gt;mysql-query_digests&lt;/code&gt; must be set to true, which is the default.&lt;/p&gt;
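&lt;p&gt;If these variables have been turned off, a sketch of re-enabling them from the ProxySQL admin interface, using the standard load/save workflow:&lt;/p&gt;

```sql
-- Check the current values (both default to 'true')
SELECT variable_name, variable_value FROM global_variables
WHERE variable_name IN ('mysql-commands_stats', 'mysql-query_digests');

-- Re-enable if needed, then apply at runtime and persist to disk
UPDATE global_variables SET variable_value = 'true'
WHERE variable_name IN ('mysql-commands_stats', 'mysql-query_digests');
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```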
&lt;h2 id="comparing-query-performance-between-two-clusters"&gt;Comparing query performance between two clusters&lt;/h2&gt;
&lt;p&gt;Query the &lt;code&gt;stats_mysql_query_digest&lt;/code&gt; table to compare the performance per query between the current and the new cluster:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MySQL&gt; select
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (b.count_star+a.count_star)/2 as count,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; cast(round(((b.sum_time + 0.0)/(b.count_star + 0.0))/((a.sum_time + 0.0)/(a.count_star + 0.0)),2)*100 as int) as percent,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; cast(round(((b.sum_time + 0.0)/(b.count_star + 0.0))/((a.sum_time + 0.0)/(a.count_star + 0.0)),2)*100 as int)*(b.count_star+a.count_star)/2 as load ,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; substr(a.digest_text,1,150)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;from
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; stats_mysql_query_digest a
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;inner join
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; stats_mysql_query_digest b on
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; a.digest = b.digest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;where
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; a.hostgroup = 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; and b.hostgroup = 100
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;order by
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; percent ASC;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this example, the current production cluster is hostgroup 10, and the new mirror cluster was assigned hostgroup 100. Queries with a percentage above 100 perform slower on the new cluster and may be worth investigating, while queries with a percentage below 100 are faster on the new cluster. To investigate a query, compare its EXPLAIN plan on the current and the new cluster. We use PMM Query Analytics to compare query metrics and explain plans across the two clusters.&lt;/p&gt;
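&lt;p&gt;As a variation on the comparison query, you can surface only the likely regressions. A sketch, assuming the same hostgroups (10 for the current cluster, 100 for the mirror); the 20% threshold is an arbitrary example:&lt;/p&gt;

```sql
-- Digests that are at least 20% slower on the mirror cluster, worst first
-- (the filter is cross-multiplied to avoid dividing in the WHERE clause)
SELECT
  CAST(ROUND(((b.sum_time + 0.0) / (b.count_star + 0.0)) /
             ((a.sum_time + 0.0) / (a.count_star + 0.0)), 2) * 100 AS INT) AS percent,
  substr(a.digest_text, 1, 150) AS query
FROM stats_mysql_query_digest a
INNER JOIN stats_mysql_query_digest b ON a.digest = b.digest
WHERE a.hostgroup = 10
  AND b.hostgroup = 100
  AND b.sum_time * a.count_star > 1.2 * a.sum_time * b.count_star
ORDER BY percent DESC
LIMIT 10;
```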
&lt;p&gt;It is worth noting that you should allow enough time for the MySQL buffer pool to warm up before checking the &lt;code&gt;stats_mysql_query_digest&lt;/code&gt; table. Otherwise, query times on the new cluster can be skewed, as the active dataset may not yet be in memory (whereas on the current cluster it is). Also keep in mind that if you are mirroring only a subset of queries, the load on the new cluster will differ from the current cluster, which can make queries there appear significantly faster. Checking whether a query&amp;rsquo;s execution plan has changed is therefore more important than looking at overall load.&lt;/p&gt;
&lt;p&gt;To conclude, using query mirroring to test queries on a new system before migrating allows you to compare latency and query plans per normalised query, and to proactively detect any necessary alterations before switching live traffic to the new cluster.&lt;/p&gt;
      <author>Isobel Smith</author>
      <category>ProxySQL</category>
      <category>Upgrades</category>
      <media:thumbnail url="https://percona.community/blog/2024/04/proxysql-query-mirroring_hu_ad2dbd1d1ccccd65.jpg"/>
      <media:content url="https://percona.community/blog/2024/04/proxysql-query-mirroring_hu_b4ee1838b6b620a9.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup April 30, 2024</title>
      <link>https://percona.community/blog/2024/04/30/release-roundup-april-30-2024/</link>
      <guid>https://percona.community/blog/2024/04/30/release-roundup-april-30-2024/</guid>
      <pubDate>Tue, 30 Apr 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates April 17 - April 30, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates April 17 - April 30, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes releases and updates that have been released since April 15, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mysql-830-1-ps-based-variant"&gt;Percona Distribution for MySQL 8.3.0-1 (PS-based variant)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mysql/innovation-release/release-notes-ps-8.3.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MySQL 8.3.0-1 (PS-based variant)&lt;/a&gt; was released on April 16, 2024. This release is based on Percona Server for MySQL 8.3.0-1 and merges the MySQL 8.3 code base. It introduces the following changes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Percona updates the Binary Log UDFs to make them compatible with new tagged GTIDs (Global Transaction Identifiers).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9044" target="_blank" rel="noopener noreferrer"&gt;PS-9044&lt;/a&gt;: Adds the following variables to MyRocks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_block_cache_numshardbits" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_block_cache_numshardbits&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_check_iterate_bounds" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_check_iterate_bounds&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_compact_lzero_now" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_compact_lzero_now&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_file_checksums" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_file_checksums&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_max_file_opening_threads" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_max_file_opening_threads&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_partial_index_ignore_killed" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_partial_index_ignore_killed&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Changes the default values for the following variables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_compaction_sequential_deletes" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_compaction_sequential_deletes&lt;/code&gt;&lt;/a&gt; from 0 to 14999&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_compaction_sequential_deletes_count_sd" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_compaction_sequential_deletes_count_sd&lt;/code&gt;&lt;/a&gt; from &lt;code&gt;OFF&lt;/code&gt; to &lt;code&gt;ON&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_compaction_sequential_deletes_window" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_compaction_sequential_deletes_window&lt;/code&gt;&lt;/a&gt; from 0 to 15000&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_force_flush_memtable_now" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_force_flush_memtable_now&lt;/code&gt;&lt;/a&gt; from &lt;code&gt;ON&lt;/code&gt; to &lt;code&gt;OFF&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-server/innovation-release/myrocks-server-variables.html#rocksdb_large_prefix" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;rocksdb_large_prefix&lt;/code&gt;&lt;/a&gt; from &lt;code&gt;OFF&lt;/code&gt; to &lt;code&gt;ON&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MySQL 8.3.0-1 (PS-based variant)&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mysql-83"&gt;Percona Server for MySQL 8.3&lt;/h2&gt;
&lt;p&gt;On April 16, 2024, we released &lt;a href="https://docs.percona.com/percona-server/innovation-release/release-notes/8.3.0-1.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL 8.3&lt;/a&gt;. It includes all the features and bug fixes available in the MySQL 8.3 Community Edition in addition to enterprise-grade features developed by Percona. This release merges the MySQL 8.3 code base. Within this merge, Percona updates the Binary Log UDFs to make them compatible with new tagged GTIDs (Global Transaction Identifiers).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software/percona-server-for-mysql" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MySQL 8.3&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-708"&gt;Percona Distribution for MongoDB 7.0.8&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/7.0/release-notes-v7.0.8.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 7.0.8&lt;/a&gt; was released on April 24, 2024. It is a freely available MongoDB database alternative that gives you a single solution that combines enterprise components from the open source community, designed and tested to work together. Bug fixes and improvements provided by MongoDB are included in Percona Distribution for MongoDB. Note: a number of issues with sharded multi-document transactions in sharded clusters of 2 or more shards have been identified that result in returning incorrect results and missing reads and writes. The issues occur when the transactions’ metadata is being concurrently modified by using the following operations: &lt;code&gt;moveChunk&lt;/code&gt;, &lt;code&gt;moveRange&lt;/code&gt;, &lt;code&gt;movePrimary&lt;/code&gt;, &lt;code&gt;renameCollection&lt;/code&gt;, &lt;code&gt;drop&lt;/code&gt;, and &lt;code&gt;reshardCollection&lt;/code&gt;. Please check the release notes for further information.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 7.0.8&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-708-5"&gt;Percona Server for MongoDB 7.0.8-5&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/7.0/release_notes/7.0.8-5.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 7.0.8-5&lt;/a&gt; was released on April 24, 2024. It is an enhanced, source-available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition 7.0.8. A number of issues with sharded multi-document transactions in sharded clusters of 2 or more shards have been identified that result in returning incorrect results and missing reads and writes. The issues occur when the transactions’ metadata is being concurrently modified by using the following operations: &lt;code&gt;moveChunk&lt;/code&gt;, &lt;code&gt;moveRange&lt;/code&gt;, &lt;code&gt;movePrimary&lt;/code&gt;, &lt;code&gt;renameCollection&lt;/code&gt;, &lt;code&gt;drop&lt;/code&gt;, and &lt;code&gt;reshardCollection&lt;/code&gt;. Please check the release notes for further information.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 7.0.8-5&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Percona</category>
      <category>Opensource</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/blog/2024/04/Roundup-April-30_hu_b1efd38c8e5e04b9.jpg"/>
      <media:content url="https://percona.community/blog/2024/04/Roundup-April-30_hu_d80971074c5cdf0c.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Bug Report: April 2024</title>
      <link>https://percona.community/blog/2024/04/29/percona-bug-report-april-2024/</link>
      <guid>https://percona.community/blog/2024/04/29/percona-bug-report-april-2024/</guid>
      <pubDate>Mon, 29 Apr 2024 00:00:00 UTC</pubDate>
      <description>At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.</description>
      <content:encoded>&lt;p&gt;At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.&lt;/p&gt;
&lt;p&gt;We constantly update our &lt;a href="https://jira.percona.com/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; and monitor &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other boards&lt;/a&gt; to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This post is a central place to get information on the most noteworthy open and recently resolved bugs.&lt;/p&gt;
&lt;p&gt;In this edition of our bug report, we have the following list of bugs:&lt;/p&gt;
&lt;h2 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9092" target="_blank" rel="noopener noreferrer"&gt;PS-9092&lt;/a&gt;: A query over an InnoDB table that uses a backward scan over the index occasionally might return incorrect/incomplete results when changes to the table (for example, DELETEs in another or even the same connection followed by asynchronous purge) cause concurrent B-tree page merges.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 5.7.44, 8.0.35, 8.0.36&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=114248" target="_blank" rel="noopener noreferrer"&gt;114248&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Use descending indexes for the primary key. E.g.:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE bugTest.testTable (key int unsigned, version bigint unsigned, rowmarker char(3) not null default 'aaa', value MEDIUMBLOB, PRIMARY KEY (key DESC, version DESC)) Engine=InnoDB;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9107" target="_blank" rel="noopener noreferrer"&gt;PS-9107&lt;/a&gt;: A delete/insert into a secondary index is change-buffered, which causes an insert into the ‘ibuf’ tree. Because there is a limit on the maximum size of the ibuf, every ibuf insert triggers a compaction of the ibuf tree (ibuf_contract). As part of ibuf_contract, InnoDB randomly opens an ibuf page and applies its entries to the actual secondary index pages. After applying these entries, the ibuf tree goes on merging pages (optimistic vs. pessimistic B-tree operations). To perform a pessimistic delete on the ibuf tree, InnoDB saves the cursor position, commits the mini-transaction (mtr), and restores it. The restore searches for and re-positions on the ibuf entry that was being processed, triggering the assertion failure [MY-013183] [InnoDB] Assertion failure: ibuf0ibuf.cc:3833:ib::fatal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.34-26, 8.0.35-27, 8.0.36-28&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; PS 8.0.37-29 [Yet to Release]&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=114135" target="_blank" rel="noopener noreferrer"&gt;114135&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Disable &lt;a href="https://dev.mysql.com/doc/refman/8.3/en/innodb-parameters.html#sysvar_innodb_change_buffering" target="_blank" rel="noopener noreferrer"&gt;innodb_change_buffering&lt;/a&gt;&lt;/p&gt;
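&lt;p&gt;A sketch of the workaround; the variable is dynamic, so no restart is needed (persist it in your my.cnf as well if it should survive restarts):&lt;/p&gt;

```sql
-- Disable change buffering at runtime
SET GLOBAL innodb_change_buffering = 'none';

-- Confirm the setting
SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';
```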
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9115" target="_blank" rel="noopener noreferrer"&gt;PS-9115&lt;/a&gt;: MySQL crashes when obtaining a native index from &lt;code&gt;get_mutex_cond&lt;/code&gt; in group replication. Before the crash, the following warnings/errors are generated:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Warning] [MY-011630] [Repl] Plugin group_replication reported: 'Due to a plugin error, some transactions were unable to be certified and will now rollback.'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[ERROR] [MY-011631] [Repl] Plugin group_replication reported: 'Error when trying to unblock non certified or consistent transactions. Check for consistency errors when restarting the service'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is in progress and expected to be included in an upcoming release of Percona Servers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; We cannot guarantee that the bug will be avoided entirely, but stopping group replication during off-hours, when the workload recedes, should prevent the situation.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9121" target="_blank" rel="noopener noreferrer"&gt;PS-9121&lt;/a&gt;: InnoDB updates the primary index but not the spatial index, which eventually corrupts the spatial index. The MySQL server crashes with “[ERROR] [MY-013183] [InnoDB] Assertion failure: row0ins.cc:268:!cursor-&gt;index-&gt;is_committed().” The update query changes the data from point(0.0000000000000099,0) to point(0.00000000000001, 0). When InnoDB updates a record that is part of a spatial index, it updates the clustered index but fails to update the spatial index. The issue can be reproduced with the following SQL statements.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE a
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id INT PRIMARY KEY,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; a GEOMETRY NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SPATIAL KEY (a)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; )
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ENGINE=InnoDB;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO a VALUES (1,POINT(0.0000000000000099, 0));
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE a SET a = Point(0.00000000000001, 0);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DELETE FROM a WHERE id = 1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO a VALUES (1,POINT(0.0000000000000099, 0));&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.36-28, 8.X [Innovative Release]&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is in progress and expected to be included in an upcoming release of Percona Servers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Upstream Bug:&lt;/strong&gt; &lt;a href="https://bugs.mysql.com/bug.php?id=114252" target="_blank" rel="noopener noreferrer"&gt;114252&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Please consider resetting the shape to a different value from the new values.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9109" target="_blank" rel="noopener noreferrer"&gt;PS-9109&lt;/a&gt;: Percona Server’s slow query log sampling, controlled via the &lt;a href="https://docs.percona.com/percona-server/8.0/slow-extended.html?h=log_slow_rate_type#log_slow_rate_type" target="_blank" rel="noopener noreferrer"&gt;log_slow_rate_type&lt;/a&gt; and &lt;a href="https://docs.percona.com/percona-server/8.0/slow-extended.html?h=log_slow_rate_type#log_slow_rate_limit" target="_blank" rel="noopener noreferrer"&gt;log_slow_rate_limit&lt;/a&gt; variables, does not work correctly. Due to this issue, the slow query log records every query regardless of these variables’ values.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona Servers.&lt;/p&gt;
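&lt;p&gt;For reference, a sketch of how this sampling is normally configured in Percona Server; the values shown are illustrative:&lt;/p&gt;

```sql
-- Sample roughly 1 in 100 queries into the slow query log
SET GLOBAL log_slow_rate_type = 'query';
SET GLOBAL log_slow_rate_limit = 100;
```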
&lt;h2 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4380" target="_blank" rel="noopener noreferrer"&gt;PXC-4380&lt;/a&gt;: In a large cluster, for example, 15 nodes, in case one node has network issues and disconnects/re-connects/disconnects, the cluster might be sent to a non-primary state if the evs.install_timeout is reached.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 5.7.42-31-65, 8.0.33-25, 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; In such a large cluster, reaching consensus between nodes takes more time, so evs.install_timeout has to be adjusted. The cluster can also be configured to evict unresponsive nodes by setting evs.auto_evict=1. After investigation, we found that no fix is required, as everything works as expected/designed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Increasing evs.install_timeout might fix the issue. The maximum value is 15s, which can still be reached depending on the cluster size and how severe the flapping is.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The default value for evs.install_timeout is evs.inactive_timeout/2.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The minimum value is evs.join_retrans_period
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The maximum time is evs.inactive_timeout + 1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So, if we keep the defaults:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;evs.join_retrans_period = 1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;evs.inactive_timeout = 15s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;evs.install_timeout (default) = 7.5s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;evs.install_timeout(min) = 1s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;evs.install_timeout(max) = 16s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Please note that 15s is not a hard limit; it is determined by evs.inactive_timeout.&lt;/p&gt;
&lt;p&gt;We have set wsrep_provider_options="evs.install_timeout=PT15S" for all nodes in the test environment and are now unable to reproduce the issue. So we were on a timeout boundary, and the configuration parameters above exist to fine-tune such environments.&lt;/p&gt;
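&lt;p&gt;As an illustration (this snippet is ours, not from the original report), the same override can be set in my.cnf on each node; the values are examples and should be tuned against your own evs.inactive_timeout:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;wsrep_provider_options="evs.install_timeout=PT15S;evs.auto_evict=1"&lt;/code&gt;&lt;/p&gt;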
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4367" target="_blank" rel="noopener noreferrer"&gt;PXC-4367&lt;/a&gt;: Innodb semaphore wait timeout failure seen after upgrade from 8.0.34 to 8.0.35. This issue is a possible side effect of this &lt;a href="https://github.com/percona/percona-xtradb-cluster/pull/1854" target="_blank" rel="noopener noreferrer"&gt;patch&lt;/a&gt;, Where PXC node acts as the async replica to some master and in parallel, the row is updated on PXC node and via replication, the PXC node hangs. To avoid the issue, upgrading to PXC 8.0.36 is recommended.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 8.0.36-28&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4363" target="_blank" rel="noopener noreferrer"&gt;PXC-4363&lt;/a&gt;: Concurrent CREATE and DROP USER queries on different nodes lead to permanent lock.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 16 | system user | db1 | Query | 411 | Waiting for table metadata lock | drop user IF EXISTS `msandbox_rw11`@`localhost` | 411380 | 0 | &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;These queries cannot be cancelled or killed, and the nodes refuse to restart gracefully; the shutdown hangs forever. Only forcibly killing the service restores the cluster.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2024-01-18T17:49:24.470428Z 0 [Note] [MY-000000] [Galera] Closing slave action queue.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona XtraDB Cluster.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4399" target="_blank" rel="noopener noreferrer"&gt;PXC-4399&lt;/a&gt;: FLUSH TABLES during writes to the table with unique keys stall the cluster node; due to the stall, it’s not possible to abort/kill any of the above connections. The node sends a permanent flow control pause. The only way to get out of this stall is to kill the node.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.33-25, 8.0.35-27&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona XtraDB Cluster.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4418" target="_blank" rel="noopener noreferrer"&gt;PXC-4418&lt;/a&gt;: In MySQL, the optimizer creates a temp table definition with indexes, where 2 of them have the same name (&lt;auto_key2&gt;, &lt;auto_key1&gt;, &lt;auto_key2&gt;). When the query is executed, a temp table is created, then MySql tries to access &lt;auto_key2&gt;. In InnoDB, we search for index by name (dict_table_get_index_on_name()), which returns the wrong &lt;auto_key2&gt;. Then row_sel_convert_mysql_key_to_innobase() crashes as structures are not aligned. Please note this bug affects only very complicated queries with many JOINs and subqueries. To repeat this bug, the internal temporary table needs to be created at least for two parts of the query.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.32-24, 8.0.34-26&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 8.0.36-28&lt;/p&gt;
&lt;h2 id="percona-toolkit"&gt;Percona Toolkit&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2190" target="_blank" rel="noopener noreferrer"&gt;PT-2190&lt;/a&gt;:The &lt;a href="https://docs.percona.com/percona-toolkit/pt-show-grants.html" target="_blank" rel="noopener noreferrer"&gt;pt-show-grants&lt;/a&gt; use SHOW CREATE USER command to obtain grants from the MySQL server. By default, this query returns values as they are stored in the mysql.user table. When using caching_sha256_password, such hash of the password could contain a special character. Therefore, it would not be possible to use output printed by pt-show-grants to re-create users in the database. Since version 3.6.0, pt-show-grants checks if it runs against MySQL version 8.0.17 or higher and sets session option &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_print_identified_with_as_hex" target="_blank" rel="noopener noreferrer"&gt;print_identified_as_hex&lt;/a&gt; to true before running SHOW CREATE USER command. This allows to print commands that could be used to re-create users.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 3.6.0 [It is expected to be released soon]&lt;/p&gt;
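&lt;p&gt;As a rough sketch of what the PT-2190 fix does (the user name below is hypothetical), the same behavior can be reproduced manually on MySQL 8.0.17 or higher:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;SET SESSION print_identified_with_as_hex = ON;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;SHOW CREATE USER 'app_user'@'%';&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;With the option enabled, the password hash is printed as a 0x hexadecimal literal, so the generated statement can be replayed to re-create the user.&lt;/p&gt;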
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2215" target="_blank" rel="noopener noreferrer"&gt;PT-2215&lt;/a&gt;: pt-table-sync does not recognize the privileges in roles for MariaDB&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.2&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona Toolkit.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2316" target="_blank" rel="noopener noreferrer"&gt;PT-2316&lt;/a&gt;: pt-config-diff with –pid option is broken with “Can’t locate object method “make_PID_file” via package “Daemon” at /usr/bin/pt-config-diff line 5522” on Ubuntu 20.04&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona Toolkit.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2314" target="_blank" rel="noopener noreferrer"&gt;PT-2314&lt;/a&gt;: pt-online-schema-change fails due to duplicate constraint names when it attempts to make a table copy for alteration.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona Toolkit.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; Do not use duplicate constraint names.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2322" target="_blank" rel="noopener noreferrer"&gt;PT-2322&lt;/a&gt;: pt-mysql-summary does not detect jemalloc when installed as systemd.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 3.5.6, 3.5.7&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of Percona Toolkit.&lt;/p&gt;
&lt;h2 id="pmm-percona-monitoring-and-management"&gt;PMM [Percona Monitoring and Management]&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-11583" target="_blank" rel="noopener noreferrer"&gt;PMM-11583&lt;/a&gt;: On MySQL 8.0, &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_redo_log_capacity" target="_blank" rel="noopener noreferrer"&gt;innodb_redo_log_capacity&lt;/a&gt; supersedes &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group" target="_blank" rel="noopener noreferrer"&gt;innodb_log_files_in_group&lt;/a&gt; and &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size" target="_blank" rel="noopener noreferrer"&gt;innodb_log_file_size&lt;/a&gt;, which eventually breaks InnoDB Logging graphs. Due to this issue, the user can not determine whether the combined InnoDB redo log file size has to be increased or not.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.41.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.41.3 [It is expected to be released soon]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-13017" target="_blank" rel="noopener noreferrer"&gt;PMM-13017&lt;/a&gt;: For certain db.collection.find(query, projection, options) queries, the Explain tab for QAN returns an error message saying, “error decoding key command: invalid JSON input; expected value for 64-bit integer.” Please note this issue specifically affects MongoDB monitoring.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.35.0, 2.37.1, 2.41.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in an upcoming release of PMM.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12522" target="_blank" rel="noopener noreferrer"&gt;PMM-12522&lt;/a&gt;: When adding data relatively large chunks of data to MongoDB sharded cluster, pmm-agent log starts flooded with “level=error msg="cannot create metric for changelog… &amp; level=error msg="Failed to get database names:…” which eventually shows MongoS as disconnected in PMM UI (PMM-Inventory/Services).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.39.0, 2.40.0, 2.41.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.41.3 [It is expected to be released soon]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12880" target="_blank" rel="noopener noreferrer"&gt;PMM-12880&lt;/a&gt;: pmm-admin &lt;a href="https://docs.percona.com/percona-monitoring-and-management/details/commands/pmm-admin.html#mongodb" target="_blank" rel="noopener noreferrer"&gt;–tls-skip-verify&lt;/a&gt; does not work when &lt;a href="https://dev.mysql.com/doc/mysql-secure-deployment-guide/5.7/en/secure-deployment-user-accounts.html" target="_blank" rel="noopener noreferrer"&gt;x509 authentication&lt;/a&gt; is used.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.41.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.41.3 [It is expected to be released soon]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12989" target="_blank" rel="noopener noreferrer"&gt;PMM-12989&lt;/a&gt;: PMM agent logs flooded with wrong log entries when monitoring auth-enabled arbiters. Please note this issue specifically affects MongoDB monitoring.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.41.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.41.3 [It is expected to be released soon]&lt;/p&gt;
&lt;h2 id="percona-xtrabackup"&gt;Percona XtraBackup&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3251" target="_blank" rel="noopener noreferrer"&gt;PXB-3251&lt;/a&gt;: When PXB fails to load the encryption key, the xtrabackup_logfile is still created in the target dir. This causes a second attempt at running PXB to fail with a new error. The &lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/xtrabackup-files.html?h=xtrabackup_logfile" target="_blank" rel="noopener noreferrer"&gt;xtrabackup_logfile&lt;/a&gt; file contains data needed to run the –prepare process. The bigger this file is, the longer the –prepare process will take to finish. So, PXB should not create any files on disk until the encryption key is loaded.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 8.0.35-30&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; The fix is expected to be included in a future release of PXB.&lt;/p&gt;
&lt;h2 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-496" target="_blank" rel="noopener noreferrer"&gt;K8SPG-496&lt;/a&gt;: When a PostgreSQL Database is set to a paused state via spec, the operator waits until all backups for the Database finish. After the backups finish, the PostgreSQL Database shall be paused, which is not happening.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.3.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.3.1&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-494" target="_blank" rel="noopener noreferrer"&gt;K8SPG-494&lt;/a&gt;: High vulnerabilities found for pgbackrest, Postgres &amp; pgbouncer package.&lt;/p&gt;
&lt;p&gt;For pgbackrest and PostgreSQL: &lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2023-38408" target="_blank" rel="noopener noreferrer"&gt;CVE-2023-38408&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For pgbouncer: &lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32067" target="_blank" rel="noopener noreferrer"&gt;CVE-2023-32067&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.3.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.3.1&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-521" target="_blank" rel="noopener noreferrer"&gt;K8SPG-521&lt;/a&gt;: The upgrade path described in the &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/update.html#update-database-and-operator-version-2x" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; leads to disabled built-in extensions(pg_stat_monitor, pg_audit).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.3.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.4.0 [It is expected to be released soon]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-522" target="_blank" rel="noopener noreferrer"&gt;K8SPG-522&lt;/a&gt;: Cluster is broken if PG_VERSION file is missing during the upgrade from 2.2.0 to 2.3.1.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.3.1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.4.0 [It is expected to be released soon]&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; In order to fix the issue, please do the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create PG_VERSION with contents 14 in the DB instance pod.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;echo 14 &gt; /pgdata/pg14/PG_VERSION&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Apply the CRD and RBAC manifests.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubectl apply --force-conflicts --server-side -f crd.yaml&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubectl -n operatornew apply --force-conflicts --server-side -f rbac.yaml&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the operator deployment.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubectl -n operatornew rollout restart deployment percona-postgresql-operator&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;At this moment, patronictl show-config starts listing the extensions.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl -n operatornew exec cluster1-instance1-gsvs-0 -it -- bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Defaulted container "database" out of: database, replication-cert-copy, postgres-startup (init), nss-wrapper-init (init)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bash-4.4$ patronictl show-config
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;loop_wait: 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;postgresql:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; parameters:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; archive_command: pgbackrest --stanza=db archive-push "%p"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; archive_mode: 'on'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; archive_timeout: 60s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; huge_pages: 'off'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; jit: 'off'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; password_encryption: scram-sha-256
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pg_stat_monitor.pgsm_query_max_len: '2048'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; restore_command: pgbackrest --stanza=db archive-get %f "%p"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shared_preload_libraries: pg_stat_monitor,pgaudit&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now the extensions appear in the \dx output without pod restarts, but note that all PostgreSQL servers will be restarted.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-547" target="_blank" rel="noopener noreferrer"&gt;K8SPG-547&lt;/a&gt;: The pgbackrest container can’t use pgbackrest 2.50. This is because pgbackrest 2.50 requires libssh2.so.1, which requires epel. Without that fix, microdnf installs pgbackrest 2.48, which creates inconsistency with the Postgresql container.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reported Affected Version/s:&lt;/strong&gt; 2.2.0&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fixed Version:&lt;/strong&gt; 2.4.0 [It is expected to be released soon]&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workaround/Fix:&lt;/strong&gt; This &lt;a href="https://github.com/percona/percona-docker/pull/960" target="_blank" rel="noopener noreferrer"&gt;patch&lt;/a&gt; can be used until it is fixed.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://jira.percona.com" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>Percona</category>
      <category>Opensource</category>
      <category>PMM</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2024/04/BugReportApril2024_hu_de8550711f3fe3be.jpg"/>
      <media:content url="https://percona.community/blog/2024/04/BugReportApril2024_hu_4e33d575799f088.jpg" medium="image"/>
    </item>
    <item>
      <title>Deploying Percona Everest on GCP with Kubectl for Windows 11 Users</title>
      <link>https://percona.community/blog/2024/04/19/deploying-percona-everest-on-gcp-with-kubectl-for-windows-11-users/</link>
      <guid>https://percona.community/blog/2024/04/19/deploying-percona-everest-on-gcp-with-kubectl-for-windows-11-users/</guid>
      <pubDate>Fri, 19 Apr 2024 00:00:00 UTC</pubDate>
      <description>Welcome to this blog post! Today, our primary goal is to guide you through deploying Percona Everest on GCP using Kubectl, specifically for users on Windows 11. It’s been some time since I last used Windows, so this will be an excellent opportunity to do it from scratch.</description>
      <content:encoded>&lt;p&gt;Welcome to this blog post! Today, our primary goal is to guide you through deploying Percona Everest on GCP using Kubectl, specifically for users on Windows 11. It’s been some time since I last used Windows, so this will be an excellent opportunity to do it from scratch.&lt;/p&gt;
&lt;p&gt;Let me tell you a little bit about &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;. You may have already heard of it recently. It is a new open source tool launched by Percona and is already well-received by Kubernetes database users.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Percona Everest&lt;/strong&gt; is an open source cloud-native database platform that helps developers deploy code faster, scale deployments rapidly, and reduce database administration overhead while regaining control over their data, database configuration, and DBaaS costs. It is designed for those who want to break free from vendor lock-in, ensure optimal database performance, enable cost-effective and right-sized database deployments, and reduce database administration overhead.&lt;/p&gt;
&lt;p&gt;If you use Windows and want to try the deployment and use of Percona Everest, you are in the right place.&lt;/p&gt;
&lt;p&gt;This image shows what Percona Everest does and what we want to achieve:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/04/percona-everest.png" alt="Percona Everest" /&gt;&lt;/figure&gt;
Let’s get started!&lt;/p&gt;
&lt;h2 id="install-wsl"&gt;Install WSL&lt;/h2&gt;
&lt;p&gt;We will use Kubectl to run commands on our Kubernetes clusters. There are many ways to use Kubectl on Windows.&lt;/p&gt;
&lt;p&gt;I will use WSL (Windows Subsystem for Linux) to run a Linux environment directly on Windows. This is beneficial because kubectl and other Kubernetes tools often have better support on Linux.&lt;/p&gt;
&lt;p&gt;In Windows 11, open PowerShell as Administrator and run:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wsl --install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This command installs WSL with the default options, including the Ubuntu distribution and WSL 2 as the default version.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/04/pe-installing-wsl.jpeg" alt="WSL Installing" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Then restart your computer and open the newly installed Linux distribution from the Start menu.
Complete the initial setup by creating a user account and password. Then update and upgrade your Linux distribution:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt update &lt;span class="o"&gt;&amp;&amp;&lt;/span&gt; sudo apt upgrade&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Voilà! We have Ubuntu running on Windows!
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/04/pe-installed-wsl_hu_4b740a4f62a42b8f.jpeg 480w, https://percona.community/blog/2024/04/pe-installed-wsl_hu_2664582b4817e57a.jpeg 768w, https://percona.community/blog/2024/04/pe-installed-wsl_hu_e14abe8fbc89622.jpeg 1400w"
src="https://percona.community/blog/2024/04/pe-installed-wsl.jpeg" alt="WSL Installed" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Installing WSL allows your Windows machine to run kubectl and other Linux-only applications smoothly. This setup is beneficial for developers and system administrators who work with both Windows and Linux systems.&lt;/p&gt;
&lt;h2 id="install-kubectl"&gt;Install Kubectl&lt;/h2&gt;
&lt;p&gt;In our Ubuntu terminal on Windows, we will follow the official documentation to install it using the native package management system.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Download the latest release with the command:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -LO &lt;span class="s2"&gt;"https://dl.k8s.io/release/&lt;/span&gt;&lt;span class="k"&gt;$(&lt;/span&gt;curl -L -s https://dl.k8s.io/release/stable.txt&lt;span class="k"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/bin/linux/amd64/kubectl"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Install kubectl&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo install -o root -g root -m &lt;span class="m"&gt;0755&lt;/span&gt; kubectl /usr/local/bin/kubectl
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Test to ensure the version you installed is up-to-date:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl version --client&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="creating-a-kubernetes-cluster-in-google-cloud"&gt;Creating a Kubernetes Cluster in Google Cloud&lt;/h2&gt;
&lt;p&gt;To create a Kubernetes Cluster with GKE, you need to have access to Google Cloud. Ensure it functions correctly and that you can access your Google project and create Kubernetes clusters. Also, ensure you have the gke-gcloud-auth-plugin installed. You can check if this is installed by running the “gcloud components list” command. If it is not installed, follow the &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl" target="_blank" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
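&lt;p&gt;As a quick sketch (assuming a gcloud installation that manages its own components), you can verify and, if needed, install the plugin like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# Check whether the plugin is already installed
gcloud components list | grep gke-gcloud-auth-plugin

# Install it if it is missing
gcloud components install gke-gcloud-auth-plugin&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note that if you installed the Google Cloud CLI through a system package manager such as apt, component management is disabled and the plugin is shipped as a separate package instead; in that case, follow the official documentation linked above.&lt;/p&gt;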
&lt;p&gt;I have already set it up. Now, I will proceed to create my Kubernetes cluster.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gcloud container clusters create percona-everest --zone europe-west2-c --machine-type n1-standard-4 --num-nodes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="install-percona-everest"&gt;Install Percona Everest&lt;/h2&gt;
&lt;p&gt;A prerequisite for installing Percona Everest is having a Kubernetes cluster. I have one that I created with GKE. To verify the Kubernetes cluster, run the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-percona-everest-default-pool-1f7a9664-b3hd Ready &amp;lt;none&amp;gt; 1h11m v1.27.8-gke.1067004
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-percona-everest-default-pool-1f7a9664-b5c3 Ready &amp;lt;none&amp;gt; 1h11m v1.27.8-gke.1067004
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-percona-everest-default-pool-1f7a9664-nck4 Ready &amp;lt;none&amp;gt; 1h11m v1.27.8-gke.1067004&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Before running the commands in the Installation section, note that Everest looks for the kubeconfig file at the ~/.kube/config path.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/.kube/config&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now it is time to install Percona Everest. To install it, run the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -sfL &lt;span class="s2"&gt;"https://raw.githubusercontent.com/percona/everest/v0.9.1/install.sh"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After installing it, you will see output similar to the screenshot below. In your browser, you can directly open 127.0.0.1:8080. Voilà! We now have Percona Everest up and running!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/04/pe-login_hu_29047cff5f60bbd2.jpeg 480w, https://percona.community/blog/2024/04/pe-login_hu_ebb958b9073ed806.jpeg 768w, https://percona.community/blog/2024/04/pe-login_hu_b324660a7bbdbefd.jpeg 1400w"
src="https://percona.community/blog/2024/04/pe-login.jpeg" alt="Percona Everest Login" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As the output indicates, the Percona Everest app will be available at http://127.0.0.1:8080. We use the authorization token to access the Everest UI and API.&lt;/p&gt;
&lt;p&gt;We don’t have a database, so let’s create a new one!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/04/pe-first_hu_63c5453d2c5a899d.jpeg 480w, https://percona.community/blog/2024/04/pe-first_hu_e09ef42791edea67.jpeg 768w, https://percona.community/blog/2024/04/pe-first_hu_56c8ec7eeaeee088.jpeg 1400w"
src="https://percona.community/blog/2024/04/pe-first.jpeg" alt="Percona Everest Create Database" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is the amazing thing about Percona Everest… you can create MySQL, MongoDB, and PostgreSQL databases on Kubernetes! Woohoo!!!
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/04/pe-second_hu_51afce8ecdc942e2.jpeg 480w, https://percona.community/blog/2024/04/pe-second_hu_f85e54b021f5adae.jpeg 768w, https://percona.community/blog/2024/04/pe-second_hu_3b8030f8ddd35cbc.jpeg 1400w"
src="https://percona.community/blog/2024/04/pe-second.jpeg" alt="Percona Everest Databases" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;You can configure the resources for a new database, set up backups, monitoring, point-in-time recovery, and more:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/04/pe-third_hu_f7b06c2ed3483a23.jpeg 480w, https://percona.community/blog/2024/04/pe-third_hu_d7b45f4a6304fc4a.jpeg 768w, https://percona.community/blog/2024/04/pe-third_hu_9ba47a8cdf981488.jpeg 1400w"
src="https://percona.community/blog/2024/04/pe-third.jpeg" alt="Percona Everest Screen" /&gt;&lt;/figure&gt;
And this is how it looks: your database is in Kubernetes!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/04/pe-last_hu_e09d67f2bac6f061.jpeg 480w, https://percona.community/blog/2024/04/pe-last_hu_ef2e806133f82acb.jpeg 768w, https://percona.community/blog/2024/04/pe-last_hu_b6fb5bc3d2e6da45.jpeg 1400w"
src="https://percona.community/blog/2024/04/pe-last.jpeg" alt="Percona Everest Details" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Deploying Percona Everest on GCP using kubectl from a Windows 11 platform demonstrates the versatility and robust capabilities of managing databases on Kubernetes. The process should help you set up a powerful cloud-native database platform efficiently. We’ve walked through setting up your environment, installing necessary tools, creating a Kubernetes cluster, and finally deploying Percona Everest. Now, you can take full advantage of everything Percona Everest offers, from operational flexibility to cost efficiency.&lt;/p&gt;
&lt;p&gt;If Percona Everest seems cool, feel free to contribute—it’s open source! Find &lt;a href="https://github.com/percona/everest" target="_blank" rel="noopener noreferrer"&gt;Percona Everest on GitHub&lt;/a&gt;. If you encounter any issues during installation or have more questions, write to us in our &lt;a href="https://forums.percona.com/c/percona-everest/81" target="_blank" rel="noopener noreferrer"&gt;community forum&lt;/a&gt;. If you prefer learning visually through videos, we have a friendly &lt;a href="https://www.youtube.com/watch?v=vxhNon-el9Q&amp;list=PLWhC0zeznqkny4ehPTejdPwCnZ_RS3_Np" target="_blank" rel="noopener noreferrer"&gt;playlist of Percona Everest&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Percona Everest</category>
      <category>Windows</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2024/04/percona-everest_hu_491ab6499dd461e1.jpg"/>
      <media:content url="https://percona.community/blog/2024/04/percona-everest_hu_7a2db783af9d6008.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup April 17, 2024</title>
      <link>https://percona.community/blog/2024/04/17/release-roundup-april-17-2024/</link>
      <guid>https://percona.community/blog/2024/04/17/release-roundup-april-17-2024/</guid>
      <pubDate>Wed, 17 Apr 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates April 2 - April 17, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates April 2 - April 17, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes those releases and updates that have come out since April 1, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mysql-8036-pxc-based-variant"&gt;Percona Distribution for MySQL 8.0.36 (PXC-based variant)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mysql/8.0/release-notes-pxc-v8.0.36.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MySQL 8.0.36 (PXC-based variant)&lt;/a&gt; was released on April 3, 2024. It is the most stable, scalable, and secure open source MySQL distribution, with two download options: one based on Percona Server for MySQL and one based on Percona XtraDB Cluster. This release is focused on the Percona XtraDB Cluster-based deployment variation and is based on Percona XtraDB Cluster 8.0.36. Release highlights include improvements and bug fixes provided by Oracle for MySQL 8.0.36:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The hashing algorithm employed yielded poor performance when using a HASH field to check for uniqueness. (Bug #109548, Bug #34959356)&lt;/li&gt;
&lt;li&gt;All statement instrument elements that begin with &lt;code&gt;statement/sp/%&lt;/code&gt;, except &lt;code&gt;statement/sp/stmt&lt;/code&gt;, are disabled by default.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MySQL 8.0.36 (PXC-based variant)&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-xtradb-cluster-8036"&gt;Percona XtraDB Cluster 8.0.36&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-xtradb-cluster/8.0/release-notes/8.0.36-28.html" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster 8.0.36&lt;/a&gt; was released on April 3, 2024. It supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;Download Percona XtraDB Cluster 8.0.36&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-707"&gt;Percona Distribution for MongoDB 7.0.7&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/7.0/release-notes-v7.0.7.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 7.0.7&lt;/a&gt; was released on April 4, 2024. It is a freely available MongoDB database alternative, giving you a single solution that combines enterprise components from the open source community, designed and tested to work together.&lt;/p&gt;
&lt;p&gt;It includes the following components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Percona Server for MongoDB&lt;/em&gt; is a fully compatible source-available, drop-in replacement for MongoDB.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Percona Backup for MongoDB&lt;/em&gt; is a distributed, low-impact solution for achieving consistent backups of MongoDB sharded clusters and replica sets.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 7.0.7&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-5026"&gt;Percona Distribution for MongoDB 5.0.26&lt;/h2&gt;
&lt;p&gt;April 9, 2024, saw the release of &lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/5.0/release-notes-v5.0.26.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 5.0.26&lt;/a&gt;. This release is based on Percona Server for MongoDB 5.0.26-22 and Percona Backup for MongoDB 2.4.1.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 5.0.26&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-4429"&gt;Percona Distribution for MongoDB 4.4.29&lt;/h2&gt;
&lt;p&gt;On April 2, 2024, we released &lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/4.4/release-notes-v4.4.29.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 4.4.29.&lt;/a&gt; This is the last minor release in Percona Distribution for MongoDB 4.4.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 4.4.29&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-707-4"&gt;Percona Server for MongoDB 7.0.7-4&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/7.0/release_notes/7.0.7-4.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 7.0.7-4&lt;/a&gt; was released on April 4, 2024. It is an enhanced, source-available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition 7.0.5 and includes the improvements and bug fixes of MongoDB 7.0.6 Community Edition and MongoDB 7.0.7 Community Edition.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 7.0.7-4&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-5026-22"&gt;Percona Server for MongoDB 5.0.26-22&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/5.0/release_notes/5.0.26-22.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 5.0.26-22&lt;/a&gt; was released on April 9, 2024. It is an enhanced, source-available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 5.0.x Community Edition. Percona Server for MongoDB 5.0.26-22 includes both improvements and bug fixes of MongoDB 5.0.25 Community Edition and MongoDB 5.0.26 Community Edition.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 5.0.26-22&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-4429-28"&gt;Percona Server for MongoDB 4.4.29-28&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/4.4/release_notes/4.4.29-28.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 4.4.29-28&lt;/a&gt; was released on April 2, 2024. This is the last minor release in Percona Server for MongoDB 4.4.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 4.4.29-28&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Opensource</category>
      <category>XtraDB</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/blog/2024/04/Roundup-April-17_hu_4fafaeddff437b97.jpg"/>
      <media:content url="https://percona.community/blog/2024/04/Roundup-April-17_hu_b3bd42422b7c7b5e.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup April 2, 2024</title>
      <link>https://percona.community/blog/2024/04/02/release-roundup-april-2-2024/</link>
      <guid>https://percona.community/blog/2024/04/02/release-roundup-april-2-2024/</guid>
      <pubDate>Tue, 02 Apr 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates March 18 - April 2, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates March 18 - April 2, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes those releases and updates that have come out since March 18, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-monitoring-and-management-2412"&gt;Percona Monitoring and Management 2.41.2&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/release-notes/2.41.2.html" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management 2.41.2&lt;/a&gt; was released on March 22, 2024. It is an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB. Starting with PMM 2.41.2, we now offer pmm-client packages for the latest version of Debian. You can install these packages by following the instructions in our documentation. We have also added several experimental dashboards, which are subject to change and recommended for testing purposes only.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Download Percona Monitoring and Management 2.41.2&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-operator-for-mysql-based-on-percona-server-for-mysql-070"&gt;Percona Operator for MySQL based on Percona Server for MySQL 0.7.0&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mysql/ps/ReleaseNotes/Kubernetes-Operator-for-PS-RN0.7.0.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MySQL based on Percona Server for MySQL 0.7.0&lt;/a&gt; was released on March 25, 2024. Percona Operator for MySQL allows users to deploy MySQL clusters with both asynchronous and group replication topology. This release includes various stability improvements and bug fixes, getting the Operator closer to the General Availability stage. Version 0.7.0 of the Percona Operator for MySQL is still a tech preview release and it is not recommended for production environments.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software/percona-operator-for-mysql" target="_blank" rel="noopener noreferrer"&gt;Download Percona Operator for MySQL based on Percona Server for MySQL 0.7.0&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-xtrabackup-830-1"&gt;Percona XtraBackup 8.3.0-1&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-xtrabackup/innovation-release/release-notes/8.3.0-1.html" target="_blank" rel="noopener noreferrer"&gt;Percona XtraBackup 8.3.0-1&lt;/a&gt; was released on March 26, 2024. Percona XtraBackup 8.3.0-1 is based on MySQL 8.3 and fully supports the Percona Server for MySQL 8.3 Innovation series and the MySQL 8.3 Innovation series. This release allows taking backups of Percona Server 8.3.0-1 and MySQL 8.3. This Innovation release is only supported for a short time and is designed to be used in an environment with fast upgrade cycles. Developers and DBAs are exposed to the latest features and improvements. Patches and security fixes are included in the next Innovation release instead of a patch release or fix release within an Innovation release. To keep your environment current with the latest patches or security fixes, upgrade to the latest release.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software/percona-xtrabackup" target="_blank" rel="noopener noreferrer"&gt;Download Percona XtraBackup 8.3.0-1&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-6014"&gt;Percona Distribution for MongoDB 6.0.14&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/6.0/release-notes-v6.0.14.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 6.0.14&lt;/a&gt; was released on March 26, 2024. It is a freely available MongoDB database alternative, giving you a single solution that combines enterprise components from the open source community, designed and tested to work together. Release highlights include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fixed the issue with missing peer certificate validation if neither CAFile nor clusterCAFile is provided.&lt;/li&gt;
&lt;li&gt;Fixed the issue with multi-document transactions missing documents when the movePrimary operation runs concurrently by detecting placement conflicts in multi-document transactions.&lt;/li&gt;
&lt;li&gt;Allow a clustered index scan in a clustered collection if a notablescan option is enabled.&lt;/li&gt;
&lt;li&gt;Fixed tracking memory usage in SharedBufferFragment to prevent out of memory issues in the WiredTiger storage engine.&lt;/li&gt;
&lt;li&gt;Added an index on the process field for the &lt;code&gt;config.locks&lt;/code&gt; collection to ensure update operations on it are completed even in heavy loaded deployments.&lt;/li&gt;
&lt;li&gt;Fixed the incorrect hardware checksum calculation on zSeries for buffers on stack.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 6.0.14&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-6014-11"&gt;Percona Server for MongoDB 6.0.14-11&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/6.0/release_notes/6.0.14-11.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 6.0.14-11&lt;/a&gt; was released on March 26, 2024. It is an enhanced, source-available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition 6.0.14. It is based on MongoDB 6.0.14 Community Edition and includes improvements and bug fixes provided by MongoDB.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 6.0.14-11&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-backup-for-mongodb-241"&gt;Percona Backup for MongoDB 2.4.1&lt;/h2&gt;
&lt;p&gt;On March 25, 2024, &lt;a href="https://docs.percona.com/percona-backup-mongodb/release-notes/2.4.1.html" target="_blank" rel="noopener noreferrer"&gt;Percona Backup for MongoDB 2.4.1&lt;/a&gt; was released. It is a distributed, low-impact solution for consistent backups of MongoDB sharded clusters and replica sets. This is a tool for creating consistent backups across a MongoDB sharded cluster (or a non-sharded replica set), and for restoring those backups to a specific point in time.&lt;/p&gt;
&lt;p&gt;This release fixes the issue of failing incremental backups, which was caused by the backup metadata document reaching the maximum size limit of 16MB. The issue is fixed by introducing a new approach to handling the metadata document: it no longer contains the list of backup files; that list is now stored separately on the storage and is read by PBM on demand. The new metadata handling approach applies to physical, incremental, and snapshot-based backups.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-backup-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Backup for MongoDB 2.4.1&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Percona</category>
      <category>Opensource</category>
      <category>XtraBackup</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>PMM</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/blog/2024/04/Roundup-April-2_hu_3665416025618530.jpg"/>
      <media:content url="https://percona.community/blog/2024/04/Roundup-April-2_hu_684b0754b818da96.jpg" medium="image"/>
    </item>
    <item>
      <title>Creating a Standby Cluster With the Percona Operator for PostgreSQL</title>
      <link>https://percona.community/blog/2024/03/27/creating-a-standby-cluster-with-the-percona-operator-for-postgresql/</link>
      <guid>https://percona.community/blog/2024/03/27/creating-a-standby-cluster-with-the-percona-operator-for-postgresql/</guid>
      <pubDate>Wed, 27 Mar 2024 00:00:00 UTC</pubDate>
      <description>In this video, Nickolay Ihalainen, a Senior Scaling Specialist at Percona Global Services, explains how to set up replication with standby clusters for Kubernetes databases using Percona’s open-source tools, including the Percona Operator for PostgreSQL</description>
      <content:encoded>&lt;p&gt;In this video, &lt;a href="https://www.linkedin.com/in/nickolay-ihalainen-b8a35838/?originalSubdomain=ru" target="_blank" rel="noopener noreferrer"&gt;Nickolay Ihalainen&lt;/a&gt;, a Senior Scaling Specialist at Percona Global Services, explains how to set up replication with standby clusters for Kubernetes databases using Percona’s open-source tools, including the &lt;a href="https://www.percona.com/postgresql" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;Standby Cluster&lt;/strong&gt; is a backup version of your main database. It’s there to keep your data safe and make sure your database can keep running even if something goes wrong with the main one.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/03/standby.png" alt="Percona Demo for StandBy Cluster" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For this demo, we use &lt;strong&gt;Percona Operators for PostgreSQL&lt;/strong&gt; to create the clusters, which facilitates high availability setups and database management by automating deployment, scaling, and management tasks within Kubernetes environments.&lt;/p&gt;
&lt;p&gt;Nickolay created a primary node and configured replication to standby clusters for PostgreSQL, ensuring data redundancy and availability. This primary node replicates to two standby databases.&lt;/p&gt;
&lt;p&gt;Then, we have the &lt;strong&gt;object storage (S3)&lt;/strong&gt; for backups, highlighting the importance of having offsite backups in different geographical locations to safeguard against data loss. This is the storage the primary node writes to and the standby clusters replicate from.&lt;/p&gt;
&lt;p&gt;This demo also includes using &lt;strong&gt;Patroni&lt;/strong&gt; to manage this process, enabling replication and failover between primary and standby servers, and &lt;strong&gt;PgBouncer&lt;/strong&gt;, a lightweight connection pooler that manages how applications connect to a PostgreSQL database.&lt;/p&gt;
&lt;p&gt;Watch the complete hands-on demo in this YouTube video:&lt;/p&gt;
&lt;br /&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/nqeGvvZ5G5Y?si=n3ho7xHJiT6F8u9v" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;You can find more instructions on how to &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/standby.html" target="_blank" rel="noopener noreferrer"&gt;deploy a standby cluster for Disaster Recovery&lt;/a&gt;, and you can also learn how to &lt;a href="https://www.percona.com/blog/creating-a-standby-cluster-with-the-percona-distribution-for-postgresql-operator/" target="_blank" rel="noopener noreferrer"&gt;Create a Standby Cluster With the Percona Operator for PostgreSQL&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you have questions or feedback, write to us on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Zsolt Parragi</author>
      <author>Edith Puclla</author>
      <category>PostgreSQL</category>
      <category>Backups</category>
      <category>Percona</category>
      <category>pg_zsolt</category>
      <media:thumbnail url="https://percona.community/blog/2024/03/standby_hu_bb4bc2d9eb612df1.jpg"/>
      <media:content url="https://percona.community/blog/2024/03/standby_hu_2e050ddc08ff64b3.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup March 18, 2024</title>
      <link>https://percona.community/blog/2024/03/18/release-roundup-march-18-2024/</link>
      <guid>https://percona.community/blog/2024/03/18/release-roundup-march-18-2024/</guid>
      <pubDate>Mon, 18 Mar 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates March 5 - March 18, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates March 5 - March 18, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes those releases and updates that have come out since March 4, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-server-for-mysql-5744-49-post-eol-support-version"&gt;Percona Server for MySQL 5.7.44-49 (Post-EOL support version)&lt;/h2&gt;
&lt;p&gt;This release is &lt;a href="https://docs.percona.com/percona-server/5.7/release-notes/5.7.44-49.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL 5.7.44-49 (Post-EOL support version)&lt;/a&gt;, and the fixes are available to &lt;a href="https://www.percona.com/post-mysql-5-7-eol-support" target="_blank" rel="noopener noreferrer"&gt;MySQL 5.7 Post-EOL Support from Percona customers&lt;/a&gt;. Community members can &lt;a href="https://docs.percona.com/percona-server/5.7/installation/git-source-tree.html" target="_blank" rel="noopener noreferrer"&gt;build this release from the source&lt;/a&gt;. Percona Server for MySQL 5.7.44-49 contains the fix for &lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2024-20963" target="_blank" rel="noopener noreferrer"&gt;CVE-2024-20963&lt;/a&gt; and a portability fix.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/downloads#percona-server-mysql" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MySQL 5.7.44-49 (Post-EOL support version)&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mysql-8036-ps-based-variant"&gt;Percona Distribution for MySQL 8.0.36 (PS-based variant)&lt;/h2&gt;
&lt;p&gt;On March 4, 2024, &lt;a href="https://docs.percona.com/percona-distribution-for-mysql/8.0/release-notes-ps-v8.0.36.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MySQL 8.0.36 (PS-based variant)&lt;/a&gt; was released. It is the most stable, scalable, and secure open source MySQL distribution, with two download options: one based on Percona Server for MySQL and one based on Percona XtraDB Cluster. This release is focused on the Percona Server for MySQL-based deployment variation. This release fixes the Orchestrator issues.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MySQL 8.0.36 (PS-based variant)&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mysql-8036"&gt;Percona Server for MySQL 8.0.36&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server/8.0/release-notes/8.0.36-28.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL 8.0.36&lt;/a&gt; was released on March 4, 2024. It includes all the features and bug fixes available in the MySQL 8.0.36 Community Edition, and enterprise-grade features developed by Percona. Improvements and bug fixes provided by Oracle for MySQL 8.0.36 and included in Percona Server for MySQL are the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The hashing algorithm employed yielded poor performance when using a HASH field to check for uniqueness. (Bug #109548, Bug #34959356)&lt;/li&gt;
&lt;li&gt;All statement instrument elements that begin with &lt;code&gt;statement/sp/%&lt;/code&gt;, except &lt;code&gt;statement/sp/stmt&lt;/code&gt;, are disabled by default.&lt;/li&gt;
&lt;/ul&gt;
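If you rely on the instruments that are now disabled by default, they can be re-enabled at runtime through the standard Performance Schema tables (a generic MySQL mechanism, not something specific to this release):

```sql
-- Re-enable the stored-program statement instruments that MySQL 8.0.36
-- disables by default. The pattern also matches statement/sp/stmt,
-- which is already enabled, so setting it again is harmless.
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'statement/sp/%';
```

To make the change persistent across restarts, set `performance-schema-instrument='statement/sp/%=ON'` in the server configuration file instead.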
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software/percona-server-for-mysql" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MySQL 8.0.36&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-operator-for-mysql-based-on-percona-xtradb-cluster-1140"&gt;Percona Operator for MySQL based on Percona XtraDB Cluster 1.14.0&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mysql/pxc/ReleaseNotes/Kubernetes-Operator-for-PXC-RN1.14.0.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MySQL based on Percona XtraDB Cluster 1.14.0&lt;/a&gt; was released on March 4, 2024. It contains everything you need to quickly and consistently deploy and scale Percona XtraDB Cluster instances in a Kubernetes-based environment on-premises or in the cloud. Among other new features, a custom prefix for Percona Monitoring and Management (PMM) allows using one PMM Server to monitor multiple databases, even if they have identical cluster names.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software/percona-operator-for-mysql" target="_blank" rel="noopener noreferrer"&gt;Download Percona Operator for MySQL based on Percona XtraDB Cluster 1.14.0&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-postgresql-1314"&gt;Percona Distribution for PostgreSQL 13.14&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/postgresql/13/release-notes-v13.14.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL 13.14&lt;/a&gt; was released on March 6, 2024. It is a solution of a collection of tools from the PostgreSQL community that are tested to work together and assist you in deploying and managing PostgreSQL. A release highlight is that the Docker image for Percona Distribution for PostgreSQL is now available for ARM architectures. This improves the user experience with the Distribution for developers with ARM-based workstations.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/postgresql/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for PostgreSQL 13.14&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-postgresql-1218"&gt;Percona Distribution for PostgreSQL 12.18&lt;/h2&gt;
&lt;p&gt;On March 11, 2024, we released &lt;a href="https://docs.percona.com/postgresql/12/release-notes-v12.18.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL 12.18&lt;/a&gt;. This release of Percona Distribution for PostgreSQL is based on PostgreSQL 12.18.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/postgresql/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for PostgreSQL 12.18&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-backup-for-mongodb-240"&gt;Percona Backup for MongoDB 2.4.0&lt;/h2&gt;
&lt;p&gt;On March 5, 2024, &lt;a href="https://docs.percona.com/percona-backup-mongodb/release-notes/2.4.0.html" target="_blank" rel="noopener noreferrer"&gt;Percona Backup for MongoDB 2.4.0&lt;/a&gt; was released. It is a distributed, low-impact solution for creating consistent backups across a MongoDB sharded cluster (or a non-sharded replica set), and for restoring those backups to a specific point in time. A release highlight is that you can now &lt;a href="https://docs.percona.com/percona-backup-mongodb/usage/delete-backup.html#__tabbed_2_3" target="_blank" rel="noopener noreferrer"&gt;delete backup snapshots of a specific type&lt;/a&gt;. For example, delete only logical backups that you might have created and no longer need. You can also check exactly what will be deleted with the new &lt;code&gt;--dry-run&lt;/code&gt; flag. This improvement helps you better meet your organization’s backup policy and makes cleaning up outdated data easier.&lt;/p&gt;
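As a sketch of that cleanup workflow (flag spellings follow the linked delete-backup documentation; the cutoff date is just an example):

```bash
# Preview which logical backups taken before the cutoff date would be
# removed, without deleting anything yet.
pbm delete-backup --older-than 2024-02-01 --type logical --dry-run

# Re-run without --dry-run to actually delete them.
pbm delete-backup --older-than 2024-02-01 --type logical
```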
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-backup-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Backup for MongoDB 2.4.0&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Opensource</category>
      <category>PostgreSQL</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/blog/2024/03/Roundup-March-18_hu_95f64134d084da08.jpg"/>
      <media:content url="https://percona.community/blog/2024/03/Roundup-March-18_hu_470021b41db4fe23.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup March 4, 2024</title>
      <link>https://percona.community/blog/2024/03/04/release-roundup-march-4-2024/</link>
      <guid>https://percona.community/blog/2024/03/04/release-roundup-march-4-2024/</guid>
      <pubDate>Mon, 04 Mar 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates February 21 - March 4, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates February 21 - March 4, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes those releases and updates that have come out since February 20, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-distribution-for-postgresql-162"&gt;Percona Distribution for PostgreSQL 16.2&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/postgresql/16/release-notes-v16.2.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL 16.2&lt;/a&gt; was released on February 27, 2024. It provides the best and most critical enterprise components from the open source community in a single distribution, designed and tested to work together. This release is based on PostgreSQL 16.2. A release highlight is that a Docker image for Percona Distribution for PostgreSQL is now available for ARM architectures. This improves the user experience with the Distribution for developers with ARM-based workstations.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/postgresql/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for PostgreSQL 16.2&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-postgresql-156"&gt;Percona Distribution for PostgreSQL 15.6&lt;/h2&gt;
&lt;p&gt;On February 28, 2024, we released &lt;a href="https://docs.percona.com/postgresql/15/release-notes-v15.6.html#get-expert-help" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL 15.6&lt;/a&gt;, which is based on PostgreSQL 15.6. A Docker image for Percona Distribution for PostgreSQL is now available for ARM architectures. This improves the user experience with the Distribution for developers with ARM-based workstations.&lt;/p&gt;
&lt;p&gt;Percona Distribution for PostgreSQL also includes the following packages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;llvm&lt;/code&gt; 12.0.1 packages for Red Hat Enterprise Linux 8 and compatible derivatives. This fixes compatibility issues with LLVM from upstream.&lt;/li&gt;
&lt;li&gt;supplemental &lt;code&gt;etcd&lt;/code&gt; packages, which can be used for setting up Patroni clusters. These packages are available for select operating systems listed in the release notes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/postgresql/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for PostgreSQL 15.6&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-postgresql-1411"&gt;Percona Distribution for PostgreSQL 14.11&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/postgresql/14/release-notes-v14.11.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL 14.11&lt;/a&gt; was released on March 1, 2024, and is based on PostgreSQL 14.11. A release highlight is a Docker image for Percona Distribution for PostgreSQL is now available for ARM architectures. This improves the user experience with the Distribution for developers with ARM-based workstations.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/postgresql/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for PostgreSQL 14.11&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Percona</category>
      <category>opensource</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2024/03/Roundup-March-4_hu_8f318ec53f63be23.jpg"/>
      <media:content url="https://percona.community/blog/2024/03/Roundup-March-4_hu_17618ba17fe96993.jpg" medium="image"/>
    </item>
    <item>
      <title>Setting Up Your Environment for Kubernetes Operators Using Docker, kubectl, and k3d</title>
      <link>https://percona.community/blog/2024/03/04/setting-up-your-environment-for-kubernetes-operators-using-docker-kubectl-and-k3d/</link>
      <guid>https://percona.community/blog/2024/03/04/setting-up-your-environment-for-kubernetes-operators-using-docker-kubectl-and-k3d/</guid>
      <pubDate>Mon, 04 Mar 2024 00:00:00 UTC</pubDate>
      <description>If you are just starting out in the world of Kubernetes operators, like me, preparing the environment for their installation should not be difficult. This blog will quickly guide you through setting up a minimal environment.</description>
      <content:encoded>&lt;p&gt;If you are just starting out in the world of Kubernetes operators, like me, preparing the environment for their installation should not be difficult. This blog will quickly guide you through setting up a minimal environment.&lt;/p&gt;
&lt;p&gt;Kubernetes operators are invaluable for automating complex database operations, tasks that Kubernetes does not handle directly. Operators make it easy for us – they take care of essential tasks like &lt;strong&gt;backups&lt;/strong&gt; and &lt;strong&gt;restores&lt;/strong&gt;, which are crucial in database management.&lt;/p&gt;
&lt;p&gt;If you want an introduction to Kubernetes Operators, I cover it in this 5-minute blog post, &lt;a href="https://www.percona.com/blog/exploring-the-kubernetes-application-lifecycle-with-percona/" target="_blank" rel="noopener noreferrer"&gt;Exploring the Kubernetes Application Lifecycle With Percona&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now that we know why Kubernetes Operators are essential, let’s prepare our environment to install some of them. We are going to base this installation on Linux for now.&lt;/p&gt;
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;For this, we will need a basic understanding of Kubernetes concepts and some Linux command-line skills. We also need &lt;a href="https://docs.docker.com/engine/install/ubuntu/" target="_blank" rel="noopener noreferrer"&gt;Docker Engine&lt;/a&gt;, since k3d runs its clusters inside Docker containers. To test it, make sure this command runs without errors:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run hello-world&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="installing-kubectl"&gt;Installing kubectl&lt;/h2&gt;
&lt;p&gt;To manage and deploy applications on Kubernetes, we will need the &lt;strong&gt;kubectl&lt;/strong&gt; tool, which is included in most Kubernetes distributions. If it’s not installed, let’s install it by following the &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" target="_blank" rel="noopener noreferrer"&gt;official installation instructions&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;To install the &lt;strong&gt;kubectl&lt;/strong&gt; binary with curl on Linux, we need to download the latest release of kubectl using the command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -LO &lt;span class="s2"&gt;"https://dl.k8s.io/release/&lt;/span&gt;&lt;span class="k"&gt;$(&lt;/span&gt;curl -L -s https://dl.k8s.io/release/stable.txt&lt;span class="k"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/bin/linux/amd64/kubectl"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The previous command only downloads the kubectl binary into the current directory. Next, install it to &lt;code&gt;/usr/local/bin/kubectl&lt;/code&gt; with root ownership and 0755 permissions for secure execution:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo install -o root -g root -m &lt;span class="m"&gt;0755&lt;/span&gt; kubectl /usr/local/bin/kubectl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To test the installation, we use the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl version --client&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl version --client --output&lt;span class="o"&gt;=&lt;/span&gt;yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you receive a response like this, it indicates that you are ready to use &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Client Version: v1.29.2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="installing-k3d"&gt;Installing K3d&lt;/h2&gt;
&lt;p&gt;k3d is a lightweight tool that simplifies running k3s (Rancher Labs’ minimal Kubernetes distribution) in Docker, enabling easy creation of single- and multi-node k3s clusters for local development.&lt;/p&gt;
&lt;p&gt;Install the current latest release of k3d with curl:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh &lt;span class="p"&gt;|&lt;/span&gt; bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To test the installation, you can use the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;k3d --help&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you see a message similar to this, you are ready to create your k3d Kubernetes clusters.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;https://k3d.io/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;k3d is a wrapper CLI that helps you to easily create k3s clusters inside docker.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Nodes of a k3d cluster are docker containers running a k3s image.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;All Nodes of a k3d cluster are part of the same docker network.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Usage:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;k3d &lt;span class="o"&gt;[&lt;/span&gt;flags&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;k3d &lt;span class="o"&gt;[&lt;/span&gt;command&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Available Commands:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cluster Manage cluster&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;completion Generate completion scripts &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;bash, zsh, fish, powershell &lt;span class="p"&gt;|&lt;/span&gt; psh&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;config Work with config file&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="starting-the-kubernetes-cluster"&gt;Starting the Kubernetes cluster&lt;/h2&gt;
&lt;p&gt;Let’s use k3d to create a Kubernetes cluster with three worker nodes. Using the &lt;code&gt;-a&lt;/code&gt; flag, you can specify the number of agent (worker) nodes to add to the k3d cluster.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;k3d cluster create database-cluster -a &lt;span class="m"&gt;3&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now, list details for our k3d cluster.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;k3d cluster list database-cluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME SERVERS AGENTS LOADBALANCER
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;database-cluster 1/1 3/3 true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now, our environment is ready to begin installing our Percona Kubernetes Operators.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this tutorial, we chose k3d over Minikube due to its efficiency and speed in setting up Kubernetes clusters with multiple nodes, which are essential for effectively testing Kubernetes operators in a local environment. Although it’s possible to perform tests on a single node with both systems, k3d makes it easier to simulate a more realistic distributed environment, allowing us to utilize our resources more efficiently.&lt;/p&gt;
&lt;p&gt;Take a look at our GitHub repository for our Percona Kubernetes Operators:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mysql-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MySQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-postgresql-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;They are fully open source. And if you are looking for a version with a graphical interface, we have &lt;a href="https://docs.percona.com/everest/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;, our cloud-native database platform.&lt;/p&gt;
&lt;p&gt;What’s Next? Let’s install our Kubernetes Operators!&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>edith_puclla</category>
      <category>kubernetes</category>
      <category>operators</category>
      <category>k3d</category>
      <category>docker</category>
      <media:thumbnail url="https://percona.community/blog/2024/03/intro_hu_f38b7c56cf487f0.jpg"/>
      <media:content url="https://percona.community/blog/2024/03/intro_hu_6fa6a90910cc0565.jpg" medium="image"/>
    </item>
    <item>
      <title>Release Roundup February 21, 2024</title>
      <link>https://percona.community/blog/2024/02/21/release-roundup-february-21-2024/</link>
      <guid>https://percona.community/blog/2024/02/21/release-roundup-february-21-2024/</guid>
      <pubDate>Wed, 21 Feb 2024 00:00:00 UTC</pubDate>
      <description>Percona software releases and updates February 5 - February 21, 2024.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona software releases and updates February 5 - February 21, 2024.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Percona is a leading provider of unbiased, performance-first, open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive, free from vendor lock-in. Percona software is designed for peak performance, uncompromised security, limitless scalability, and disaster-proofed availability.&lt;/p&gt;
&lt;p&gt;Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights, critical information, links to the full release notes, and direct links to the software or service itself to download.&lt;/p&gt;
&lt;p&gt;Today’s post includes those releases and updates that have come out since February 5, 2024. Take a look.&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mysql-ps-based-variation-820"&gt;Percona Distribution for MySQL (PS-based variation) 8.2.0&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mysql/innovation-release/release-notes-ps-8.2.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MySQL (PS-based variation) 8.2.0&lt;/a&gt; was released on February 5, 2024. It is a bundling of open source MySQL software enhanced with carefully curated and designed enterprise-grade features. Percona Distribution for MySQL offers two download options; this one is based on Percona Server for MySQL.&lt;/p&gt;
&lt;p&gt;This release merges the MySQL 8.2 code base, introducing several significant changes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Removes remains of Percona-specific encryption features (support for custom Percona 5.7 encrypted binlog format).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Removes the deprecated &lt;code&gt;rocksdb_strict_collation_check&lt;/code&gt; and &lt;code&gt;rocksdb_strict_collation_exceptions&lt;/code&gt; RocksDB system variables.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mysql/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MySQL (PS-based variation) 8.2.0&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-6013"&gt;Percona Distribution for MongoDB 6.0.13&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/6.0/release-notes-v6.0.13.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 6.0.13&lt;/a&gt; was released on February 20, 2024. It includes the following components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Percona Server for MongoDB is a fully compatible source-available, drop-in replacement for MongoDB.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Percona Backup for MongoDB is a distributed, low-impact solution for achieving consistent backups of MongoDB sharded clusters and replica sets.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This release of Percona Distribution for MongoDB is based on the production release of &lt;a href="https://docs.percona.com/percona-server-for-mongodb/6.0/release_notes/6.0.13-10.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 6.0.13-10&lt;/a&gt; and &lt;a href="https://docs.percona.com/percona-backup-mongodb/release-notes/2.3.1.html" target="_blank" rel="noopener noreferrer"&gt;Percona Backup for MongoDB 2.3.1.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 6.0.13&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-4428"&gt;Percona Distribution for MongoDB 4.4.28&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/4.4/release-notes-v4.4.28.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 4.4.28&lt;/a&gt; was released on February 7, 2024. It’s a freely available MongoDB database alternative, giving you a single solution that combines enterprise components from the open source community, designed and tested to work together. In addition to bug fixes and improvements provided by MongoDB and included in Percona Server for MongoDB, Percona Backup for MongoDB 2.3.1 enhancements include the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Support for Percona Server for MongoDB 7.0.x&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The ability to define custom endpoints when using Microsoft Azure Blob Storage for backups&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improved PBM Docker image to allow making physical backups with the shared mongodb data volume&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Updated Golang libraries that include fixes for the security vulnerability CVE-2023-39325.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In addition, Percona Server for MongoDB 4.4.28-27 is no longer available on Ubuntu 18.04 (Bionic Beaver).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 4.4.28&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-distribution-for-mongodb-4225"&gt;Percona Distribution for MongoDB 4.2.25&lt;/h2&gt;
&lt;p&gt;On February 8, 2024, &lt;a href="https://docs.percona.com/percona-distribution-for-mongodb/4.2/release-notes-v4.2.25.html" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB 4.2.25&lt;/a&gt; was released. Release highlights include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Optimized the construction of the balancer’s collection distribution status histogram&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fixed the query planner logic to distinguish parameterized queries in the presence of a partial index that contains logical expressions ($and, $or).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improved performance of updating the routing table and prevented blocking of client requests during refresh for clusters with 1 million chunks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Avoided traversing routing table in balancer split chunk policy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fixed the issue that caused the modification of the original ChunkMap vector during the chunk migration and that could lead to data loss. The issue affects MongoDB versions 4.4.25, 5.0.21, 6.0.10 through 6.0.11 and 7.0.1 through 7.0.2. Requires stopping all chunk merge activities and restarting all the binaries in the cluster (both mongod and mongos).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software" target="_blank" rel="noopener noreferrer"&gt;Download Percona Distribution for MongoDB 4.2.25&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-6013-10"&gt;Percona Server for MongoDB 6.0.13-10&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/6.0/release_notes/6.0.13-10.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 6.0.13-10&lt;/a&gt; was released on February 20, 2024. It is based on MongoDB 6.0.13 Community Edition and supports the upstream protocols and drivers.&lt;/p&gt;
&lt;p&gt;Release highlights include:&lt;/p&gt;
&lt;p&gt;Percona Server for MongoDB packages are available for ARM64 architectures, enabling users to install it on-premises. The ARM64 packages are available for the following operating systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ubuntu 20.04 (Focal Fossa)&lt;/li&gt;
&lt;li&gt;Ubuntu 22.04 (Jammy Jellyfish)&lt;/li&gt;
&lt;li&gt;Red Hat Enterprise Linux 8 and compatible derivatives&lt;/li&gt;
&lt;li&gt;Red Hat Enterprise Linux 9 and compatible derivatives&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 6.0.13-10&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-4428-27"&gt;Percona Server for MongoDB 4.4.28-27&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/4.4/release_notes/4.4.28-27.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 4.4.28-27&lt;/a&gt; was released on February 7, 2024. It is a source available, highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 4.4.28 Community Edition enhanced with enterprise-grade features. Release highlights include these bug fixes, provided by MongoDB and included in Percona Server for MongoDB:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Fixed the issue with the data and the ShardVersion mismatch for sharded multi-document transactions by adding a check that no chunk has moved for the referenced collection since the transaction started&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improved cluster balancer performance by optimizing the construction of the balancer’s collection distribution status histogram&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fixed the issue with blocking acquiring read/write tickets by TransactionCoordinator by validating that it can be recovered on step-up and can commit the transaction when there are no storage tickets available&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Investigated a solution to prevent the Full-Time Diagnostic Data Capture (FTDC) mechanism from stalling during a checkpoint&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Percona Server for MongoDB 4.4.28-27 is no longer available on Ubuntu 18.04 (Bionic Beaver).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Download Percona Server for MongoDB 4.4.28-27&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="percona-server-for-mongodb-4225-25"&gt;Percona Server for MongoDB 4.2.25-25&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-server-for-mongodb/4.2/release_notes/4.2.25-25.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 4.2.25-25&lt;/a&gt; was released on February 7, 2024. A release highlight is that Percona Server for MongoDB includes telemetry that fills in the gaps in our understanding of how you use Percona Server for MongoDB to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Read more about Telemetry.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/mongodb/software/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB 4.2.25-25&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s it for this roundup, and be sure to &lt;a href="https://twitter.com/Percona" target="_blank" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt; to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments and is trusted by global brands to unify, monitor, manage, secure, and optimize their database environments.&lt;/p&gt;</content:encoded>
      <author>David Quilty</author>
      <category>Percona</category>
      <category>opensource</category>
      <category>MySQL</category>
      <category>MongoDB</category>
      <media:thumbnail url="https://percona.community/blog/2024/02/Roundup-Feb-24_hu_4d9a09f09cf91615.jpg"/>
      <media:content url="https://percona.community/blog/2024/02/Roundup-Feb-24_hu_8425b2ecff996dad.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Bug Report: January 2024</title>
      <link>https://percona.community/blog/2024/02/19/percona-bug-report-january-2024/</link>
      <guid>https://percona.community/blog/2024/02/19/percona-bug-report-january-2024/</guid>
      <pubDate>Mon, 19 Feb 2024 00:00:00 UTC</pubDate>
      <description>At Percona, we believe that transparency is key to improving our products. We are dedicated to creating top-of-the-line open-source database solutions and providing support for any issues that may arise. We encourage feedback and bug reports to help us continually improve.</description>
      <content:encoded>&lt;p&gt;At Percona, we believe that transparency is key to improving our products. We are dedicated to creating top-of-the-line open-source database solutions and providing support for any issues that may arise. We encourage feedback and bug reports to help us continually improve.&lt;/p&gt;
&lt;p&gt;We stay updated on &lt;a href="https://perconadev.atlassian.net/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; through our own platform as well as &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other sources&lt;/a&gt; to ensure we have the most up-to-date information. To make it easier for you, we have compiled a central list of the most critical bugs for your reference in this edition of our bug report.&lt;/p&gt;
&lt;p&gt;In this episode of our bug report, we provide the following list of bugs.&lt;/p&gt;
&lt;h2 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8983" target="_blank" rel="noopener noreferrer"&gt;PS-8983&lt;/a&gt;: System variable &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/group-replication-system-variables.html#sysvar_group_replication_view_change_uuid" target="_blank" rel="noopener noreferrer"&gt;group_replication_view_change_uuid&lt;/a&gt; introduced in MySQL 8.0.26 which corrected the issue &lt;a href="https://bugs.mysql.com/bug.php?id=103641" target="_blank" rel="noopener noreferrer"&gt;Bug#103641&lt;/a&gt; in where data is inconsistent between nodes after killing primary node in group replication, However there is still an issue where these events are also generated on the standby/secondary cluster in a ClusterSet thus creating errant transactions, and if binlogs containing these events are purged, then it will not be possible to perform a failover between clusters.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.26&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Fixed Version: 8.0.31-23&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9048" target="_blank" rel="noopener noreferrer"&gt;PS-9048&lt;/a&gt;: When innodb_optimize_fulltext_only is enabled and running &lt;code&gt;OPTIMIZE TABLE &lt;table_name&gt;&lt;/code&gt; which has fulltext index actually causing assertion in Percona server debug build, Please note issue is specifically happening when PARSER is &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/fulltext-search-ngram.html" target="_blank" rel="noopener noreferrer"&gt;ngram&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 5.7.42, 8.0.34&lt;/em&gt;
&lt;em&gt;Fixed Version: N/A [Fix in Progress]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9018" target="_blank" rel="noopener noreferrer"&gt;PS-9018&lt;/a&gt;: When replica has a non-replicated database then during intensive workload from the source where multi-threaded slave applier (MTS) is enabled and log_slave_updates=0 then DDL executed against this non-replicated database completely stalls the replica instance.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Upstream Bug: &lt;a href="https://bugs.mysql.com/bug.php?id=113727" target="_blank" rel="noopener noreferrer"&gt;113727&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.19+&lt;/em&gt;
&lt;em&gt;Fixed Version: N/A [Fix in Review]&lt;/em&gt;
&lt;em&gt;Workaround : Use log_slave_updates=1. Please note enabling this may produce a huge binlog volume on the replica, which may or may not be feasible with respect to storage.&lt;/em&gt;&lt;/p&gt;
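&lt;p&gt;As a sketch of the workaround above (assuming a stock my.cnf layout), the setting would look like this on the replica; note that log_slave_updates is not dynamic, so a server restart is required:&lt;/p&gt;

```ini
# Illustrative replica my.cnf fragment: log applied replication events to the
# replica's own binlog so the MTS/DDL stall described above is avoided.
[mysqld]
log_slave_updates = 1
```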
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9083" target="_blank" rel="noopener noreferrer"&gt;PS-9083&lt;/a&gt;: Percona server crashes when server is running with slow_query_log in conjunction with &lt;em&gt;long_query_time&lt;/em&gt;, &lt;em&gt;log_slow_verbosity = profiling&lt;/em&gt;,&lt;em&gt;query_info&lt;/em&gt; variables.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.35&lt;/em&gt;
&lt;em&gt;Fixed Version: 8.0.36 [Pending Release]&lt;/em&gt;
&lt;em&gt;Workaround : Remove “query_info” from log_slow_verbosity&lt;/em&gt;&lt;/p&gt;
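&lt;p&gt;A minimal sketch of that workaround, assuming the current value is “profiling,query_info”:&lt;/p&gt;

```sql
-- Illustrative: keep profiling but drop query_info from log_slow_verbosity
SET GLOBAL log_slow_verbosity = 'profiling';
```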
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9081" target="_blank" rel="noopener noreferrer"&gt;PS-9081&lt;/a&gt;: Materializing happens when a query is being executed against performance_schema.data_locks can lead to excessive memory usage and OOM.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.34+&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PS 8.0.37&lt;/em&gt;
&lt;em&gt;Workaround : Add a LIMIT clause to read queries.&lt;/em&gt;&lt;/p&gt;
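&lt;p&gt;For illustration, a bounded read of the lock table might look like this (column list taken from the standard performance_schema.data_locks definition):&lt;/p&gt;

```sql
-- Illustrative: cap how many rows the query touches to limit materialization
SELECT engine, object_schema, object_name, lock_type, lock_mode, lock_status
FROM performance_schema.data_locks
LIMIT 100;
```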
&lt;h2 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4341" target="_blank" rel="noopener noreferrer"&gt;PXC-4341&lt;/a&gt;: Execution of prepared statement after FLUSH TABLES makes the node abort from the cluster.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33+&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PXC 8.0.36&lt;/em&gt;
&lt;em&gt;Workaround : There is no straightforward workaround, but one can run the prepared statement and the FLUSH TABLES statement separately.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4316" target="_blank" rel="noopener noreferrer"&gt;PXC-4316&lt;/a&gt;: Network loss may lead to node’s logs flooded with “changed identity” events which eventually let primary node go non-primary, and reconnect another node. It will keep non primary nodes so we ended with all nodes as non primary.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33+&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PXC 8.0.36&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4348" target="_blank" rel="noopener noreferrer"&gt;PXC-4348&lt;/a&gt;: Cluster state interrupted with MDL BF-BF conflict when forcing deadlock. To hit the crash we are required to run queries on multiple sessions where one session should run “optimize table &lt;tbl_name&gt;;” multiple times so mysqlslap is the right candidate to repeat this behavior and other sessions will run delete/insert on the same table.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33+&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PXC 8.0.36&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-toolkit"&gt;Percona Toolkit&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2217" target="_blank" rel="noopener noreferrer"&gt;PT-2217&lt;/a&gt;: When running pt-mongodb-summary against psmdb6.0/psmdb7.0 it gives error “BSON field ‘getCmdLineOpts.recordStats’ is an unknown field” Please note that PT tool does not work with MongoDB 6.0+.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.5.X&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PT 3.6.0&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2309" target="_blank" rel="noopener noreferrer"&gt;PT-2309&lt;/a&gt;: When the primary key is a UUID binary 16 column pt-table-sync hits with error “Cannot nibble table &lt;code&gt;db_name&lt;/code&gt;.&lt;code&gt;table_name&lt;/code&gt; because MySQL chose no index instead of the &lt;code&gt;PRIMARY&lt;/code&gt;”&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.5.7&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PT 3.5.8&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2305" target="_blank" rel="noopener noreferrer"&gt;PT-2305&lt;/a&gt;: pt-online-schema-change should error out if server is a slave/replica in row based replication. This can lead to source/replica becoming inconsistent if there are writes on source when the tool runs on replica.&lt;/p&gt;
&lt;p&gt;Please find the example below where data loss is seen:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Set up a classic source-replica topology&lt;/li&gt;
&lt;li&gt;Make sure &lt;code&gt;binlog_format=row&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Create a table on the master and add sufficient data so that pt-osc takes a little time to run.&lt;/li&gt;
&lt;li&gt;Start pt-osc on the slave, and execute updates/deletes on the master.&lt;/li&gt;
&lt;li&gt;Once pt-osc is done, check the table checksum or table count to verify the data differences. Please check the below output with row differences:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master [localhost:22536] {msandbox} (test) &gt; select count(*) from sbtest1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| count(*) |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|   999999 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.58 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave1 [localhost:22537] {msandbox} (test) &gt; select count(*) from sbtest1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| count(*) |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  1000000 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.40 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.5.7&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PT 3.6.0&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2284" target="_blank" rel="noopener noreferrer"&gt;PT-2284&lt;/a&gt;: When running pt-kill with the –daemonize option, if the query has character like ‘柏木’, pt-kill process exists with message “Wide character in printf at /usr/bin/pt-kill line 7508.”&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.5.7&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PT 3.6.0&lt;/em&gt;
&lt;em&gt;Workaround : Run pt-kill manually without the --daemonize option.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2089" target="_blank" rel="noopener noreferrer"&gt;PT-2089&lt;/a&gt;: When SHOW ENGINE INNODB STATUS reports garbled UTF characters then pt-deadlock-logger crashes with “server ts thread txn_id txn_time user hostname ip db tbl idx lock_type lock_mode wait_hold victim query”&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.3.1, 3.5.7&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-monitoring-and-management-pmm"&gt;Percona Monitoring and Management (PMM)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12806" target="_blank" rel="noopener noreferrer"&gt;PMM-12806&lt;/a&gt;: We can’t tune VictoriaMetrics running inside PMM since PMM does not honor the environment variables for VictoriaMetrics. So PMM pre-defines certain flags that allow users to set all other &lt;a href="https://docs.victoriametrics.com/#list-of-command-line-flags" target="_blank" rel="noopener noreferrer"&gt;VictoriaMetrics parameters&lt;/a&gt; as environment variables.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Example:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;To set downsampling, use the downsampling.period parameter as follows:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e VM_downsampling_period=20d:10m,120d:2h&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This instructs VictoriaMetrics to &lt;a href="https://docs.victoriametrics.com/#deduplication" target="_blank" rel="noopener noreferrer"&gt;deduplicate&lt;/a&gt; samples older than 20 days at 10-minute intervals and samples older than 120 days at two-hour intervals.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.40.1&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PMM 2.41.2&lt;/em&gt;&lt;/p&gt;
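&lt;p&gt;With the fix in place, passing such a variable at container start might look like the following (container name, port mapping, and image tag here are illustrative):&lt;/p&gt;

```shell
# Illustrative: hand a VictoriaMetrics flag to PMM Server as an env variable
docker run -d --name pmm-server \
  -e VM_downsampling_period=20d:10m,120d:2h \
  -p 443:443 \
  percona/pmm-server:2
```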
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12805" target="_blank" rel="noopener noreferrer"&gt;PMM-12805&lt;/a&gt;: When monitoring MongoDB servers, logs might get filled with the a CommandNotSupportOnView message, As a result, disk space fills up.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.40.0, 2.41.0&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PMM 2.41.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12809" target="_blank" rel="noopener noreferrer"&gt;PMM-12809&lt;/a&gt;: Common Vulnerabilities and Exposures (CVE) found in PMM gRPC(Remote Procedure Call (RPC)) which impacts PMM v2.40.1+&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.41.0+&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PMM 2.41.2&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-xtrabackup"&gt;Percona XtraBackup&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3024" target="_blank" rel="noopener noreferrer"&gt;PXB-3024&lt;/a&gt;: Backups are not reliable when running on a secondary node of Group Replication(GR) since –lock-ddl does not have any effect on secondary node of GR.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.28-20, 8.0.31-24&lt;/em&gt;
&lt;em&gt;Fixed Version: 8.0.32-26&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-2928" target="_blank" rel="noopener noreferrer"&gt;PXB-2928&lt;/a&gt;: Xtrabackup crashes with signal 11 when taking a backup using &lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/page-tracking.html#install-the-component" target="_blank" rel="noopener noreferrer"&gt;–page-tracking&lt;/a&gt; option. So if you are using this option while taking backup then upgrading to PXB 8.0.31 is recommended since there is no workaround available to this issue at the moment.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.29-22&lt;/em&gt;
&lt;em&gt;Fixed Version: 8.0.31-24&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3037" target="_blank" rel="noopener noreferrer"&gt;PXB-3037&lt;/a&gt;: In order to assure a consistent replication state, &lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/make-backup-in-replication-env.html?h=safe+backup#the-safe-slave-backup-option" target="_blank" rel="noopener noreferrer"&gt;–safe-slave-backup&lt;/a&gt; option stops the replication SQL thread and waits to start backing up until slave_open_temp_tables in SHOW STATUS is zero. If there are no open temporary tables, the backup will take place, otherwise the SQL thread will be started and stopped until there are no open temporary tables. The backup will fail if slave_open_temp_tables does not become zero after –safe-slave-backup-timeout seconds (defaults to 300 seconds). The replication SQL thread will be restarted when the backup finishes, But due to this bug if backup fails in between then SQL thread is not getting restarted. So restarting the SQL thread manually is required.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.31-24, 8.0.35-30&lt;/em&gt;
&lt;em&gt;Fixed Version: No ETA&lt;/em&gt;&lt;/p&gt;
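&lt;p&gt;Until a fix lands, the manual recovery after a failed backup is a single statement on the replica (use START SLAVE SQL_THREAD on versions before MySQL 8.0.22):&lt;/p&gt;

```sql
-- Restart only the replication SQL thread left stopped by the failed backup
START REPLICA SQL_THREAD;
```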
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-2733" target="_blank" rel="noopener noreferrer"&gt;PXB-2733&lt;/a&gt;: backup-lock-timeout and backup-lock-retry-count do not work.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.4.24, 8.0.27-19, 8.0.35-30&lt;/em&gt;
&lt;em&gt;Fixed Version: No ETA&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-492" target="_blank" rel="noopener noreferrer"&gt;K8SPG-492&lt;/a&gt;: Restore job created by PerconaPGRestore doesn’t inherit .spec.instances[].tolerations since restore Job pod get stuck in pending and causing down time.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.2.0&lt;/em&gt;
&lt;em&gt;Fixed Version: It is expected to be fixed by PG operator 2.4.0&lt;/em&gt;
&lt;em&gt;Workaround: Remove the &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" target="_blank" rel="noopener noreferrer"&gt;taint&lt;/a&gt;, wait until the restore container is scheduled, and then re-add it.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPSMDB-958" target="_blank" rel="noopener noreferrer"&gt;K8SPSMDB-958&lt;/a&gt;: PMM fails to monitor mongos due to lack of permissions.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 1.14.0&lt;/em&gt;
&lt;em&gt;Fixed Version: 1.15.0&lt;/em&gt;&lt;/p&gt;
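&lt;p&gt;The taint workaround for K8SPG-492 above can be sketched with kubectl. This is an illustrative sequence only; the node name and taint key/value are hypothetical placeholders for your own:&lt;/p&gt;

```
# Remove the taint (the trailing "-" deletes it); node and key are examples
kubectl taint nodes worker-1 dedicated=postgres:NoSchedule-

# Watch until the restore Job pod leaves Pending and is scheduled
kubectl get pods -w

# Re-add the same taint afterwards
kubectl taint nodes worker-1 dedicated=postgres:NoSchedule
```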
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-291" target="_blank" rel="noopener noreferrer"&gt;K8SPG-291&lt;/a&gt;: Modifying existing backup schedule does not work with PG operator v1.3.0&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 1.3.0&lt;/em&gt;
&lt;em&gt;Fixed Version: 1.4.0&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-286" target="_blank" rel="noopener noreferrer"&gt;K8SPG-286&lt;/a&gt;: When requiring TLS for all connections, PMM client fails to connect with “no pg_hba.conf entry”.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 1.2.0, 1.3.0, 2.0.0&lt;/em&gt;
&lt;em&gt;Fixed Version: 1.4.0&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Forums&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>Percona</category>
      <category>opensource</category>
      <category>PMM</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2024/02/BugReportJanuary2024_hu_1daad131906eaa43.jpg"/>
      <media:content url="https://percona.community/blog/2024/02/BugReportJanuary2024_hu_d0b4df4b2e0a5579.jpg" medium="image"/>
    </item>
    <item>
      <title>Unexpected Stalled Upgrade to MySQL 8.0</title>
      <link>https://percona.community/blog/2024/01/26/unexpected-stalled-upgrade-to-mysql-8-0/</link>
      <guid>https://percona.community/blog/2024/01/26/unexpected-stalled-upgrade-to-mysql-8-0/</guid>
      <pubDate>Fri, 26 Jan 2024 00:00:00 UTC</pubDate>
      <description>A multi-tenant database is a database that serves multiple clients, or tenants, who share the same database schema but have separate data sets. One way to achieve data isolation for each client is to create a separate MySQL database for each tenant.</description>
      <content:encoded>&lt;p&gt;A multi-tenant database is a database that serves multiple clients, or tenants, who share the same database schema but have separate data sets. One way to achieve data isolation for each client is to create a separate MySQL database for each tenant.&lt;/p&gt;
&lt;p&gt;Some advantages of this approach are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It allows for easy backup and restore of individual tenant data.&lt;/li&gt;
&lt;li&gt;It simplifies the database administration and maintenance tasks, as each database can be managed independently.&lt;/li&gt;
&lt;li&gt;Scaling is easily achieved by adding more database servers and distributing tenant databases across them.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This approach requires a large number of tables on each server. Combined with the default value of &lt;code&gt;innodb_file_per_table=ON&lt;/code&gt;, this results in a large number of files that affects &lt;a href="https://percona.community/blog/2019/07/23/impact-of-innodb_file_per_table-option-on-crash-recovery-time" target="_blank" rel="noopener noreferrer"&gt;crash recovery time&lt;/a&gt; or &lt;a href="https://www.percona.com/blog/using-percona-xtrabackup-mysql-instance-large-number-tables" target="_blank" rel="noopener noreferrer"&gt;Percona XtraBackup&lt;/a&gt; execution.&lt;/p&gt;
&lt;p&gt;This blog post describes how to take care of a large number of files when upgrading to MySQL 8.0 in-place.&lt;/p&gt;
&lt;h3 id="version-selection"&gt;Version Selection&lt;/h3&gt;
&lt;p&gt;A steady stream of MySQL 8.0 minor releases provides improvements and refactoring of new MySQL 8.0 features. However, some of these releases introduce incompatibilities that require corresponding changes on the application side. To limit the scope of the application-side changes, we chose MySQL 8.0.25. This was our first step towards the major version 8.0.&lt;/p&gt;
&lt;h2 id="upgrade-in-place"&gt;Upgrade In-Place&lt;/h2&gt;
&lt;p&gt;MySQL 8.0 supports an in-place upgrade procedure. According to the &lt;a href="https://docs.percona.com/percona-server/8.0/upgrading-guide.html" target="_blank" rel="noopener noreferrer"&gt;upgrading guide&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;An in-place upgrade is performed by using existing data on the server and involves the following actions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Stopping the MySQL 5.7 server&lt;/li&gt;
&lt;li&gt;Replacing the old binaries with MySQL 8.0 binaries&lt;/li&gt;
&lt;li&gt;Starting the MySQL 8.0 server with the same data files.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While an in-place upgrade may not be suitable for all environments, especially those environments with many variables to consider, the upgrade should work in most cases.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;As an exception, in the case of an environment with a large number of tables, the upgrade in-place may get &lt;a href="https://forums.mysql.com/read.php?35,697581" target="_blank" rel="noopener noreferrer"&gt;stalled for weeks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Below we describe how to debug and resolve such an issue.&lt;/p&gt;
&lt;h3 id="encountering-the-issue"&gt;Encountering the Issue&lt;/h3&gt;
&lt;p&gt;In our test environment, we encountered a similar issue. Despite steady CPU usage, the in-place upgrade looked stalled. We monitored the upgrade progress by counting the files modified in the last 24 hours. Monitoring revealed low modification rates, like&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;find /var/lib/mysql -name "*.ibd" -mtime -1 | wc -l
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;14887&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;that kept decreasing. Although the InnoDB files continued to be modified, the modification rate was far too low to be practical.&lt;/p&gt;
&lt;h3 id="investigating-the-issue"&gt;Investigating the Issue&lt;/h3&gt;
&lt;p&gt;To debug this problem we used the Linux &lt;a href="https://percona.community/blog/2020/02/05/finding-mysql-scaling-problems-using-perf" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;perf&lt;/code&gt;&lt;/a&gt; tool. While the &lt;code&gt;mysqld&lt;/code&gt; process was running during upgrade:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;we collected &lt;code&gt;perf&lt;/code&gt; data&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perf record -F 10 -o mysqld.perf -p $(pidof mysqld) -- sleep 20;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[ perf record: Woken up 1 times to write data ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[ perf record: Captured and wrote 0.256 MB mysqld.perf (1016 samples) ]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;and produced the &lt;code&gt;perf&lt;/code&gt; report&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perf report --input mysqld.perf --stdio
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# To display the perf.data header info, please use --header/--header-only options.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Total Lost Samples: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Samples: 1K of event 'cpu-clock'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Event count (approx.): 101600000000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Overhead Command Shared Object Symbol
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# ........ ....... .................. ................................
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 34.55% mysqld libc-2.17.so [.] __sched_yield
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 14.86% mysqld [kernel.kallsyms] [k] __raw_spin_unlock_irq
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 11.32% mysqld [kernel.kallsyms] [k] system_call_after_swapgs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 11.32% mysqld mysqld [.] Fil_shard::reserve_open_slot
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To find out why the &lt;code&gt;mysqld&lt;/code&gt; process was stuck in the &lt;code&gt;Fil_shard::reserve_open_slot&lt;/code&gt; call, we checked the &lt;a href="https://github.com/percona/percona-server/blob/Percona-Server-8.0.25-15/storage/innobase/fil/fil0fil.cc#L2125" target="_blank" rel="noopener noreferrer"&gt;Percona Server source code&lt;/a&gt;, which shows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the function code&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/** Wait for an empty slot to reserve for opening a file.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;@return true on success. */
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bool Fil_shard::reserve_open_slot(size_t shard_id) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; size_t expected = EMPTY_OPEN_SLOT;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; return s_open_slot.compare_exchange_weak(expected, shard_id);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;and the corresponding comments&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The data structure (Fil_shard) that keeps track of the tablespace ID to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;fil_space_t* mapping are hashed on the tablespace ID. The tablespace name to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;fil_space_t* mapping is stored in the same shard. A shard tracks the flushing
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;and open state of a file. When we run out open file handles, we use a ticketing
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;system to serialize the file open, see Fil_shard::reserve_open_slot() and
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Fil_shard::release_open_slot().&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Apparently, the stalled upgrade process hit the open files limit, given the large number of files in our environment.&lt;/p&gt;
&lt;h3 id="resolving-the-issue"&gt;Resolving the Issue&lt;/h3&gt;
&lt;p&gt;To prevent the &lt;code&gt;mysqld&lt;/code&gt; upgrade process from running out of open file handles, we followed &lt;a href="https://www.percona.com/blog/using-percona-xtrabackup-mysql-instance-large-number-tables" target="_blank" rel="noopener noreferrer"&gt;Percona guidance&lt;/a&gt; for setting the open files limit:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Counted the files:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;find /var/lib/mysql/ -name "*.ibd" | wc -l
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;324780&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;and added another 1000 to this number for other miscellaneous open file needs.&lt;/p&gt;
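&lt;p&gt;The sizing rule above (the .ibd file count plus headroom for miscellaneous descriptors) can be sketched as a small shell computation. The datadir path is an assumed default; point it at your own data directory:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: derive an innodb_open_files value from the number of .ibd files
# plus 1000 spare descriptors for other open file needs.
# DATADIR is an assumption; override it for your environment.
DATADIR="${DATADIR:-/var/lib/mysql}"
ibd_count=$(find "$DATADIR" -name '*.ibd' 2>/dev/null | wc -l)
limit=$(( ibd_count + 1000 ))
echo "innodb_open_files = $limit"
```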
&lt;ol start="2"&gt;
&lt;li&gt;Increased the &lt;code&gt;innodb_open_files&lt;/code&gt; limit in two places:&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;added a corresponding line to configuration file &lt;code&gt;/etc/my.cnf&lt;/code&gt; like&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_open_files = 325780&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;added a corresponding line to the &lt;code&gt;systemd&lt;/code&gt; configuration file such as &lt;code&gt;/etc/systemd/system/mysqld.service.d/override.conf&lt;/code&gt; like&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Service]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;LimitNOFILE = 325780&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;With these adjustments, our upgrade processed a terabyte of data in just a few hours. To provide more visibility into the upgrade process, we also increased the default error log verbosity by adding another line to the &lt;code&gt;/etc/my.cnf&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log_error_verbosity = 3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The increased verbosity enabled progress monitoring in the MySQL error log during the upgrade, like:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:09.331924Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.25-15) starting as process 16034
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:09.353871Z 1 [System] [MY-011012] [Server] Starting upgrade of data directory.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:09.353986Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:19.412572Z 1 [Note] [MY-012206] [InnoDB] Found 324780 '.ibd' and 0 undo files
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:19.412757Z 1 [Note] [MY-012207] [InnoDB] Using 17 threads to scan 324780 tablespace files
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:28.764032Z 0 [Note] [MY-012200] [InnoDB] Thread# 0 - Checked 15615/20298 files
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:31.718051Z 0 [Note] [MY-012201] [InnoDB] Checked 20298 files
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:31.718440Z 1 [Note] [MY-012208] [InnoDB] Completed space ID check of 324780 files.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:48.821432Z 1 [Note] [MY-012922] [InnoDB] Waiting for purge to start
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:48.878058Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:48.885203Z 1 [Note] [MY-011088] [Server] Data dictionary initializing version '80023'.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T00:27:49.187508Z 1 [Note] [MY-010337] [Server] Created Data Dictionary for upgrade
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T01:57:55.312683Z 2 [System] [MY-011003] [Server] Finished populating Data Dictionary tables with data.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T01:59:15.187709Z 5 [System] [MY-013381] [Server] Server upgrade from '50700' to '80025' started.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T03:01:09.880932Z 5 [System] [MY-013381] [Server] Server upgrade from '50700' to '80025' completed.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2023-10-28T03:01:13.905459Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.25-15' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona Server (GPL), Release 15, Revision a558ec2.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="discussion"&gt;Discussion&lt;/h3&gt;
&lt;p&gt;While we were contemplating whether this was a feature or a bug, MySQL release 8.0.28 refactored the related &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_open_files" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;innodb_open_files&lt;/code&gt;&lt;/a&gt; code. Further details are provided in the corresponding open source commit &lt;a href="https://github.com/percona/percona-server/commit/b184bd30f94df30a8bf178fc327590c5865d33bc" target="_blank" rel="noopener noreferrer"&gt;WL#14591 InnoDB: Make system variable &lt;code&gt;innodb_open_files&lt;/code&gt; dynamic&lt;/a&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;- The `innodb_open_files` system variable can now be set with a dynamic SQL procedure `innodb_set_open_files_limit(N)`. If the new value is too low, an error is returned to client with the minimum value presented. If the value is out of bounds or of incorrect type, it will be reported as error also.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;- `Fil_system::set_open_files_limit` was added to allow changes to the global opened files limit. The `Fil_system::m_max_n_open` is atomic now and extracted to a separate class `fil::detail::Open_files_limit`, instantiated as `Fil_system::m_open_files_limit`.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;- `Fil_shard::reserve_open_slot`, Fil_shard::release_open_slot and static Fil_shard::s_open_slot were removed. Now we have CAS-based system of assuring the opened files will not exceed the limit set.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Thus, the new MySQL 8.0.28 feature, the dynamic &lt;code&gt;innodb_open_files&lt;/code&gt; variable, eliminated the need to adjust the open files limit in preparation for a MySQL 8.0 upgrade.&lt;/p&gt;
&lt;h2 id="conclusions"&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;Lessons learned:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Prepare for an in-place MySQL 8.0 upgrade by taking a backup of the data directory.&lt;/li&gt;
&lt;li&gt;Take advantage of the Percona Server open source code.&lt;/li&gt;
&lt;li&gt;Follow guidance and advice posted in Percona blogs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Alexandre Vaniachine</author>
      <category>Intermediate Level</category>
      <category>MySQL</category>
      <category>Percona Server for MySQL</category>
      <category>upgrade</category>
      <media:thumbnail url="https://percona.community/blog/2024/01/unexpected-stalled-upgrade-to-mysql-8-0_hu_bbbfb5f7838bc57d.jpg"/>
      <media:content url="https://percona.community/blog/2024/01/unexpected-stalled-upgrade-to-mysql-8-0_hu_b802951e238a2edd.jpg" medium="image"/>
    </item>
    <item>
      <title>Our Top Picks from the Kubernetes 1.29 Release</title>
      <link>https://percona.community/blog/2024/01/12/our-top-picks-from-the-kubernetes-release/</link>
      <guid>https://percona.community/blog/2024/01/12/our-top-picks-from-the-kubernetes-release/</guid>
      <pubDate>Fri, 12 Jan 2024 00:00:00 UTC</pubDate>
      <description>The latest Kubernetes version, 1.29, was released on December 13th 2023. Inspired by the Mandala and symbolizing universal perfection, it concludes the 2023 release calendar. This version comes with various exciting improvements, many of which will be helpful for users who run databases on Kubernetes.</description>
      <content:encoded>&lt;p&gt;The latest &lt;strong&gt;Kubernetes&lt;/strong&gt; version, &lt;strong&gt;1.29&lt;/strong&gt;, was released on December 13th 2023. Inspired by the Mandala and symbolizing universal perfection, it concludes the 2023 release calendar. This version comes with various exciting improvements, many of which will be helpful for users who run databases on Kubernetes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/01/k8s-mandala-medium.png" alt="Mandala" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Figure 1&lt;/strong&gt; - Mandala created in Excalidraw, not perfectly symmetrical.&lt;/p&gt;
&lt;p&gt;Here, we highlight this latest release’s four key features and improvements. Let’s take a look at them together.&lt;/p&gt;
&lt;h2 id="in-place-update-of-pod-resources"&gt;In-Place Update of Pod Resources&lt;/h2&gt;
&lt;p&gt;This alpha feature allows users to change container resource requests and limits without restarting the pod. It greatly simplifies scaling and opens new opportunities for autoscaling tools such as the Horizontal Pod Autoscaler (HPA), the Vertical Pod Autoscaler (VPA), and Kubernetes Event-driven Autoscaling (KEDA). It removes barriers to scaling applications that are not easy to restart.&lt;/p&gt;
&lt;p&gt;When resource resizing is not possible in-place, there are clear strategies for users and controllers (like StatefulSets, JobController, etc.) to handle the situation effectively.&lt;/p&gt;
&lt;p&gt;It was first introduced in 1.27 and remains in alpha, as it requires additional architectural changes. It also has &lt;a href="https://github.com/kubernetes/kubernetes/pull/119665" target="_blank" rel="noopener noreferrer"&gt;performance improvements&lt;/a&gt; and comes with &lt;a href="https://github.com/kubernetes/kubernetes/pull/112599" target="_blank" rel="noopener noreferrer"&gt;Windows containers support&lt;/a&gt;.
Read more about this in the Kubernetes Enhancement Proposal (&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1287-in-place-update-pod-resources" target="_blank" rel="noopener noreferrer"&gt;KEP&lt;/a&gt;) and the &lt;a href="https://github.com/kubernetes/enhancements/issues/1287" target="_blank" rel="noopener noreferrer"&gt;issue #1287&lt;/a&gt; created to add this feature.&lt;/p&gt;
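&lt;p&gt;As a sketch of what the alpha API looks like, a pod can declare per-resource resize policies (the feature requires the InPlacePodVerticalScaling feature gate; the names and values below are illustrative, not a recommendation):&lt;/p&gt;

```
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo                   # hypothetical name
spec:
  containers:
  - name: db
    image: mysql:8.0
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # resize CPU in place, no restart
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart the container
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
```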
&lt;h2 id="kubernetes-volumeattributesclass-modifyvolume"&gt;Kubernetes VolumeAttributesClass ModifyVolume&lt;/h2&gt;
&lt;p&gt;The Kubernetes v1.29 release introduces an alpha feature enabling modification of volume attributes, like IOPS and throughput, by altering the volumeAttributesClassName in a PersistentVolumeClaim (PVC). This simplifies volume management by allowing direct updates within Kubernetes, avoiding the need for external provider API management. Previously, users had to create a new StorageClass resource and migrate to a new PVC; now, changes can be made directly in the existing PVC.&lt;/p&gt;
&lt;p&gt;Discover further details in the &lt;a href="https://github.com/kubernetes/enhancements/pull/3780" target="_blank" rel="noopener noreferrer"&gt;KEP&lt;/a&gt; and issue &lt;a href="https://github.com/kubernetes/enhancements/issues/3751" target="_blank" rel="noopener noreferrer"&gt;#3751&lt;/a&gt;, which was created for the inclusion of this feature.&lt;/p&gt;
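&lt;p&gt;A minimal sketch of how this might look (API group and fields per the v1.29 alpha KEP; the driver name and parameters below are placeholders for whatever your CSI driver supports):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: gold
driverName: ebs.csi.example.com   # placeholder CSI driver name
parameters:
  iops: "16000"
  throughput: "600"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  # switching this name to another class modifies IOPS/throughput in place
  volumeAttributesClassName: gold
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
&lt;/code&gt;&lt;/pre&gt;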
&lt;h2 id="readwriteoncepod-persistentvolume-access-mode"&gt;ReadWriteOncePod PersistentVolume Access Mode&lt;/h2&gt;
&lt;p&gt;Kubernetes offers access modes for Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), including ReadWriteOnce, ReadOnlyMany, and ReadWriteMany. In particular, ReadWriteOnce restricts volume access to a single node, enabling multiple pods on that node to read from and write to the same volume concurrently. This setup ensures exclusive volume access on a per-node basis while allowing shared volume usage within the node. However, this introduces a potential issue, especially for applications that require exclusive access by a single pod.
In this release, the ReadWriteOncePod access mode for PersistentVolumeClaims has become stable. Now that it is stable, a PVC can be configured to be mounted by a single Pod exclusively.&lt;/p&gt;
&lt;p&gt;Here are the Kubernetes Enhancement Proposal (&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode" target="_blank" rel="noopener noreferrer"&gt;KEP&lt;/a&gt;) and issue &lt;a href="https://github.com/kubernetes/enhancements/issues/2485" target="_blank" rel="noopener noreferrer"&gt;#2485&lt;/a&gt; that led to the development of this feature.&lt;/p&gt;
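&lt;p&gt;For illustration, a PVC requesting the new access mode looks like the following (assuming the volume's CSI driver supports ReadWriteOncePod):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer
spec:
  accessModes:
  - ReadWriteOncePod   # only one Pod across the whole cluster may use the volume
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;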
&lt;h2 id="make-kubernetes-aware-of-the-loadbalancer-behavior"&gt;Make Kubernetes aware of the LoadBalancer behavior&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;kube-proxy’s&lt;/strong&gt; handling of LoadBalancer Service External IPs is set to change. Traditional methods, such as IPVS and iptables, bind these IPs to nodes, optimizing traffic but causing issues with certain cloud providers and bypassing key Load Balancer features.&lt;/p&gt;
&lt;p&gt;There are numerous problems with existing behavior:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Some cloud providers (Scaleway, Tencent Cloud, …) use the LB’s external IP (or a private IP) as the source IP when sending packets to the cluster. This is a problem in the ipvs mode of kube-proxy, since the IP is bound to an interface and health checks from the LB never come back.&lt;/li&gt;
&lt;li&gt;Some cloud providers (DigitalOcean, Scaleway, …) have features at the LB level (TLS termination, PROXY protocol, …). Bypassing the LB means missing these features when the packet arrives at the service, leading to protocol errors.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The solution is to add a new field, ipMode, to the loadBalancer field of a Service’s status. kube-proxy will use this field to decide whether to bind the Load Balancer’s External IP to the node (in both IPVS and iptables mode). The value VIP is the default (used when the field is not set) and keeps the current behavior; the value Proxy disables the shortcut. This change allows more flexible handling of External IPs, maintaining current behavior as the default while offering an alternative that avoids these issues.&lt;/p&gt;
&lt;p&gt;Read more about this in its Kubernetes Enhancement Proposal (&lt;a href="https://github.com/kubernetes/enhancements/tree/b103a6b0992439f996be4314caf3bf7b75652366/keps/sig-network/1860-kube-proxy-IP-node-binding#kep-1860-make-kubernetes-aware-of-the-loadbalancer-behaviour" target="_blank" rel="noopener noreferrer"&gt;KEP&lt;/a&gt;).&lt;/p&gt;
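&lt;p&gt;For illustration, the new field lives in the Service status, which is written by the cloud provider’s controller rather than by users (the IP below is an example value):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
      ipMode: Proxy   # kube-proxy does not bind this IP to the node
&lt;/code&gt;&lt;/pre&gt;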
&lt;p&gt;If you are interested in learning about databases on Kubernetes, you can start by &lt;a href="https://www.percona.com/blog/run-mysql-in-kubernetes-solutions-pros-and-cons/" target="_blank" rel="noopener noreferrer"&gt;running MySQL in Kubernetes&lt;/a&gt;. Explore the solutions, and weigh the pros and cons.
Also, discover our &lt;a href="https://www.percona.com/blog/cloud-native-predictions-for-2024/" target="_blank" rel="noopener noreferrer"&gt;predictions for Cloud Native&lt;/a&gt; technologies for this year.&lt;/p&gt;</content:encoded>
      <author>Sergey Pronin</author>
      <author>Edith Puclla</author>
      <category>edith_puclla</category>
      <category>sergey_pronin</category>
      <category>kubernetes</category>
      <category>release</category>
      <media:thumbnail url="https://percona.community/blog/2024/01/k8s-mandala-medium_hu_ccf52cc202bf7b2b.jpg"/>
      <media:content url="https://percona.community/blog/2024/01/k8s-mandala-medium_hu_18b7d20759a07139.jpg" medium="image"/>
    </item>
    <item>
      <title>Data on Kubernetes Community initiatives: Automated storage scaling</title>
      <link>https://percona.community/blog/2024/01/10/data-on-kubernetes-community-initiatives/</link>
      <guid>https://percona.community/blog/2024/01/10/data-on-kubernetes-community-initiatives/</guid>
      <pubDate>Wed, 10 Jan 2024 00:00:00 UTC</pubDate>
      <description>In the world of Kubernetes, where everything evolves quickly, automated storage scaling stands out as a critical challenge. Members of the Data on Kubernetes Community have proposed a solution to address this issue for Kubernetes operators.</description>
      <content:encoded>&lt;p&gt;In the world of Kubernetes, where everything evolves quickly. Automated storage scaling stands out as a critical challenge. Members of the &lt;a href="https://dok.community/" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes Community&lt;/a&gt; have proposed a solution to address this issue for Kubernetes operators.&lt;/p&gt;
&lt;p&gt;If, like me, you are hearing about automated storage scaling for the first time, this will help you understand it better:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Storage scaling in Kubernetes Operators&lt;/strong&gt; refers to the ability of an application running on Kubernetes to adjust its storage capacity automatically based on demand. In other words, it is about ensuring that an application has the right amount of storage available at any given time, optimizing for performance, cost, and operational efficiency, and doing this as automatically as possible.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2024/01/dok-initiatives_hu_4bcb895cefa55e4d.png 480w, https://percona.community/blog/2024/01/dok-initiatives_hu_7f5c00b0f4b03204.png 768w, https://percona.community/blog/2024/01/dok-initiatives_hu_3ea9e18a22740b26.png 1400w"
src="https://percona.community/blog/2024/01/dok-initiatives.png" alt="DoKC Initiatives" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As databases grow increasingly integral, the absence of unified solutions for storage scaling is becoming more evident. Let’s explore some existing solutions and their limitations:&lt;/p&gt;
&lt;h2 id="pvc-autoresizer"&gt;pvc-autoresizer&lt;/h2&gt;
&lt;p&gt;This project detects when the amount of free storage falls below a threshold and resizes the affected &lt;strong&gt;PersistentVolumeClaims&lt;/strong&gt; (PVCs). &lt;a href="https://github.com/topolvm/pvc-autoresizer" target="_blank" rel="noopener noreferrer"&gt;pvc-autoresizer&lt;/a&gt; is an active open source project on GitHub.&lt;/p&gt;
&lt;p&gt;There are certain downsides:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It works with PVCs only; it does not work with StatefulSets and has no integration with Kubernetes Operators.&lt;/li&gt;
&lt;li&gt;It requires the Prometheus stack to be deployed.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Percona wrote a &lt;a href="https://www.percona.com/blog/storage-autoscaling-with-percona-operator-for-mongodb/" target="_blank" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; about pvc-autoresizer to automate storage scaling for MongoDB clusters on Kubernetes.&lt;/p&gt;
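&lt;p&gt;A sketch of what a managed PVC might look like (annotation names as documented in the pvc-autoresizer project at the time of writing; check its README for the current syntax, and note that the StorageClass must allow volume expansion):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongod-data
  annotations:
    resize.topolvm.io/threshold: 20%        # resize when free space drops below 20%
    resize.topolvm.io/increase: 20Gi        # grow by 20Gi each time
    resize.topolvm.io/storage_limit: 200Gi  # never grow beyond this size
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable-sc           # must have allowVolumeExpansion: true
  resources:
    requests:
      storage: 100Gi
&lt;/code&gt;&lt;/pre&gt;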
&lt;h2 id="ebs-params-controller"&gt;EBS params controller&lt;/h2&gt;
&lt;p&gt;This controller provides a way to control IOPS and throughput parameters for EBS volumes provisioned by EBS CSI Driver with annotations on corresponding PersistentVolumeClaim objects in Kubernetes. It also sets some annotations on PVCs backed by EBS CSI Driver representing current parameters and last modification status and timestamps.&lt;/p&gt;
&lt;p&gt;Find more about &lt;a href="https://github.com/Altinity/ebs-params-controller" target="_blank" rel="noopener noreferrer"&gt;EBS params controller on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="kubernetes-volume-autoscaler"&gt;Kubernetes Volume Autoscaler&lt;/h2&gt;
&lt;p&gt;This automatically increases the size of a Persistent Volume Claim (PVC) in Kubernetes when it is nearly full (either in space or inode usage). It is a solution similar to pvc-autoresizer. Check out more about &lt;a href="https://github.com/DevOps-Nirvana/Kubernetes-Volume-Autoscaler" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Volume Autoscaler&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="kubernetes-event-driven-autoscalingkeda"&gt;Kubernetes Event-driven Autoscaling(KEDA)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://keda.sh/" target="_blank" rel="noopener noreferrer"&gt;KEDA&lt;/a&gt; performs horizontal scaling for various resources in k8s, including custom resources. The metric tracking component is already figured out, but unfortunately, it does not work with vertical scaling or storage scaling yet. We opened &lt;a href="https://github.com/kedacore/keda/issues/5232" target="_blank" rel="noopener noreferrer"&gt;an issue in GitHub&lt;/a&gt; to start the discussion.&lt;/p&gt;
&lt;p&gt;As you can see, there are limitations to performing automated storage scaling. To address this gap, the &lt;strong&gt;Data on Kubernetes community&lt;/strong&gt; wants to develop a solution that solves practical problems and contributes to the open source community.&lt;/p&gt;
&lt;p&gt;We’re tackling the significant challenge of unexpected disk usage alerts and potential system shutdowns due to insufficient volume space, a common issue in Kubernetes-based databases.&lt;/p&gt;
&lt;h2 id="possible-solutions"&gt;Possible Solutions&lt;/h2&gt;
&lt;p&gt;The following possible solutions were proposed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Operators must be capable of changing the storage size when the Custom Resource is changed.&lt;/li&gt;
&lt;li&gt;Operators must create resources following certain standards, such as applying annotations indicating which fields should be changed.&lt;/li&gt;
&lt;li&gt;A third-party component (Scaler) will take care of monitoring storage consumption and changing the field in the CR of the DB.&lt;/li&gt;
&lt;/ol&gt;
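&lt;p&gt;To make the third option concrete, here is a purely hypothetical illustration (the resource kind, API group, and annotation are invented for this sketch): the operator exposes a storage size field in its Custom Resource and annotates it so that a third-party Scaler knows which field to change:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: example.com/v1      # hypothetical operator API group
kind: Database
metadata:
  name: my-cluster
  annotations:
    # hypothetical hint telling the Scaler which field controls storage size
    scaler.example.com/storage-size-field: spec.storage.size
spec:
  storage:
    size: 100Gi   # the Scaler increases this when usage crosses a threshold
&lt;/code&gt;&lt;/pre&gt;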
&lt;p&gt;Our goal as a community is to develop a fully automated solution to prevent these inconveniences and failures.&lt;/p&gt;
&lt;h2 id="final-thoughts"&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;Once a new solution is validated and proven functional, it will benefit many communities, enabling them to integrate it with their operators. Additionally, it will present an excellent opportunity for Percona to incorporate it into our Operators, enhancing efficiency and facilitating automated storage scaling.&lt;/p&gt;
&lt;p&gt;We invite those interested, especially in this particular project, to join us. This is an opportunity to be at the forefront of shaping the automated scaling solutions in Kubernetes. You can join the &lt;a href="https://join.slack.com/t/dokcommunity/shared_invite/zt-2a0ahuhsh-MdZ4OpF4nr_s4kyOwTurVw" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes community&lt;/a&gt; on Slack, specifically on the #SIG-Operator.&lt;/p&gt;
&lt;p&gt;Are you interested in understanding Storage Autoscaling in databases? Explore our detailed example of &lt;a href="https://www.percona.com/blog/storage-autoscaling-with-percona-operator-for-mongodb/" target="_blank" rel="noopener noreferrer"&gt;Storage Autoscaling using the Percona Operator for MongoDB&lt;/a&gt;. For questions or discussions, feel free to join our experts on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Sergey Pronin</author>
      <author>Edith Puclla</author>
      <category>edith_puclla</category>
      <category>sergey_pronin</category>
      <category>kubernetes</category>
      <category>dok</category>
      <category>storage</category>
      <category>operators</category>
      <media:thumbnail url="https://percona.community/blog/2024/01/dok-initiatives_hu_9ee5c4ee06283257.jpg"/>
      <media:content url="https://percona.community/blog/2024/01/dok-initiatives_hu_5d481efe00c9cbde.jpg" medium="image"/>
    </item>
    <item>
      <title>Volunteering as a Program Committee Member for Data on Kubernetes Day Europe 2024</title>
      <link>https://percona.community/blog/2024/01/10/volunteering-program-committee-data-kubernetes-europe/</link>
      <guid>https://percona.community/blog/2024/01/10/volunteering-program-committee-data-kubernetes-europe/</guid>
      <pubDate>Wed, 10 Jan 2024 00:00:00 UTC</pubDate>
      <description>The Data on Kubernetes Day Europe 2024 Program Committee is a group of professionals and experts responsible for organizing the Data on Kubernetes Day Europe 2024 content for the upcoming co-located events at Kubecon in Paris on 19 March.</description>
      <content:encoded>&lt;p&gt;The &lt;strong&gt;Data on Kubernetes Day Europe 2024 Program Committee&lt;/strong&gt; is a group of professionals and experts responsible for organizing the &lt;strong&gt;Data on Kubernetes Day Europe 2024&lt;/strong&gt; content for the upcoming co-located events at &lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" target="_blank" rel="noopener noreferrer"&gt;Kubecon in Paris&lt;/a&gt; on 19 March.&lt;/p&gt;
&lt;p&gt;As Data on Kubernetes community members, &lt;a href="https://www.linkedin.com/in/sergeypronin/" target="_blank" rel="noopener noreferrer"&gt;Sergey Pronin&lt;/a&gt; (Group Product Manager at @Percona) and I (Tech Evangelist) volunteered to evaluate proposal topics submitted for the event through the Sessionize platform. Not only us but also many other members of the Data on Kubernetes community participated as volunteers.&lt;/p&gt;
&lt;p&gt;Community members who participate in this &lt;strong&gt;Program Committee&lt;/strong&gt; evaluate proposals for talks, workshops, and other sessions submitted by potential speakers. This involves identifying each submission’s relevance, quality, and originality and being completely transparent and honest when reviewing a set of talks for the Data On Kubernetes Community co-located event.&lt;/p&gt;
&lt;p&gt;Being a program committee member means adhering to guidelines and following the Linux Foundation’s code of conduct:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Be professional and courteous.&lt;/li&gt;
&lt;li&gt;Express feedback constructively, not destructively.&lt;/li&gt;
&lt;li&gt;Be considerate when choosing communication channels.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After all program committee members completed their evaluations, the Data on Kubernetes Day Co-located Events Europe 2024 &lt;a href="https://colocatedeventseu2024.sched.com/overview/type/Data+on+Kubernetes+Day?iframe=no" target="_blank" rel="noopener noreferrer"&gt;schedule&lt;/a&gt; was announced.&lt;/p&gt;
&lt;p&gt;Look at this promising agenda:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/02/dok2.png" alt="DoKC agenda" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It feels great to be a part of these efforts and to contribute to something significant by helping make it a reality at an in-person event. Thanks to the &lt;strong&gt;Linux Foundation&lt;/strong&gt; for recognizing our efforts as Program Committee Members and for recognizing &lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; as an active member of the DoK community.&lt;/p&gt;
&lt;p&gt;In recognition of this support, we earned a badge from The Linux Foundation.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2024/02/dok3.png" alt="DoKC badge" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;If you want to know about more initiatives &lt;strong&gt;Percona&lt;/strong&gt; has in the DoK community, read &lt;a href="https://percona.community/blog/2024/01/10/data-on-kubernetes-community-initiatives/" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes Community initiatives: Automated storage scaling&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you have any questions, remember to visit our &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>edith_puclla</category>
      <category>kubernetes</category>
      <category>dok</category>
      <category>kubecon</category>
      <category>europe</category>
      <media:thumbnail url="https://percona.community/blog/2024/02/dok3_hu_aaf69425b29d2d1a.jpg"/>
      <media:content url="https://percona.community/blog/2024/02/dok3_hu_d26ec19e4b6de0df.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Bug Report: November 2023</title>
      <link>https://percona.community/blog/2023/12/19/percona-bug-report-november-2023/</link>
      <guid>https://percona.community/blog/2023/12/19/percona-bug-report-november-2023/</guid>
      <pubDate>Tue, 19 Dec 2023 00:00:00 UTC</pubDate>
      <description>At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.</description>
      <content:encoded>&lt;p&gt;At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, report back on any issues or bugs you might encounter along the way.&lt;/p&gt;
&lt;p&gt;We constantly update our &lt;a href="https://perconadev.atlassian.net/" target="_blank" rel="noopener noreferrer"&gt;bug reports&lt;/a&gt; and monitor &lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;other boards&lt;/a&gt; to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This post is a central place to get information on the most noteworthy open and recently resolved bugs.&lt;/p&gt;
&lt;p&gt;In this edition of our bug report, we have the following list of bugs:&lt;/p&gt;
&lt;h2 id="percona-servermysql-bugs"&gt;Percona Server/MySQL Bugs&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8086" target="_blank" rel="noopener noreferrer"&gt;PS-8086&lt;/a&gt; : Increased memory usage in LRU manager with ROW_FORMAT=COMPRESSED, so it seems that after evicting uncompressed frames for a compressed table, Percona Server LRU manager uses more memory to track them than upstream MySQL.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 5.7.x, 8.0.26, 8.0.32&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8737" target="_blank" rel="noopener noreferrer"&gt;PS-8737&lt;/a&gt; / [&lt;a href="https://bugs.mysql.com/bug.php?id=110706" target="_blank" rel="noopener noreferrer"&gt;Bug #110706&lt;/a&gt;] : Data will be lost when you perform table rebuild immediately after INSERT or DELETE commands. This means that all ALTER TABLE operations that require table rebuild, including a “null” alteration; that is, an ALTER TABLE statement that “changes” the table to use the storage engine that it already has Eg: “ALTER TABLE t1 ENGINE = InnoDB;”, So after INSERT and DELETE, please do not execute table rebuild statement immediately.You can find the full list of ALTER operations that require table rebuild at &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-operations.html" target="_blank" rel="noopener noreferrer"&gt;https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-operations.html&lt;/a&gt; Check for column “Rebuilds Table”. All operations for which this column contains “Yes” or “No*” are affected.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.[28/29/30/31/32]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8428" target="_blank" rel="noopener noreferrer"&gt;PS-8428&lt;/a&gt; : ALTER TABLE t ADD FULLTEXT crashes the server when –innodb_encrypt_online_alter_logs=ON. The problem has nothing to do with either innodb_encrypt_online_alter_logs, or Parallel Threads for Online DDL Operations. The issues turned out that in binaries built with OpenSSL 3.0.x my_aes_crypt() function has a flaw and can no longer decrypt data encrypted with the same function previously.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.30-22&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Fixed version: 8.0.30-22&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Please don’t be confused that the affected and fixed versions are the same for this bug: it was reported internally during testing of the release build, which is why it carries the same affected and fixed version.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8987" target="_blank" rel="noopener noreferrer"&gt;PS-8987&lt;/a&gt; / [&lt;a href="https://bugs.mysql.com/bug.php?id=112935" target="_blank" rel="noopener noreferrer"&gt;Bug #112935&lt;/a&gt;] : This bug results in inconsistency seen between MYISAM and MEMORY for simple CREATE and SELECT operation.&lt;/p&gt;
&lt;p&gt;The result inconsistency can be seen in the following scenario:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Added sample data to MyISAM/InnoDB Tables.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Executed a SELECT statement that is too complex to include here; it can be seen in the bug report.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An empty set is returned when the SELECT is executed against MyISAM/InnoDB tables, while 4 rows are returned when the same query is executed against the MEMORY engine.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.34-26, 8.0.35-27&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-8990" target="_blank" rel="noopener noreferrer"&gt;PS-8990&lt;/a&gt; / [&lt;a href="https://bugs.mysql.com/bug.php?id=112979" target="_blank" rel="noopener noreferrer"&gt;Bug #112979&lt;/a&gt;] : MySQL server does not respect system &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_transaction_compression_level_zstd" target="_blank" rel="noopener noreferrer"&gt;variable binlog_transaction_compression_level_zstd&lt;/a&gt; so it sets the compression level for binary log transaction compression on this server, which is enabled by the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_transaction_compression" target="_blank" rel="noopener noreferrer"&gt;binlog_transaction_compression&lt;/a&gt; system variable. The value is an integer that determines the compression effort, from 1 (the lowest effort) to 22 (the highest effort). If you do not specify this system variable, the compression level is set to 3. As the compression level increases, the data compression ratio increases, which reduces the storage space and network bandwidth required for the transaction payload.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.34-26, 8.0.35-27&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9015" target="_blank" rel="noopener noreferrer"&gt;PS-9015&lt;/a&gt; / [&lt;a href="https://bugs.mysql.com/bug.php?id=113256" target="_blank" rel="noopener noreferrer"&gt;Bug #113256&lt;/a&gt;] : “DATA_FREE” shows a different value when comparing information_schema.TABLES vs information_schema.PARTITIONS. It is hard to say which result set is correct since we don’t have a source of truth.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 5.7.43-47, 8.0.34-26&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-9011" target="_blank" rel="noopener noreferrer"&gt;PS-9011&lt;/a&gt; / [&lt;a href="https://bugs.mysql.com/bug.php?id=112946" target="_blank" rel="noopener noreferrer"&gt;Bug #112946&lt;/a&gt;] : Prior to 8.0.29 INSTANT column exists on a non-system table with NULL columns in the MySQL SCHEMA which eventually leads to a corruption post 8.0.30+ upgrades. Although it’s probably not a best practice to create tables in mysql SCHEMA, it should not lead to corruption, especially when INSTANT is the default algorithm.&lt;/p&gt;
&lt;h2 id="percona-xtradb-cluster"&gt;Percona Xtradb Cluster&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4343" target="_blank" rel="noopener noreferrer"&gt;PXC-4343&lt;/a&gt; : Occasionally, during SST, InnoDB tablespace gets silently corrupted, resulting in the later Xtrabackup failure with the following error [MY-012224] [InnoDB] Header page contains inconsistent data in datafile. The triggering condition appears to be PXC 5.7 =&gt; 8.0 upgrade, where the corruption manifests in a 2nd node that joins later from the upgraded node. The corruption gets discovered once the 3rd node tries to join from the 2nd node as the donor or when a regular backup is taken from the 2nd node. To workaround this issue, always specify the first upgraded node as a donor.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.34-26&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4237" target="_blank" rel="noopener noreferrer"&gt;PXC-4237&lt;/a&gt; : When adding a new node, with PXC tarball installation error is being reported saying &lt;code&gt;[WSREP] Failed to read 'ready &lt;addr&gt;' from: wsrep_sst_xtrabackup-v2&lt;/code&gt;. However, this issue is expected to be fixed by the upcoming release of PXC 8.0.35, and fortunately, below workaround can fix the issue quickly:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Create /var/run/mysqld/ folder owned by mysql OS user.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Backup wsrep_sst_common file before editing:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; cp -nvp /usr/local/mysql/bin/wsrep_sst_common /usr/local/mysql/bin/wsrep_sst_common.orig.for-bug-PXC-4237
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Implement the change:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; sed -i '297s/.*/set +e; &amp; ; set -e/' /usr/local/mysql/bin/wsrep_sst_common
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Verify the changes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; sed -n '297p' /usr/local/mysql/bin/wsrep_sst_common.orig.for-bug-PXC-4237
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MYSQLD_PATH=$(readlink -f /proc/${WSREP_SST_OPT_PARENT}/exe)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;shell&gt; sed -n '297p' /usr/local/mysql/bin/wsrep_sst_common
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;set +e; MYSQLD_PATH=$(readlink -f /proc/${WSREP_SST_OPT_PARENT}/exe) ; set -e&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.32-24, 8.0.34-26&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4318" target="_blank" rel="noopener noreferrer"&gt;PXC-4318&lt;/a&gt; : PXC cluster stalls and eventually crashes due to a long semaphore wait, which is happening because ha_commit_low does not commit a transaction that does not perform any changes such as an empty transaction and it can’t be controlled since it is an internal process. Fortunately, the upcoming release of PXC 8.0.35 will fix the issue.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33-25&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4336" target="_blank" rel="noopener noreferrer"&gt;PXC-4336&lt;/a&gt; : PXC node eviction when a new CHECK CONSTRAINT is created which violates the condition, Eg. table is created with one entry say id 100 and after creation we added CHECK CONSTRAINT using ALTER TABLE t ADD CONSTRAINT CHK_id CHECK (id &lt;=75); Here 100 is not less than 75 which violate the conditions and eventually node become inconsistent and get disconnected/evicted from the cluster. To avoid this scenario make sure to avoid violation of CHECK CONSTRAINT conditions.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.34-26&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXC-4034" target="_blank" rel="noopener noreferrer"&gt;PXC-4034&lt;/a&gt; : When PXC cluster uses as a source and an async replication as a replica where “set @@session.sql_log_bin = off;” is used, this introduces a GTID gap in the “gtid_executed” set on the PXC source. As a workaround to the issue avoid using statement with “set @@session.sql_log_bin = off;” in the source/PXC&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 5.7.38-31.59, 8.0.28-19, 8.0.34-26&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-toolkit"&gt;Percona Toolkit&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-1724" target="_blank" rel="noopener noreferrer"&gt;PT-1724&lt;/a&gt; : Percona toolkit unable to work if user using ‘caching_sha2_password’ Authentication plugin&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.0.13, 3.5.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2030" target="_blank" rel="noopener noreferrer"&gt;PT-2030&lt;/a&gt; : pt-heartbeat is not compatible with PostgreSQL throwing Cannot get MySQL var character_set_server: DBD::Pg::db selectrow_array failed: ERROR: syntax error at or near “LIKE” LINE 1: SHOW VARIABLES LIKE ‘character_set_server’&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.3.1, 3.5.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2083" target="_blank" rel="noopener noreferrer"&gt;PT-2083&lt;/a&gt; : when running pt-archiver with –charset option in MySQL 8.0 does not work.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.3.1, 3.5.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Fixed version: 3.5.6 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2106" target="_blank" rel="noopener noreferrer"&gt;PT-2106&lt;/a&gt; : In pt-online-schema-change adding column to table (parent table) with having foreign key reference which triggers rebuilding constraints and can cause inconsistencies.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.3.1, 3.5.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PT-2207" target="_blank" rel="noopener noreferrer"&gt;PT-2207&lt;/a&gt; : pt-archiver doesn’t work when ANSI_QUOTES is set in sql_mode&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 3.5.2, 3.5.5&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Fixed version: 3.5.6 [Pending Release]&lt;/em&gt;&lt;/p&gt;
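&lt;p&gt;The root cause in PT-2207 is quoting: with ANSI_QUOTES in sql_mode, double quotes delimit identifiers rather than string literals, so a tool that hard-codes one quoting style can emit SQL the server rejects. A hypothetical Python sketch (the function name is ours, not pt-archiver’s):&lt;/p&gt;

```python
# Illustrative sketch only: quoting an identifier in a way the server
# will accept under either sql_mode setting.

def quote_identifier(name, ansi_quotes=False):
    """Quote a schema/table name for MySQL.

    With the default sql_mode, identifiers use backticks; with
    ANSI_QUOTES, double quotes also delimit identifiers and
    double-quoted text is no longer a string literal.
    """
    if ansi_quotes:
        return '"' + name.replace('"', '""') + '"'
    return "`" + name.replace("`", "``") + "`"

print(quote_identifier("my table"))                    # `my table`
print(quote_identifier("my table", ansi_quotes=True))  # "my table"
```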
&lt;h2 id="percona-monitoring-and-management-pmm"&gt;Percona Monitoring and Management (PMM)&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-4712" target="_blank" rel="noopener noreferrer"&gt;PMM-4712&lt;/a&gt; : PMM frequently crashes due to out of memory kills with postgres_exporter consuming 20-30GB of RAM and to debug it pprof endpoints to postgres_exporter was missing&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Fixed version: 2.41.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12013" target="_blank" rel="noopener noreferrer"&gt;PMM-12013&lt;/a&gt; : rds_exporter unreliable for large deployments which generate gaps in the gathered metrics and some improvement fix done here at PMM-11727&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.35.0&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12349" target="_blank" rel="noopener noreferrer"&gt;PMM-12349&lt;/a&gt; : ReplicaSet Summary shows wrong data when a node is gone&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.40.1&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12631" target="_blank" rel="noopener noreferrer"&gt;PMM-12631&lt;/a&gt; : Route of /logs.zip crashes with &lt;code&gt;reflect: call of reflect.Value.NumField on string&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.40.1&lt;/em&gt;
&lt;em&gt;Fixed version: 2.41.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-12738" target="_blank" rel="noopener noreferrer"&gt;PMM-12738&lt;/a&gt; : File certificate.conf is required, but not mentioned anywhere in helm charts but in fact, it should not be required.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.40.1&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-xtrabackup"&gt;Percona XtraBackup&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-2860" target="_blank" rel="noopener noreferrer"&gt;PXB-2860&lt;/a&gt; : Xtrabackup keeps locking table even using –tables-exclude and –lock-ddl-per-table.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33-28&lt;/em&gt;
&lt;em&gt;Fixed version: 8.0.34-29&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3168" target="_blank" rel="noopener noreferrer"&gt;PXB-3168&lt;/a&gt; : Under high write load, backup fails with “log block numbers mismatch” error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33-28, 8.0.34-29&lt;/em&gt;
&lt;em&gt;Fixed version: 8.0.35-30, 8.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3079" target="_blank" rel="noopener noreferrer"&gt;PXB-3079&lt;/a&gt; : Prepare skips rollback on encrypted tables and completes successfully if the keyring plugin is not loaded.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33-27&lt;/em&gt;
&lt;em&gt;Fixed version: 8.0.34-29&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-3147" target="_blank" rel="noopener noreferrer"&gt;PXB-3147&lt;/a&gt; : Xtrabackup failed to execute query ‘DO innodb_redo_log_consumer_register(“PXB”); if sql_mode=’ANSI_QUOTES’ is used.’ This results in the Xtrabackup execution failure.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.33-28&lt;/em&gt;
&lt;em&gt;Fixed version: 8.0.35-30, 8.2&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PXB-2954" target="_blank" rel="noopener noreferrer"&gt;PXB-2954&lt;/a&gt; : Xtrabackup failing with “[ERROR] [MY-011825] [Xtrabackup] innodb_init(): Error occurred” to prepare in case of orphan ibd&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 8.0.28-21&lt;/em&gt;
&lt;em&gt;Fixed version: 8.0.32-26&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="percona-kubernetes-operator"&gt;Percona Kubernetes Operator&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-404" target="_blank" rel="noopener noreferrer"&gt;K8SPG-404&lt;/a&gt; : Upgrade from percona PostgreSQL Operator 1.3 to 1.4 is ending up with a cluster without any replicas.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 1.4.0&lt;/em&gt;
&lt;em&gt;Fixed version: 1.5.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-420" target="_blank" rel="noopener noreferrer"&gt;K8SPG-420&lt;/a&gt; : Ending up in multiple shared repo after cluster pause and unpause.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 1.4.0&lt;/em&gt;
&lt;em&gt;Fixed version: 1.5.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-435" target="_blank" rel="noopener noreferrer"&gt;K8SPG-435&lt;/a&gt; : Pod is recreated when /tmp is filled&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.2.0&lt;/em&gt;
&lt;em&gt;Fixed version: 2.3.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-443" target="_blank" rel="noopener noreferrer"&gt;K8SPG-443&lt;/a&gt; : Only english locale is installed, missing other languages support in Postgres&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.2.0&lt;/em&gt;
&lt;em&gt;Fixed version: 2.3.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/K8SPG-453" target="_blank" rel="noopener noreferrer"&gt;K8SPG-453&lt;/a&gt; : pg_stat_monitor hangs primary instance and it’s impossible to disable it&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Reported Affected Version/s: 2.2.0&lt;/em&gt;
&lt;em&gt;Fixed version: 2.3.0 [Pending Release]&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, &lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;How to Report Bugs, Improvements, New Feature Requests for Percona Products&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the most up-to-date information, be sure to follow us on &lt;a href="https://twitter.com/percona" target="_blank" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/percona" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://www.facebook.com/Percona?fref=ts" target="_blank" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Quick References:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net" target="_blank" rel="noopener noreferrer"&gt;Percona JIRA&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://bugs.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL Bug Report&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2019/06/12/report-bugs-improvements-new-feature-requests-for-percona-products/" target="_blank" rel="noopener noreferrer"&gt;Report a Bug in a Percona Product&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Forums&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Aaditya Dubey</author>
      <category>Opensource</category>
      <category>PMM</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2023/12/BugReportNovember2023_hu_9f029881e6ed430d.jpg"/>
      <media:content url="https://percona.community/blog/2023/12/BugReportNovember2023_hu_955695dd594e0750.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.41 preview release</title>
      <link>https://percona.community/blog/2023/12/06/preview-release/</link>
      <guid>https://percona.community/blog/2023/12/06/preview-release/</guid>
      <pubDate>Wed, 06 Dec 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.41 preview release Hello folks! Percona Monitoring and Management (PMM) 2.41 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-241-preview-release"&gt;Percona Monitoring and Management 2.41 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.41 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;To see the full list of changes, check out the &lt;a href="https://pmm-release-branch-pr-1182.onrender.com/release-notes/2.41.0.html" target="_blank" rel="noopener noreferrer"&gt;PMM 2.41 Release Notes&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker-installation"&gt;PMM server Docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server with Docker instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.41.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-5997.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download&lt;/a&gt; the latest pmm2-client release candidate tarball for 2.41.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Install the pmm2-client package for your OS via your package manager.&lt;/li&gt;
&lt;/ol&gt;
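&lt;p&gt;On a Debian/Ubuntu host, steps 2 and 3 together might look like the following (run as root; this is a sketch, and the package manager commands differ on RPM-based systems):&lt;/p&gt;

```shell
# Enable the Percona testing repository, then install the client package.
percona-release enable percona testing
apt update
apt install pmm2-client
```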
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-moitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server as a VM instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.41.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.41.0.ova file&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server hosted at AWS Marketplace instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-0a04085f4c721e913&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Ondrej Patocka</author>
      <category>PMM</category>
      <category>Release</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>The Importance of Anti-Affinity in Kubernetes</title>
      <link>https://percona.community/blog/2023/11/30/anti-affinity-in-kubernetes/</link>
      <guid>https://percona.community/blog/2023/11/30/anti-affinity-in-kubernetes/</guid>
      <pubDate>Thu, 30 Nov 2023 00:00:00 UTC</pubDate>
      <description>Last week, I embarked on the task of deploying our Percona Operator for MongoDB in Kubernetes. After completing the deployment process, I noticed that the status of the Custom Resource Definition for Percona Server for MongoDB was still displaying as ‘initializing’ and two of our Pods remained in a Pending state.</description>
      <content:encoded>&lt;p&gt;Last week, I embarked on the task of deploying our &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt; in Kubernetes. After completing the deployment process, I noticed that the status of the Custom Resource Definition for Percona Server for MongoDB was still displaying as ‘initializing’ and two of our Pods remained in a &lt;strong&gt;Pending&lt;/strong&gt; state.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;edithpuclla@Ediths-MBP % kubectl get perconaservermongodbs.psmdb.percona.com -n mongodb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME ENDPOINT STATUS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db my-db-psmdb-db-mongos.mongodb.svc.cluster.local initializing 4m58s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;edithpuclla@Ediths-MBP % kubectl get pods -n mongodb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-cfg-0 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 109m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-cfg-1 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 108m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-cfg-2 0/2 Pending &lt;span class="m"&gt;0&lt;/span&gt; 107m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-mongos-0 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 106m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-mongos-1 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 106m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-rs0-0 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 109m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-rs0-1 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 108m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-db-psmdb-db-rs0-2 0/2 Pending &lt;span class="m"&gt;0&lt;/span&gt; 107m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;my-op-psmdb-operator-77b75bbc7c-qd9ls 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 118m&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Upon further inspection of the pod in &lt;strong&gt;pending&lt;/strong&gt; status, I discovered a clear indicator of the error:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl describe pod my-db-psmdb-db-cfg-2 -n mongodb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Events:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Type Reason Age From Message
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ---- ------ ---- ---- -------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Normal NotTriggerScaleUp 3m53s &lt;span class="o"&gt;(&lt;/span&gt;x62 over 13m&lt;span class="o"&gt;)&lt;/span&gt; cluster-autoscaler pod didn&lt;span class="s1"&gt;'t trigger scale-up:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s1"&gt; Warning FailedScheduling 3m27s (x4 over 13m) default-scheduler 0/2 nodes are available: 2 node(s) didn'&lt;/span&gt;t match pod anti-affinity rules. preemption: 0/2 nodes are available: &lt;span class="m"&gt;2&lt;/span&gt; No preemption victims found &lt;span class="k"&gt;for&lt;/span&gt; incoming pod..&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I took a closer look at the YAML configuration of our CRD in the &lt;strong&gt;Replsets&lt;/strong&gt; section, particularly drawn to the &lt;strong&gt;Affinity&lt;/strong&gt; subsection.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl describe perconaservermongodbs.psmdb.percona.com my-db-psmdb-db -n mongodb&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here’s what I discovered:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/11/affinity-01.png" alt="Affinity" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Affinity&lt;/strong&gt; and &lt;strong&gt;Anti-Affinity&lt;/strong&gt; are key parts of the scheduling process in Kubernetes, and both focus on ensuring that Pods are correctly assigned to Nodes in the cluster. You can configure a Pod to run on a specific node or group of nodes. There are several ways to achieve this: &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" target="_blank" rel="noopener noreferrer"&gt;nodeSelector&lt;/a&gt; is the simplest way to constrain Pods to nodes with specific labels, while affinity and anti-affinity expand the types of constraints you can define and give you more flexibility.&lt;/p&gt;
&lt;p&gt;Let’s explore what it means to specify anti-affinity rules for the ReplicaSets.&lt;/p&gt;
&lt;p&gt;The key &lt;strong&gt;kubernetes.io/hostname&lt;/strong&gt; is a well-known label in Kubernetes that is automatically assigned to each node in the cluster. It usually holds the value of the node’s hostname.
When used as a topology key in anti-affinity rules, it implies that the rule should consider the hostname of the nodes. In simpler terms, it tells Kubernetes not to schedule the pods of this ReplicaSet on the same physical or virtual host (node).&lt;/p&gt;
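&lt;p&gt;In standard Kubernetes terms, a rule of this kind looks roughly like the following (a sketch; the label selector shown is an assumption, and the exact labels the operator applies may differ):&lt;/p&gt;

```yaml
# Sketch of a host-level anti-affinity rule in standard PodSpec terms.
# The matchLabels values are assumptions, not the operator's actual labels.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: my-db-psmdb-db
        topologyKey: kubernetes.io/hostname
```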
&lt;p&gt;If we review our cluster, it has two nodes for installing the Percona Operator for MongoDB.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;edithpuclla@Ediths-MBP ~ % kubectl get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-mongo-operator-test-default-pool-7c118de9-b9vc Ready &lt;none&gt; 68m v1.27.4-gke.900
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-mongo-operator-test-default-pool-7c118de9-ts16 Ready &lt;none&gt; 68m v1.27.4-gke.900&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In the context of databases like MongoDB, high availability is often achieved through replication, ensuring that the database can continue to operate even if one or more nodes fail. Within a MongoDB Replica Set there are multiple copies of the data, hosted on different Replica Set members. The default HA MongoDB topology is a 3-member Replica Set, and &lt;strong&gt;Percona Operator for MongoDB&lt;/strong&gt; deploys MongoDB in that topology by default. With anti-affinity set to kubernetes.io/hostname, this means at least 3 Kubernetes worker nodes are needed to deploy MongoDB.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/11/affinity-02_hu_9f71c4f3e029f3f6.png 480w, https://percona.community/blog/2023/11/affinity-02_hu_f22f9a32d98909e3.png 768w, https://percona.community/blog/2023/11/affinity-02_hu_5830a796c8ada964.png 1400w"
src="https://percona.community/blog/2023/11/affinity-02.png" alt="Affinity" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We created the minimum of three nodes that the &lt;strong&gt;Percona Operator for MongoDB&lt;/strong&gt; needs. The anti-affinity error on the Pods is gone because each Pod was placed on a different node, and both the operator and the database are now deployed correctly.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;edithpuclla@Ediths-MBP ~ % kubectl get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-mongo-operator-test-default-pool-e4e024a8-1dj3 Ready &lt;none&gt; 76s v1.27.4-gke.900
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-mongo-operator-test-default-pool-e4e024a8-d6j2 Ready &lt;none&gt; 74s v1.27.4-gke.900
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gke-mongo-operator-test-default-pool-e4e024a8-jkkr Ready &lt;none&gt; 76s v1.27.4-gke.900&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If we list all the resources in our namespace, we can see that all pods are running properly and all the resources have been created.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;edithpuclla@Ediths-MBP ~ % kubectl get all -n mongodb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-cfg-0 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 4m40s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-cfg-1 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 4m2s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-cfg-2 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 3m20s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-mongos-0 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 2m56s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-mongos-1 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 2m39s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-rs0-0 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 4m39s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-rs0-1 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 3m59s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-db-psmdb-db-rs0-2 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 3m28s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pod/my-op-psmdb-operator-77b75bbc7c-q2rqh 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 6m47s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/my-db-psmdb-db-cfg ClusterIP None &lt;none&gt; 27017/TCP 4m40s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/my-db-psmdb-db-mongos ClusterIP 10.72.17.115 &lt;none&gt; 27017/TCP 2m56s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/my-db-psmdb-db-rs0 ClusterIP None &lt;none&gt; 27017/TCP 4m39s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY UP-TO-DATE AVAILABLE AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;deployment.apps/my-op-psmdb-operator 1/1 &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; 6m47s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME DESIRED CURRENT READY AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;replicaset.apps/my-op-psmdb-operator-77b75bbc7c &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; 6m47s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/my-db-psmdb-db-cfg 3/3 4m41s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/my-db-psmdb-db-mongos 2/2 2m58s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/my-db-psmdb-db-rs0 3/3 4m40s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In conclusion, affinity and anti-affinity in Kubernetes are tools for strategically placing pods in a cluster to optimize factors such as performance, availability, and compliance, all of which are critical for the smooth and efficient operation of containerized applications. Setting up anti-affinity rules with a topology key like &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/constraints.html#affinity-and-anti-affinity" target="_blank" rel="noopener noreferrer"&gt;failure-domain.beta.kubernetes.io/zone&lt;/a&gt; is a key strategy for keeping clusters running, especially in production environments. This approach spreads pods across different availability zones, so if one zone has an issue, the others keep the system running. It is a smart way to ensure your cluster can handle unexpected outages, making it a popular choice for those who need their Kubernetes setups to be reliable and available at all times.&lt;/p&gt;
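&lt;p&gt;In the operator’s custom resource, switching the spread from hosts to zones is a one-line change (a sketch based on the constraints documentation linked above; field values may vary by operator version):&lt;/p&gt;

```yaml
# Sketch of the Percona Server for MongoDB custom resource affinity field;
# antiAffinityTopologyKey controls which failure domain the replica set
# members are spread across (zones here instead of hosts).
replsets:
  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: "failure-domain.beta.kubernetes.io/zone"
```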
&lt;p&gt;Learn more about our &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;, and if you have questions or comments, you can write to us on our &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Kubernetes</category>
      <category>Mongodb</category>
      <media:thumbnail url="https://percona.community/blog/2023/11/affinity-intro_hu_329445018cc2184f.jpg"/>
      <media:content url="https://percona.community/blog/2023/11/affinity-intro_hu_57483ac446857898.jpg" medium="image"/>
    </item>
    <item>
      <title>Day 02: The Kubernetes Application Lifecycle</title>
      <link>https://percona.community/blog/2023/11/20/day-02-the-kubernetes-application-lifecycle/</link>
      <guid>https://percona.community/blog/2023/11/20/day-02-the-kubernetes-application-lifecycle/</guid>
      <pubDate>Mon, 20 Nov 2023 00:00:00 UTC</pubDate>
      <description>If you are in the world of application development, you know that every application has a lifecycle. An application lifecycle refers to the stages that our application goes through from initial planning, building, deployment, monitoring, and maintenance in different environments where our application can be executed.</description>
      <content:encoded>&lt;p&gt;If you are in the world of application development, you know that every application has a lifecycle. An application lifecycle refers to the stages that our application goes through from initial planning, building, deployment, monitoring, and maintenance in different environments where our application can be executed.&lt;/p&gt;
&lt;p&gt;On the other hand, the &lt;strong&gt;Kubernetes Application Lifecycle&lt;/strong&gt; refers exclusively to applications deployed and managed in Kubernetes clusters. This differs from the normal application lifecycle because Kubernetes introduces new principles, practices, and tools for managing applications on containers.&lt;/p&gt;
&lt;p&gt;In this blog post, we will talk about the phases &lt;strong&gt;Day 0&lt;/strong&gt;, &lt;strong&gt;Day 1&lt;/strong&gt;, and &lt;strong&gt;Day 2&lt;/strong&gt; in the lifecycle of an application in Kubernetes, focusing specifically on &lt;strong&gt;Day 2&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/11/day2_hu_8d3b899e02dcbc.png 480w, https://percona.community/blog/2023/11/day2_hu_60931a122356f562.png 768w, https://percona.community/blog/2023/11/day2_hu_9a82e534692d6bbc.png 1400w"
src="https://percona.community/blog/2023/11/day2.png" alt="day02" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Image 1&lt;/strong&gt;: Day 0, Day 1 and Day 2 in the Kubernetes Application Lifecycle&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Day 0&lt;/strong&gt; refers to the preparation stage before deploying applications in Kubernetes. It is the stage for identifying goals, planning the infrastructure, and ensuring that the development team has knowledge of Kubernetes and its best practices. It is also the stage for investing in training and for evaluating the application components to determine which are suitable for running in &lt;strong&gt;containers&lt;/strong&gt; and &lt;strong&gt;Kubernetes&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Day 1&lt;/strong&gt; is the stage that involves deploying the application in Kubernetes clusters and the creation of Kubernetes resources: deployments, pods, services. Additionally, it includes configuration management and the implementation of basic monitoring following the decisions made on &lt;strong&gt;Day 0&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Finally, &lt;strong&gt;Day 2&lt;/strong&gt;: by this stage, our application is already running in Kubernetes clusters. Day 2 refers to the management, monitoring, and optimization of our Kubernetes clusters over the long term.&lt;/p&gt;
&lt;p&gt;Day 2 involves:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Gathering information from our Kubernetes clusters through monitoring and logging.&lt;/li&gt;
&lt;li&gt;Scaling our application, either horizontally or vertically.&lt;/li&gt;
&lt;li&gt;Application of security best practices and compliance with policies.&lt;/li&gt;
&lt;li&gt;Establishing backups and recovery processes to protect our data and application from future disasters.&lt;/li&gt;
&lt;/ul&gt;
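&lt;p&gt;The scaling activity above can be handled declaratively. As a minimal sketch (the Deployment name and thresholds are illustrative), a HorizontalPodAutoscaler scales an application horizontally based on CPU usage:&lt;/p&gt;

```yaml
# Illustrative HorizontalPodAutoscaler: keeps between 2 and 10 replicas,
# adding Pods when average CPU utilization exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```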
&lt;p&gt;Day 2 activities focus on sustainability, efficiency, and long-term continuous improvement to ensure the stability of our application and meet customer expectations.&lt;/p&gt;
&lt;p&gt;Let’s see how &lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; takes charge of &lt;strong&gt;Day 2&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For Percona, a company specializing in the management of open source databases such as MySQL, PostgreSQL, and MongoDB, Day 2 refers to the ongoing efforts to ensure that database systems are running efficiently, securely, and in alignment with business objectives.&lt;/p&gt;
&lt;p&gt;Here are some examples of how Percona handles this phase:&lt;/p&gt;
&lt;p&gt;To achieve Performance Monitoring, if you use our &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;Percona Kubernetes Operators&lt;/a&gt;, you can integrate it with Percona Monitoring and Management (PMM) to check the performance of your databases in real time. Monitor query execution times, resource utilization, and server health. PMM helps to identify bottlenecks and inefficiencies, allowing for timely optimization and tuning.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/11/pmm.png" alt="pmm" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Image 2&lt;/strong&gt;: This is what the PMM Dashboard interface looks like when monitoring your database resources.&lt;/p&gt;
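&lt;p&gt;With the Percona Kubernetes Operators, enabling PMM comes down to a few fields in the cluster’s custom resource. The fragment below is a hedged sketch based on the Percona Operator for MongoDB; field names and image tags can vary between operators and versions, so check the &lt;code&gt;cr.yaml&lt;/code&gt; of your operator.&lt;/p&gt;

```yaml
# Fragment of an operator cr.yaml (illustrative values; verify against
# your operator version's documentation)
spec:
  pmm:
    enabled: true                   # deploy the pmm-client sidecar
    serverHost: monitoring-service  # Service name of the PMM server
    image: percona/pmm-client:2.41.0
```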
&lt;p&gt;When it comes to data protection and disaster recovery, you can use &lt;a href="https://docs.percona.com/percona-xtrabackup/innovation-release/" target="_blank" rel="noopener noreferrer"&gt;Percona XtraBackup&lt;/a&gt;, an open-source backup utility for MySQL-based servers, to ensure that your database remains fully accessible during scheduled maintenance periods.&lt;/p&gt;
&lt;p&gt;As for scaling strategy and high availability, adopting solutions such as &lt;a href="https://www.percona.com/mysql/software/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster&lt;/a&gt; or &lt;a href="https://www.percona.com/mysql/software/percona-server-for-mysql" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt; enables us to secure the database and efficiently manage increased workloads, all while keeping downtime to a minimum.&lt;/p&gt;
&lt;p&gt;These were just some examples of what &lt;strong&gt;Percona does for Day 2&lt;/strong&gt; to handle tasks crucial for businesses that rely on databases to keep critical applications and services running.&lt;/p&gt;
&lt;p&gt;Are you interested in learning more about Kubernetes or need assistance with your cloud-native strategy? With Percona Kubernetes Operators, you can manage database workloads on any supported Kubernetes cluster running in private, public, hybrid, or multi-cloud environments. They are 100% open source, free from vendor lock-in, usage restrictions, and expensive contracts, and include enterprise-ready features by default. Learn more about &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;Percona Kubernetes Operators&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Kubernetes</category>
      <category>Operators</category>
      <category>Percona</category>
      <media:thumbnail url="https://percona.community/blog/2023/11/day2_hu_b9de33838cf601de.jpg"/>
      <media:content url="https://percona.community/blog/2023/11/day2_hu_40f874d434ec4f3d.jpg" medium="image"/>
    </item>
    <item>
      <title>Data On Kubernetes</title>
      <link>https://percona.community/blog/2023/11/10/data-on-kubernetes/</link>
      <guid>https://percona.community/blog/2023/11/10/data-on-kubernetes/</guid>
      <pubDate>Fri, 10 Nov 2023 00:00:00 UTC</pubDate>
      <description>If you’ve attended one of the Kubecon talks or related events, you’ve likely encountered the phrase Data on Kubernetes. To understand what this means, let’s explore some fundamental concepts related to Kubernetes, workload, stateless, and stateful applications.</description>
      <content:encoded>&lt;p&gt;If you’ve attended one of the Kubecon talks or related events, you’ve likely encountered the phrase &lt;strong&gt;Data on Kubernetes&lt;/strong&gt;.
To understand what this means, let’s explore some fundamental concepts related to &lt;strong&gt;Kubernetes&lt;/strong&gt;, &lt;strong&gt;workload&lt;/strong&gt;, &lt;strong&gt;stateless&lt;/strong&gt;, and &lt;strong&gt;stateful&lt;/strong&gt; applications.&lt;/p&gt;
&lt;h2 id="kubernetes-workload-stateless-and-stateful-applications"&gt;Kubernetes, workload, stateless and stateful applications&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is a &lt;strong&gt;container orchestration&lt;/strong&gt; tool that has already become an industry standard. When we talk about “container orchestration”, we are referring to the automated management and coordination of containers using Kubernetes.&lt;/p&gt;
&lt;p&gt;Now, let’s explore what a workload is in the context of Kubernetes. A workload represents an application running on Kubernetes. An application may consist of a single component or multiple components working together. These components are packaged into containers operating within a group of &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/" target="_blank" rel="noopener noreferrer"&gt;Pods&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are two types of workloads, depending on the nature of the application: Stateless and Stateful.&lt;/p&gt;
&lt;p&gt;In a stateless application, the client session data is not stored on the server. This is because the application doesn’t need to retain past interactions to function. However, in a stateful application, storing client session data is essential as it is necessary for subsequent interactions within the application.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/11/steteless-and-stateful_hu_d4dca7f1ecb66dcd.png 480w, https://percona.community/blog/2023/11/steteless-and-stateful_hu_60e3c79bfb338538.png 768w, https://percona.community/blog/2023/11/steteless-and-stateful_hu_bf0a78cdbbceecff.png 1400w"
src="https://percona.community/blog/2023/11/steteless-and-stateful.png" alt="steteless-and-stateful" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now, we are already familiar with Kubernetes, workloads, stateless and stateful applications, and we also understand that Pods are responsible for managing these types of workloads.&lt;/p&gt;
&lt;h2 id="built-in-workload-resources-in-kubernetes"&gt;Built-in Workload Resources in Kubernetes&lt;/h2&gt;
&lt;p&gt;In a Kubernetes cluster, we can have thousands of Pods, and we don’t need to manage each of them individually. Instead, we use &lt;a href="https://kubernetes.io/docs/concepts/workloads/" target="_blank" rel="noopener noreferrer"&gt;workload resources&lt;/a&gt; to manage groups of Pods, and the choice of workload resource depends on the type of workload we are dealing with: stateless or stateful.&lt;/p&gt;
&lt;p&gt;For example, if we have stateless applications, we can use the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" target="_blank" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" target="_blank" rel="noopener noreferrer"&gt;ReplicaSet&lt;/a&gt; resources, which are well-suited for this type of workload. On the other hand, the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" target="_blank" rel="noopener noreferrer"&gt;StatefulSet&lt;/a&gt; resource allows us to run Pods that need to maintain state.&lt;/p&gt;
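&lt;p&gt;To make the difference tangible, here is a minimal StatefulSet sketch (the image, names, and sizes are illustrative, and a real database would need more configuration): each replica gets a stable identity and its own persistent volume, which a Deployment does not provide.&lt;/p&gt;

```yaml
# Illustrative StatefulSet: stable Pod names (db-0, db-1, db-2) and
# one PersistentVolumeClaim per replica via volumeClaimTemplates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # headless Service giving each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: mysql:8.0   # hypothetical database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```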
&lt;p&gt;&lt;em&gt;Data on Kubernetes&lt;/em&gt; refers to the management and storage of data within the Kubernetes ecosystem. Kubernetes provides a robust framework for handling data, making it a versatile platform for both &lt;em&gt;stateless and stateful&lt;/em&gt; applications while ensuring data durability, availability, and security.&lt;/p&gt;
&lt;h2 id="the-challenge"&gt;The challenge&lt;/h2&gt;
&lt;p&gt;Kubernetes was initially designed to run stateless applications. However, the number of stateful applications running on Kubernetes has increased significantly. There are many challenges when it comes to running stateful applications in Kubernetes, such as data management strategies, volume persistence, and others. According to the &lt;a href="https://dok.community/wp-content/uploads/2021/10/DoK_Report_2021.pdf" target="_blank" rel="noopener noreferrer"&gt;2021 Data on Kubernetes report&lt;/a&gt;, a survey of more than 500 executives and technology leaders, 90% believe Kubernetes is ready for stateful workloads, and a large majority (70%) are running them in production, with databases topping the list. This has given rise to initiatives aimed at standardizing the requirements for managing stateful applications on Kubernetes.&lt;/p&gt;
&lt;p&gt;This is how the &lt;a href="https://community.cncf.io/data-on-kubernetes/" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes Community&lt;/a&gt; emerged. The Data on Kubernetes Community (DoKC) was established in spring 2020. It is an openly governed group of curious and experienced practitioners, drawing inspiration from the Cloud Native Computing Foundation (CNCF) and the Apache Software Foundation. They aim to help create and improve techniques for using Kubernetes with data.&lt;/p&gt;
&lt;p&gt;There are several organizations that are part of the Data on Kubernetes community, and Percona is one of them. &lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; contributes to the DoKC Operator SIG (Special Interest Group), where members discuss gaps in information around Kubernetes operators for the industry at large and co-create projects to fill them. Watch the &lt;a href="https://www.youtube.com/watch?v=TmDdkBPW_hI" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Database Operator Landscape&lt;/a&gt; panel discussion to learn more about the community efforts in Data on Kubernetes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/11/dok.png" alt="dok" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Data on Kubernetes is a crucial concept in the Kubernetes ecosystem. Kubernetes, initially designed for stateless applications, now faces the challenge of managing stateful workloads, most notably databases. The Data on Kubernetes Community (DoKC) has emerged to address these challenges and standardize the management of stateful applications, drawing inspiration from organizations like the CNCF and the Apache Software Foundation.&lt;/p&gt;
&lt;p&gt;If you want to be part of it, you are welcome to join &lt;a href="https://community.cncf.io/data-on-kubernetes/" target="_blank" rel="noopener noreferrer"&gt;DoKC&lt;/a&gt;. Also, check out this outstanding &lt;a href="https://www.youtube.com/watch?v=TmDdkBPW_hI" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Database Operators Landscape&lt;/a&gt; panel, where members of DoKC talk about operating data workloads on Kubernetes.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Kubernetes</category>
      <category>DoK</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2023/11/steteless-and-stateful_hu_521986dedeb9a949.jpg"/>
      <media:content url="https://percona.community/blog/2023/11/steteless-and-stateful_hu_f4c812c7cccc0443.jpg" medium="image"/>
    </item>
    <item>
      <title>Exploring Kubernetes Operators</title>
      <link>https://percona.community/blog/2023/11/03/kubernetes-operators/</link>
      <guid>https://percona.community/blog/2023/11/03/kubernetes-operators/</guid>
      <pubDate>Fri, 03 Nov 2023 00:00:00 UTC</pubDate>
      <description>The concept of Kubernetes Operators was introduced around 2016 by the CoreOS Linux development team. They were in search of a solution to improve automated container management within Kubernetes, primarily with the goal of incorporating operational expertise directly into the software.</description>
      <content:encoded>&lt;p&gt;The concept of &lt;strong&gt;Kubernetes Operators&lt;/strong&gt; was introduced around 2016 by the &lt;a href="https://en.wikipedia.org/wiki/Container_Linux" target="_blank" rel="noopener noreferrer"&gt;CoreOS Linux&lt;/a&gt; development team. They were in search of a solution to improve automated container management within Kubernetes, primarily with the goal of incorporating operational expertise directly into the software.&lt;/p&gt;
&lt;p&gt;According to the &lt;a href="https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/#:~:text=K8s%20Operators%20are%20controllers%20for,Custom%20Resource%20Definitions%20%28CRD%29." target="_blank" rel="noopener noreferrer"&gt;Cloud Native Computing Foundation&lt;/a&gt;, &lt;strong&gt;“Operators are software extensions that use custom resources to manage applications and their components”.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes is designed for automation, offering essential automation features. It can automatically deploy and manage workloads. The definition provided by CNCF regarding operators, as mentioned above, highlights the flexibility we have to customize the automation capabilities made possible by Kubernetes Operators using custom resources.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/11/k8s-01_hu_e56aaa8dbc6d4a1.png 480w, https://percona.community/blog/2023/11/k8s-01_hu_bb3ea29d96e519d2.png 768w, https://percona.community/blog/2023/11/k8s-01_hu_fa5fccab6c97cf60.png 1400w"
src="https://percona.community/blog/2023/11/k8s-01.png" alt="kubernetes-operators" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In Kubernetes, certain applications require manual attention because Kubernetes cannot autonomously manage them. This is especially the case with databases, and it is where Operators come into play.
Databases are complex entities with complex operations and features that Kubernetes itself may not inherently understand. While deploying a database in Kubernetes manually isn’t a problem, the true strength of operators shines during &lt;a href="https://thenewstack.io/cloud-native-day-2-operations-why-this-begins-on-day-0/" target="_blank" rel="noopener noreferrer"&gt;Day 2 operations&lt;/a&gt;, which include tasks such as backups, failover, and scaling. Operators automate these manual tasks for applications within Kubernetes.&lt;/p&gt;
&lt;p&gt;The main challenge that arises when implementing &lt;strong&gt;containerized databases&lt;/strong&gt; is &lt;strong&gt;data persistence&lt;/strong&gt;. This is a challenge for containers in general, and it is even more critical in the context of databases, despite ongoing advances in container maturity. Kubernetes operators are designed to address this gap. While it is possible to use Kubernetes resources like Persistent Volume Claims (PVCs) without operators, operators simplify the process by providing a higher level of abstraction and automation.&lt;/p&gt;
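&lt;p&gt;For reference, this is roughly what requesting storage without an operator looks like; a minimal PersistentVolumeClaim sketch with illustrative name and size:&lt;/p&gt;

```yaml
# Minimal PVC: asks the cluster's default StorageClass for 10 GiB
# that a single node may mount read-write.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```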
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/11/k8s-02.png" alt="kubernetes-operators" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It is possible to create new operators using the &lt;strong&gt;Kubernetes operator pattern&lt;/strong&gt;. This allows you to extend cluster behavior without modifying Kubernetes code by linking controllers to one or more custom resources. Operators use and extend the Kubernetes API, the key component of the Kubernetes architecture through which users interact with the cluster. They create &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#:~:text=A%20custom%20resource%20is%20an,resources%2C%20making%20Kubernetes%20more%20modular." target="_blank" rel="noopener noreferrer"&gt;custom resources&lt;/a&gt; to add new functionality according to the needs of an application, keeping it flexible and scalable. This is how we automate workloads using Kubernetes Operators.&lt;/p&gt;
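&lt;p&gt;As a sketch of the pattern, once an operator’s Custom Resource Definition is installed, a user declares intent with a custom resource and the operator’s controller reconciles the cluster to match it. The kind and fields below are purely hypothetical:&lt;/p&gt;

```yaml
# Hypothetical custom resource: the operator watches objects of this
# kind and performs the backups it describes.
apiVersion: example.com/v1
kind: DatabaseBackup
metadata:
  name: nightly-backup
spec:
  clusterName: my-db-cluster   # database cluster to back up
  schedule: "0 2 * * *"        # cron schedule, 02:00 every night
  destination: s3://my-bucket/backups
```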
&lt;p&gt;One of the primary benefits of operators is the &lt;strong&gt;automation&lt;/strong&gt; of repetitive tasks that are often handled by human operators, reducing errors in application lifecycle management.&lt;/p&gt;
&lt;h3 id="conclusion"&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;In this article, we explored an overview of what Kubernetes Operators are; we saw why they are necessary and the benefits of using them. I hope you have gained a general understanding of why Kubernetes Operators are valuable.&lt;/p&gt;
&lt;p&gt;If you want to know more about Kubernetes operators designed specifically for databases, you can visit the &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;Percona website&lt;/a&gt;, where you will find Kubernetes operators created by Percona for &lt;strong&gt;MongoDB&lt;/strong&gt;, &lt;strong&gt;PostgreSQL&lt;/strong&gt;, and &lt;strong&gt;MySQL&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Are you considering creating your own operator? Start by using the &lt;strong&gt;Operator-SDK&lt;/strong&gt;. Additionally, you can watch &lt;a href="https://www.linkedin.com/in/sergeypronin/" target="_blank" rel="noopener noreferrer"&gt;Sergey Pronin’s&lt;/a&gt; (Group Product Manager At Percona) talk at the DoK Community about Migrating MongoDB to Kubernetes, where he discusses the reasons why Percona created an Operator for MongoDB.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>CNCF</category>
      <category>Kubernetes</category>
      <category>Operators</category>
      <category>Databases</category>
      <media:thumbnail url="https://percona.community/blog/2023/11/k8s-01_hu_ed80426dd6e0a7aa.jpg"/>
      <media:content url="https://percona.community/blog/2023/11/k8s-01_hu_88a1c819ea89bc3f.jpg" medium="image"/>
    </item>
    <item>
      <title>Building and Running Percona Everest From Source Code</title>
      <link>https://percona.community/blog/2023/10/30/building-and-running-percona-everest-from-source-code/</link>
      <guid>https://percona.community/blog/2023/10/30/building-and-running-percona-everest-from-source-code/</guid>
      <pubDate>Mon, 30 Oct 2023 00:00:00 UTC</pubDate>
      <description>Digging deeper into the architecture of an open source product</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Digging deeper into the architecture of an open source product&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Recently, the Percona team &lt;a href="https://www.percona.com/blog/announcing-the-alpha-release-of-percona-everest-an-open-source-private-dbaas/" target="_blank" rel="noopener noreferrer"&gt;announced&lt;/a&gt; the public alpha version of a new open source product, Percona Everest. It allows you to create database clusters on a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;I have installed Percona Everest several times and tried its features. Standard installation is very simple and &lt;a href="https://docs.percona.com/everest/quickstart-guide/qs-overview.html" target="_blank" rel="noopener noreferrer"&gt;takes a few minutes&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But to understand the product more deeply, I decided to explore the repositories and build and run Percona Everest from source.&lt;/p&gt;
&lt;p&gt;In this post, I will explain what I did step by step and what components and frameworks are used in the development of Percona Everest.&lt;/p&gt;
&lt;h2 id="architecture-components-and-tools"&gt;Architecture, components, and tools&lt;/h2&gt;
&lt;p&gt;At the top level, we have two components on the user side: the Percona Everest App and the everestctl CLI tool.&lt;/p&gt;
&lt;p&gt;The Percona Everest App is a basic application that provides a web interface for database creation and management functions. Percona Everest App can be installed on your computer or remote server. The App consists of two major components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The frontend is a browser-based application providing a web interface for managing clusters and interacting with backend APIs. It is developed with React and TypeScript.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The backend API processes requests from the frontend and interacts with Kubernetes clusters and databases. It is developed in Golang and uses PostgreSQL as its database.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;everestctl is a CLI tool for provisioning Percona Everest on Kubernetes clusters. It is used to install Percona Everest components, such as database operators, on the Kubernetes cluster. It is developed in Golang and provided as a ready-made executable, but in this post, we will also build it from source code.&lt;/p&gt;
&lt;p&gt;Remember that normally, when you install Percona Everest following the instructions in the documentation, the frontend and backend are built and integrated into a single container and run as a single unit.&lt;/p&gt;
&lt;p&gt;Let’s get started with our experiments.&lt;/p&gt;
&lt;h2 id="frontend-installation-and-launch"&gt;Frontend installation and launch&lt;/h2&gt;
&lt;p&gt;Percona Everest Frontend is developed using the Bit framework.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bit.dev/" target="_blank" rel="noopener noreferrer"&gt;Bit&lt;/a&gt; is an open source toolchain for the development of composable software using React library and TypeScript.&lt;/p&gt;
&lt;p&gt;Bit is used by about 100K developers, has 250+ community plugins, and has 16K+ stars on &lt;a href="https://github.com/teambit/bit" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="clone-the-frontend-repository"&gt;Clone the Frontend repository&lt;/h3&gt;
&lt;p&gt;You need to clone a repository with the Percona Everest frontend:
&lt;a href="https://github.com/percona/percona-everest-frontend" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-everest-frontend&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git clone git@github.com:percona/percona-everest-frontend.git&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="install-bit"&gt;Install Bit&lt;/h3&gt;
&lt;p&gt;Bit is installed with the npm package manager. &lt;a href="https://www.npmjs.com/" target="_blank" rel="noopener noreferrer"&gt;npm&lt;/a&gt; is a popular registry of JavaScript packages and libraries, containing over 800,000 code packages. Open source developers use npm to share software. Installing npm on your operating system is straightforward (it installs along with Node.js), so I’m sure you can handle it.&lt;/p&gt;
&lt;p&gt;Open the Percona Everest Frontend source directory and run the following commands.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;npm i -g @teambit/bvm&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bvm install 1.0.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bit install --recurring-install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-frontend_hu_68b14b415811cb5f.png 480w, https://percona.community/blog/2023/10/everest-frontend_hu_424f64337a05a1ad.png 768w, https://percona.community/blog/2023/10/everest-frontend_hu_7ad91859b8444a16.png 1400w"
src="https://percona.community/blog/2023/10/everest-frontend.png" alt="Percona Everest Frontend" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="launching-the-frontend-application"&gt;Launching the frontend application&lt;/h3&gt;
&lt;p&gt;Moving forward, we have two options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Run the frontend application using Bit.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build a ready application and copy it to the backend.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Percona Everest frontend repository contains versioned &lt;code&gt;release-[version]&lt;/code&gt; branches, with the current development version in the main branch. We will run the latest dev version from main.&lt;/p&gt;
&lt;p&gt;Let’s run the command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bit run everest --skip-watch&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As a result, we can open &lt;code&gt;localhost:3000&lt;/code&gt; in the browser.&lt;/p&gt;
&lt;p&gt;The Frontend is now built, and we can move on to the Backend.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-front-run_hu_aa37abd3d7484a40.png 480w, https://percona.community/blog/2023/10/everest-front-run_hu_a370f3cbf540ff96.png 768w, https://percona.community/blog/2023/10/everest-front-run_hu_3f9e560d07d7cdcd.png 1400w"
src="https://percona.community/blog/2023/10/everest-front-run.png" alt="Percona Everest Frontend Run" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-front-run-result_hu_17d3378a57f0175f.png 480w, https://percona.community/blog/2023/10/everest-front-run-result_hu_688777866a4175f5.png 768w, https://percona.community/blog/2023/10/everest-front-run-result_hu_111db07964af1301.png 1400w"
src="https://percona.community/blog/2023/10/everest-front-run-result.png" alt="Percona Everest Frontend Result" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="additional-information"&gt;Additional information&lt;/h3&gt;
&lt;p&gt;There is another way to build the frontend to work with the backend. You will need two commands:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bit snap --build&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bit artifacts percona.apps/everest --out-dir build&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this case, the build output is placed in this folder:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;build/percona.apps_everest/artifacts/apps/react-common-js/everest/public/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You need to copy all the files to the &lt;code&gt;public/dist&lt;/code&gt; folder of the backend repository. We will talk about the backend in the next section.&lt;/p&gt;
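&lt;p&gt;As a rough sketch, the copy step can be scripted like this. The example uses throwaway directories created with &lt;code&gt;mktemp&lt;/code&gt; so it is safe to run as-is; in a real setup, substitute the bit output path and your backend checkout:&lt;/p&gt;

```shell
# Sketch of the copy step with placeholder directories.
# In a real setup, SRC is the bit --out-dir output and DST is the
# backend repository's public/dist folder.
WORK=$(mktemp -d)
SRC="$WORK/build/percona.apps_everest/artifacts/apps/react-common-js/everest/public"
DST="$WORK/percona-everest-backend/public/dist"
mkdir -p "$SRC" "$DST"
echo demo > "$SRC/index.html"   # stands in for a real build artifact
cp -R "$SRC"/. "$DST"/          # the trailing /. copies the contents, keeping subfolders
ls "$DST"
```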
&lt;p&gt;The installation process may change over time, so I recommend keeping track of the up-to-date commands in these files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;README.md&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Makefile&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The GitHub CI/CD configuration, the file &lt;code&gt;.github/workflows/ci.yml&lt;/code&gt; in the repository.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="backend"&gt;Backend&lt;/h2&gt;
&lt;p&gt;So we’ve launched the frontend, and it now shows an error because it sends requests to the backend API, which we don’t have yet.&lt;/p&gt;
&lt;p&gt;We will need to clone the Percona Everest Backend repository:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/percona-everest-backend" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-everest-backend&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Percona Everest Backend is developed in Golang using &lt;a href="https://echo.labstack.com/" target="_blank" rel="noopener noreferrer"&gt;the Echo framework&lt;/a&gt;. &lt;a href="https://github.com/labstack/echo" target="_blank" rel="noopener noreferrer"&gt;The Echo repository&lt;/a&gt; has over 26k stars on GitHub.&lt;/p&gt;
&lt;p&gt;Generally, it is an API that interacts with the frontend, processing requests and sending them to the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Let’s get it up and running.&lt;/p&gt;
&lt;h3 id="run-postgresql-locally"&gt;Run PostgreSQL locally&lt;/h3&gt;
&lt;p&gt;You need Docker to run it, so make sure you have &lt;a href="https://www.docker.com/" target="_blank" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; installed.&lt;/p&gt;
&lt;p&gt;Let’s run one of the two commands in the repository directory:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make local-env-up&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up --detach --remove-orphans&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/backend-docker-pg_hu_6ae47758e8022e26.png 480w, https://percona.community/blog/2023/10/backend-docker-pg_hu_232da1c9da0ee991.png 768w, https://percona.community/blog/2023/10/backend-docker-pg_hu_de306eb13379f6fe.png 1400w"
src="https://percona.community/blog/2023/10/backend-docker-pg.png" alt="Percona Everest Backend" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/backend-docker-pg-desktop_hu_b27590def9fdf2a8.png 480w, https://percona.community/blog/2023/10/backend-docker-pg-desktop_hu_938fe5c5c95eb98b.png 768w, https://percona.community/blog/2023/10/backend-docker-pg-desktop_hu_7578521bab7d3420.png 1400w"
src="https://percona.community/blog/2023/10/backend-docker-pg-desktop.png" alt="Percona Everest Backend" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In the future, Docker will be replaced by Kubernetes for this process. You can see the YAML manifest in the file:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/deploy/quickstart-k8s.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="launch-the-go-app"&gt;Launch the Go app&lt;/h3&gt;
&lt;p&gt;We have two options. I use:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go run cmd/main.go&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;But you can also use:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make run-debug&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Starting with version 0.4.0, you will need to set the SECRETS_ROOT_KEY environment variable before starting the application. The same secret key must be reused on restarts unless you are starting from scratch.
&lt;code&gt;export SECRETS_ROOT_KEY=$(openssl rand -base64 32)&lt;/code&gt;&lt;/p&gt;
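&lt;p&gt;One way to keep the same key across restarts is to generate it once, store it in a file, and export it at startup. A minimal sketch, assuming a key file location of my own choosing (the path is not part of Everest):&lt;/p&gt;

```shell
# Generate SECRETS_ROOT_KEY once and reuse it on subsequent restarts.
# The KEY_FILE path is an arbitrary choice for this example.
KEY_FILE="${KEY_FILE:-$HOME/.everest_secrets_root_key}"
if [ ! -f "$KEY_FILE" ]; then
  openssl rand -base64 32 > "$KEY_FILE"
  chmod 600 "$KEY_FILE"   # keep the key private
fi
export SECRETS_ROOT_KEY="$(cat "$KEY_FILE")"
echo "key length: ${#SECRETS_ROOT_KEY}"   # 32 random bytes encode to 44 base64 characters
```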
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/10/backend-go-run.png" alt="Percona Everest Backend Go Run" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now we can open localhost:3000 in the browser again and check that the backend is running. However, we see that no Kubernetes clusters are connected and configured yet.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/backend-go-run-result_hu_3b8a07d031fb452d.png 480w, https://percona.community/blog/2023/10/backend-go-run-result_hu_e3aa160714bb3f48.png 768w, https://percona.community/blog/2023/10/backend-go-run-result_hu_2b2f784c46c5079.png 1400w"
src="https://percona.community/blog/2023/10/backend-go-run-result.png" alt="Percona Everest Backend Go Result" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="everestctl-and-kubernetes-cluster"&gt;Everestctl and Kubernetes cluster&lt;/h2&gt;
&lt;p&gt;Another important component of Percona Everest is everestctl. &lt;a href="https://github.com/percona/percona-everest-cli/" target="_blank" rel="noopener noreferrer"&gt;everestctl&lt;/a&gt; is a CLI tool responsible for provisioning Percona Everest on Kubernetes clusters.&lt;/p&gt;
&lt;p&gt;We will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A Kubernetes cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To build and run everestctl&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="preparation-of-the-kubernetes-cluster"&gt;Preparation of the Kubernetes Cluster&lt;/h3&gt;
&lt;p&gt;You can use a Kubernetes cluster on AWS, Google Cloud, or minikube.&lt;/p&gt;
&lt;p&gt;The Percona Everest documentation says:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;You must have a publicly accessible Kubernetes cluster to use Percona Everest. EKS or GKE is recommended, as it may be difficult to make it work with local installations of Kubernetes such as minikube, kind, k3d, or similar products.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The documentation provides instructions on how to run a test cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/everest/quickstart-guide/eks.html" target="_blank" rel="noopener noreferrer"&gt;Create Kubernetes cluster on Amazon Elastic Kubernetes Service (EKS)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/everest/quickstart-guide/gke.html" target="_blank" rel="noopener noreferrer"&gt;Create Kubernetes cluster on Google Kubernetes Engine (GKE)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this post, we use &lt;a href="https://minikube.sigs.k8s.io/docs/start/" target="_blank" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;, which should already be installed.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/percona-everest-backend/blob/main/Makefile" target="_blank" rel="noopener noreferrer"&gt;The Makefile&lt;/a&gt; of &lt;a href="https://github.com/percona/percona-everest-backend" target="_blank" rel="noopener noreferrer"&gt;the Percona Everest Backend repository&lt;/a&gt; contains the &lt;code&gt;make k8s&lt;/code&gt; command to start the cluster in minikube.&lt;/p&gt;
&lt;p&gt;Let’s open the directory of the backend repository and launch minikube:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make k8s-macos&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make k8s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-minikube_hu_19e62a166c161ceb.png 480w, https://percona.community/blog/2023/10/everest-minikube_hu_36d59f467a0af4e9.png 768w, https://percona.community/blog/2023/10/everest-minikube_hu_cc29da85142e255f.png 1400w"
src="https://percona.community/blog/2023/10/everest-minikube.png" alt="Percona Everest Backend Minikube" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As a result, I see this message in the console:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I also see that I have a minikube cluster with three nodes. It is time to install Percona Everest components using everestctl.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;➜ percona-everest-backend git:(main) ✗ kubectl get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube Ready control-plane 3m44s v1.26.3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube-m02 Ready &lt;none&gt; 3m22s v1.26.3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube-m03 Ready &lt;none&gt; 3m3s v1.26.3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="build-everestctl"&gt;Build everestctl&lt;/h3&gt;
&lt;p&gt;Let’s clone the repository: &lt;a href="https://github.com/percona/percona-everest-cli/" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/percona-everest-cli/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Open the repository folder and run build:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make build&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;everestctl is built as the binary &lt;code&gt;bin/everest&lt;/code&gt; in the repository folder.&lt;/p&gt;
&lt;p&gt;Grant execution privileges:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;chmod +x ./bin/everest&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s run the Percona Everest installation:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;./bin/everest install operators&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;That command will start the installation wizard.&lt;/p&gt;
&lt;p&gt;I will keep all the default values by simply pressing Enter, but you can experiment with the settings if you like.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-wizards_hu_a904831db6cc2923.png 480w, https://percona.community/blog/2023/10/everest-wizards_hu_f0cbde7b620730b4.png 768w, https://percona.community/blog/2023/10/everest-wizards_hu_ab37b446973d79a1.png 1400w"
src="https://percona.community/blog/2023/10/everest-wizards.png" alt="Percona Everest Wizard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As a result, the following processes will run on the cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Creating the namespace &lt;code&gt;percona-everest&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing Operator Lifecycle Manager (OLM).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing &lt;a href="https://github.com/percona/everest-catalog" target="_blank" rel="noopener noreferrer"&gt;Percona OLM Catalog&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing Percona Operators for the databases selected in the wizard.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing &lt;a href="https://github.com/percona/everest-operator" target="_blank" rel="noopener noreferrer"&gt;everest-operator&lt;/a&gt; operator.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creating services and roles.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That’s it! Percona Everest is now fully installed. You can open it in a browser and create a database.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-finish-start_hu_906bcca62f6d5dec.png 480w, https://percona.community/blog/2023/10/everest-finish-start_hu_b003b5e66c04e34d.png 768w, https://percona.community/blog/2023/10/everest-finish-start_hu_efa0208f4c53c3.png 1400w"
src="https://percona.community/blog/2023/10/everest-finish-start.png" alt="Percona Everest Start" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-create-db_hu_f50e69a221bac503.png 480w, https://percona.community/blog/2023/10/everest-create-db_hu_8e1f19bf1d420a5a.png 768w, https://percona.community/blog/2023/10/everest-create-db_hu_1fcc40e7ebfee3cf.png 1400w"
src="https://percona.community/blog/2023/10/everest-create-db.png" alt="Percona Everest Create DB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="whats-next"&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;Try creating databases with different configurations.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/everest-dbs_hu_6d0c46e8e381c830.png 480w, https://percona.community/blog/2023/10/everest-dbs_hu_d10fbd6316d4d763.png 768w, https://percona.community/blog/2023/10/everest-dbs_hu_4a62d6fdf8ccb0ed.png 1400w"
src="https://percona.community/blog/2023/10/everest-dbs.png" alt="Percona Everest Create DB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Repeat the installation with a different cluster or settings.&lt;/p&gt;
&lt;p&gt;If you face any problems or have ideas on how to improve components, create Issues on GitHub in the appropriate repositories.&lt;/p&gt;
&lt;h3 id="stop-and-remove-percona-everest"&gt;Stop and remove Percona Everest&lt;/h3&gt;
&lt;p&gt;Once you have finished your experiments, you can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Stop the frontend: stop the running Bit process by pressing CTRL+C in the console.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop the backend API: stop the Go application by pressing CTRL+C in the console.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop and remove PostgreSQL in Docker: run &lt;code&gt;make local-env-down&lt;/code&gt; in the backend repository, or stop it via Docker Desktop.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Remove the Kubernetes cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="updates"&gt;Updates&lt;/h3&gt;
&lt;p&gt;Every day, developers make changes to the code and publish them to the repositories on GitHub.&lt;/p&gt;
&lt;p&gt;You can stop a component, pull the changes with &lt;code&gt;git pull&lt;/code&gt;, and start the new version. This workflow is only for experimentation and development: some versions of the components will not be compatible with each other, so if that happens, uninstall all the components and start over with the appropriate versions. Detailed upgrade instructions will appear later.&lt;/p&gt;
&lt;p&gt;You can see changes to the build process or parameters in the repositories.&lt;/p&gt;
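&lt;p&gt;A small convenience sketch for updating all three components at once. The directory names are assumptions based on the repository names, and the loop simply skips anything you have not cloned:&lt;/p&gt;

```shell
# Pull the latest changes in each component checkout, if present.
skipped=0
for repo in percona-everest-frontend percona-everest-backend percona-everest-cli; do
  if [ -d "$repo/.git" ]; then
    git -C "$repo" pull --ff-only   # fast-forward only, to avoid surprise merges
  else
    echo "skipping $repo (not cloned here)"
    skipped=$((skipped+1))
  fi
done
echo "$skipped repositories skipped"
```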
&lt;h3 id="couple-of-useful-commands"&gt;Couple of useful commands&lt;/h3&gt;
&lt;p&gt;List of databases:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl -n percona-everest get db&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;List of pods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl -n percona-everest get pods&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="conclusion"&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;I hope you made it this far and found it interesting.&lt;/p&gt;
&lt;p&gt;I’d love for you to leave your feedback in the comments.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>Percona Everest</category>
      <category>Kubernetes</category>
      <category>Opensource</category>
      <category>DBaaS</category>
      <media:thumbnail url="https://percona.community/blog/2023/10/everest-cover_hu_cf1bc6ca93ed5516.jpg"/>
      <media:content url="https://percona.community/blog/2023/10/everest-cover_hu_e8628b81a62303b.jpg" medium="image"/>
    </item>
    <item>
      <title>Kubernetes Community Days UK: Keynote Cilium and eBPF</title>
      <link>https://percona.community/blog/2023/10/24/kcduk-cilium-ebpf/</link>
      <guid>https://percona.community/blog/2023/10/24/kcduk-cilium-ebpf/</guid>
      <pubDate>Tue, 24 Oct 2023 00:00:00 UTC</pubDate>
      <description>This week, at Kubernetes Community Days UK in London, Liz Rice, Chief Open Source Officer at Isovalent, delivered a keynote on Cilium, eBPF, and the new feature of Cilium: Mutual Authentication.</description>
      <content:encoded>&lt;p&gt;This week, at &lt;a href="https://community.cncf.io/events/details/cncf-kcd-uk-presents-kubernetes-community-days-uk-2023/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Community Days UK&lt;/a&gt; in London, &lt;strong&gt;Liz Rice&lt;/strong&gt;, Chief Open Source Officer at Isovalent, delivered a keynote on &lt;strong&gt;Cilium, eBPF&lt;/strong&gt;, and the new feature of &lt;strong&gt;Cilium: Mutual Authentication&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/10/kcduk-01_hu_bd6faf10c7fc4fca.jpg 480w, https://percona.community/blog/2023/10/kcduk-01_hu_aa53236b59c4188b.jpg 768w, https://percona.community/blog/2023/10/kcduk-01_hu_13ae809b73043ef5.jpg 1400w"
src="https://percona.community/blog/2023/10/kcduk-01.jpg" alt="lizrice-keynote-01" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Figure 1&lt;/strong&gt;. Liz Rice keynote at KCD UK, London. Tuesday, October 17, 2023&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cilium.io/" target="_blank" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt; is an &lt;strong&gt;eBPF-powered open source&lt;/strong&gt;, cloud native solution for delivering, securing, and observing network connectivity between workloads.&lt;/p&gt;
&lt;p&gt;eBPF is a technology that allows us to create modules to modify the behavior of the Linux kernel, but why would we want to change the Linux kernel?&lt;/p&gt;
&lt;p&gt;Some use cases for observability, security, and networking require tracking and monitoring our application, but we don’t want to constantly modify our application with these changes. It’s better to add a program that can observe the behavior of our application from the kernel.&lt;/p&gt;
&lt;p&gt;But changing the Linux kernel can be, well, hard.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/10/kcduk-02.png" alt="addfea-to-the-kernel" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Figure 2&lt;/strong&gt;. Adding features to the kernel (cartoon by Vadim Shchekoldin, Isovalent)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;However, eBPF enables you to modify the kernel’s behavior without directly altering the kernel itself. It might sound unconventional, but eBPF makes this possible through the creation of programs for the Linux kernel. The Linux kernel accepts eBPF programs that can be loaded and unloaded as needed.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/10/kcduk-03.png" alt="addingfeatures-to-the-kernel-with-ebpf" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Figure 3&lt;/strong&gt;. Adding kernel features with eBPF (cartoon by Vadim Shchekoldin, Isovalent)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To ensure that the eBPF programs we write are secure, there is a mechanism in place that verifies they are safe to execute. You can read more about it in &lt;a href="https://ebpf.io/what-is-ebpf/#ebpf-safety" target="_blank" rel="noopener noreferrer"&gt;eBPF verification and security&lt;/a&gt;. You don’t have to restart the kernel to deploy or remove eBPF programs, which makes eBPF one of the technology tools of the moment.&lt;/p&gt;
&lt;p&gt;Liz also announced that Cilium recently graduated from CNCF. This means Cilium is considered stable and has been successfully used in production environments.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/10/kcduk-04.png" alt="cncf-project-maturity-levels" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Figure 4&lt;/strong&gt;. &lt;a href="https://www.cncf.io/project-metrics/" target="_blank" rel="noopener noreferrer"&gt;CNCF Project Maturity Levels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;After understanding what eBPF is, let’s move on to the actual topic of Liz’s keynote. She spoke about &lt;strong&gt;Mutual Authentication with Cilium&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Mutual Authentication with Cilium was the last significant feature missing from Cilium Service Mesh. It’s a somewhat more complex topic related to mTLS (mutual Transport Layer Security).&lt;/p&gt;
&lt;p&gt;mTLS is a mechanism that ensures the authenticity, integrity, and confidentiality of data exchanged between two entities in the network.&lt;/p&gt;
&lt;p&gt;In &lt;a href="https://isovalent.com/blog/post/cilium-release-114/" target="_blank" rel="noopener noreferrer"&gt;Cilium 1.14&lt;/a&gt;, one of the most significant releases, Cilium introduces support for a feature that many developers have requested: mutual authentication. This feature simplifies the process of achieving mutual authentication between two workloads. It now only requires adding two lines of code to the YAML in the Cilium Network Policy to authenticate communication between two workloads.&lt;/p&gt;
&lt;p&gt;Slightly more complex, isn’t it? Let’s explore Mutual Authentication with Cilium in a second blog post very soon. We’ll also examine how this is related to Kubernetes and why it matters when running databases on Kubernetes.&lt;/p&gt;
&lt;p&gt;Check out the other &lt;a href="https://www.cncf.io/projects/" target="_blank" rel="noopener noreferrer"&gt;Graduated and Incubating Projects at CNCF&lt;/a&gt;, and don’t forget to subscribe to our &lt;a href="https://percona.community/blog/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Blog&lt;/a&gt; to read more about open source, CNCF projects, and databases.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Events</category>
      <category>Kubernetes</category>
      <media:thumbnail url="https://percona.community/blog/2023/10/kcduk-01_hu_1fbc1ab2e4b72516.jpg"/>
      <media:content url="https://percona.community/blog/2023/10/kcduk-01_hu_92c2c0c5c7a1073a.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.40 preview release</title>
      <link>https://percona.community/blog/2023/10/03/preview-release/</link>
      <guid>https://percona.community/blog/2023/10/03/preview-release/</guid>
      <pubDate>Tue, 03 Oct 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.40 preview release Hello folks! Percona Monitoring and Management (PMM) 2.40 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-240-preview-release"&gt;Percona Monitoring and Management 2.40 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.40 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;To see the full list of changes, check out the &lt;a href="https://pmm-doc-pr-1139.onrender.com/release-notes/2.40.0.html" target="_blank" rel="noopener noreferrer"&gt;PMM 2.40 Release Notes&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker-installation"&gt;PMM server Docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server with Docker instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.40.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-5830.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download&lt;/a&gt; the latest pmm2-client release candidate tarball for 2.40.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Install pmm2-client package for your OS via Package Manager.&lt;/li&gt;
&lt;/ol&gt;
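Taken together, the client installation steps above can be sketched as a shell session (illustrative only; the final install command depends on your OS and package manager):

```shell
# 1. Download the 2.40 release-candidate tarball (alternative to repo install)
wget https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-5830.tar.gz

# 2. Enable the Percona testing repository
sudo percona-release enable percona testing

# 3. Install pmm2-client with your OS package manager, e.g. on Debian/Ubuntu:
sudo apt update && sudo apt install pmm2-client
# or on RHEL-based systems:
# sudo yum install pmm2-client
```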
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-moitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server as a VM instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.40.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.40.0.ova file&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server hosted at AWS Marketplace instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-09895e9b605f14cbc&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Open Source And Recession – An Economic Outlook</title>
      <link>https://percona.community/blog/2023/09/14/open-source-and-recession-an-economic-outlook/</link>
      <guid>https://percona.community/blog/2023/09/14/open-source-and-recession-an-economic-outlook/</guid>
      <pubDate>Thu, 14 Sep 2023 00:00:00 UTC</pubDate>
      <description>Initially, I dropped an idea of writing this blog by reckoning the amount of time and energy required to perform research. However, my mentor Ramona Gerum motivated me and provided inputs to write this blog. This would not have been possible without her help. Thank you very much Ramona.</description>
      <content:encoded>&lt;p&gt;Initially, I dropped an idea of writing this blog by reckoning the amount of time and energy required to perform research. However, my mentor Ramona Gerum motivated me and provided inputs to write this blog. This would not have been possible without her help. Thank you very much Ramona.&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In October 2022, the term “recession” gained popularity on Google Trends. This happened as many economists had already predicted a slowdown, and it started flashing on news channels worldwide. In the following year, many giant corporations, such as Google, Microsoft, Amazon and others, announced massive layoffs; this was followed by a chain of other events, from stock market crashes to high inflation rates across the globe. These developments reinforced people’s belief in a recession.&lt;/p&gt;
&lt;p&gt;The majority of organisations that ran several rounds of layoffs claimed to have overstaffing issues, as hiring had been done in anticipation of an upcoming tide of work that never came, and they did not fire any technical/software experts. In such cases, open source products often rise as an alternative to proprietary ones. This blog will focus on the effects of a recession on open source products.&lt;/p&gt;
&lt;h2 id="some-important-related-to-economics"&gt;Some important related to economics&lt;/h2&gt;
&lt;p&gt;As this blog focuses on recession, it is important to understand a few economic terms first.&lt;/p&gt;
&lt;h3 id="gdpgross-domestic-product"&gt;GDP(Gross Domestic Product):&lt;/h3&gt;
&lt;p&gt;This is the gross monetary value of goods and services produced within a country. Every country produces various products, such as food, heavy machinery, oil and others. Those commodities would give some value upon selling them in the market. The gross total of the values of all such goods is GDP.&lt;/p&gt;
&lt;p&gt;Generally, it is calculated on an annual basis.&lt;/p&gt;
&lt;h3 id="gdp-growth"&gt;GDP Growth:&lt;/h3&gt;
&lt;p&gt;The change in GDP compared to the previous GDP figure is GDP growth. In general, economists calculate them on a quarterly and yearly basis as they represent the economic position of a country and predict its growth in the near and distant future.&lt;/p&gt;
&lt;h3 id="developed-countries"&gt;Developed countries:&lt;/h3&gt;
&lt;p&gt;This one is a little complex, as more than one key factor comes into play. For a country to be considered developed, everything from GDP to standard of living and HDI (Human Development Index) should be taken into consideration. In economic terms, countries with a GDP of 12,000 USD per capita (some experts put it at 24,000 USD) are considered part of the developed world.&lt;/p&gt;
&lt;h3 id="boom"&gt;Boom:&lt;/h3&gt;
&lt;p&gt;A period of good economic growth: a rising number of jobs, increasing wages, and increased profitability. It is characterized as a short stint of wealth creation and accumulation.&lt;/p&gt;
&lt;h3 id="recession"&gt;Recession:&lt;/h3&gt;
&lt;p&gt;A period when economic growth is very low or nil. Higher unemployment, lower demand in the job market, higher inflation and layoffs are commonly observed during such periods.&lt;/p&gt;
&lt;p&gt;In economics, negative GDP growth for 2 consecutive quarters is perceived as a recession, and if the recession persists for 3 years, it is called an economic depression. However, economists analyze other indicators as well to decide whether a recession prevails.&lt;/p&gt;
&lt;h3 id="inflation-rate"&gt;Inflation rate:&lt;/h3&gt;
&lt;p&gt;The average percentage increase in the gross price of various commodities over the previous figure. For example, if milk was sold at 1.5 USD in 2018 and at 1.55 USD in 2019, the price rise is roughly 3.33%, which is the annual inflation rate of milk. Usually, the gross percentage across commodities is considered when assessing the condition of a nation. It is one of the important indicators.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ideally, a 3-4% inflation rate is considered healthy.&lt;/li&gt;
&lt;li&gt;A rate higher than 5% affects people’s livelihoods, and many people may be pushed below the poverty line.&lt;/li&gt;
&lt;li&gt;A rate lower than 2% suggests the economy is in reverse gear, which is not good for the growth of any country.&lt;/li&gt;
&lt;/ul&gt;
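The milk example above can be checked with a few lines of Python (the figures are this blog’s own illustrative numbers, not real price data):

```python
def inflation_rate(old_price: float, new_price: float) -> float:
    """Percentage increase of new_price over old_price."""
    return (new_price - old_price) / old_price * 100

# Milk at 1.5 USD in 2018 and 1.55 USD in 2019:
rate = inflation_rate(1.5, 1.55)
print(round(rate, 2))  # ~3.33% annual inflation for milk
```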
&lt;h3 id="third-world-countries"&gt;Third-world countries:&lt;/h3&gt;
&lt;p&gt;This term is often used in a pejorative manner; it was originally used for countries that preferred not to align themselves with either the USSR (socialist) or the USA (capitalist) during the Cold War, which surfaced after the Second World War. However, as they did not choose a side, they failed to grow like the developed countries, and hence the term “third-world countries” became synonymous with underdeveloped or developing countries.&lt;/p&gt;
&lt;h2 id="causes-of-a-recession"&gt;Causes of a recession&lt;/h2&gt;
&lt;p&gt;A recession is caused by a disruption in an established economic flow. As a country grows, various channels are established through which money flows from one party to another, and that is how money keeps circulating through the system. Whenever this rotation of money is affected, it gradually affects the economy of the country.&lt;/p&gt;
&lt;p&gt;For instance, consider the below existing channel.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Farmers grow crops&lt;/li&gt;
&lt;li&gt;During crop harvesting seasons, farmers employ additional people&lt;/li&gt;
&lt;li&gt;Farmers sell goods to wholesalers&lt;/li&gt;
&lt;li&gt;Wholesalers sell it further to retail stores and business houses&lt;/li&gt;
&lt;li&gt;Retail stores sell it to consumers, and business houses produce goods for consumers&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here, if the country is affected by a drought and no other method of farming is available, everyone in the above hierarchy, from farmers to employees of retail stores and business houses, will become jobless for that year, and the flow of money gets disrupted. Note that food is an essential commodity, so after a crop failure the demand for food items rises exponentially and their prices skyrocket.&lt;/p&gt;
&lt;p&gt;Note that the effects of a recession will not be visible immediately after the drought; they take time to surface. Although the above is only an example, it is not a completely hypothetical scenario.&lt;/p&gt;
&lt;p&gt;Drought is a classic example, but various phenomena can drive an economy into recession, depending entirely on the economic condition of the country.&lt;/p&gt;
&lt;h3 id="in-the-third-world-countries"&gt;In the third-world countries:&lt;/h3&gt;
&lt;p&gt;This group already faces a number of internal problems and depends on prosperous countries. They may or may not have adequate resources, and even when they do, they often cannot utilise them efficiently. Also, due to internal clashes, poverty and economic instability, they witness recessions very frequently. Some evident factors for recession in such countries are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Drought&lt;/li&gt;
&lt;li&gt;Heavy rainfall&lt;/li&gt;
&lt;li&gt;Wars&lt;/li&gt;
&lt;li&gt;Epidemic&lt;/li&gt;
&lt;li&gt;International sanctions&lt;/li&gt;
&lt;li&gt;Political instability&lt;/li&gt;
&lt;li&gt;Uncontrollable inflation&lt;/li&gt;
&lt;li&gt;Corruption and scams&lt;/li&gt;
&lt;li&gt;Strikes&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="in-the-developed-countries"&gt;In the developed countries:&lt;/h3&gt;
&lt;p&gt;The developed world is often perceived as a utopia, as processes are well established there. The scope for recession appears limited; however, this is just one side of the coin. There is a line in the movie Wall Street: “Greed is good! Greed is progress!….” I quote it here because greed is hardwired into human nature, propelling people to earn more money and exploit the system’s loopholes. Over time, this causes an economic slowdown. Some of the driving factors for recession in such countries are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Policy loopholes&lt;/li&gt;
&lt;li&gt;Stock market crash&lt;/li&gt;
&lt;li&gt;Bad loans&lt;/li&gt;
&lt;li&gt;Wars&lt;/li&gt;
&lt;li&gt;Epidemic&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also, any economic slowdown in an influential country resonates worldwide. Consider the classic case of the great recession of 2008: it started with the housing bubble that resulted in the subprime mortgage crisis. In other words, amid sudden hikes in house prices, banks started approving loans against collateral worth far less than the loan amount; for example, a loan of 100K USD backed by a mortgaged asset worth only 10K USD. People with poor credit ratings also got loans. The bubble eventually burst, triggering economic turmoil, millions of job losses and the closure of businesses.&lt;/p&gt;
&lt;h2 id="repetitions-and-durations-of-recessions"&gt;Repetitions And Durations Of Recessions&lt;/h2&gt;
&lt;p&gt;As described in the last paragraph, the reasons for recessions vary from country to country, and recessions in different countries do not necessarily coincide. Hence, the duration and repetition pattern of every recession is different.&lt;/p&gt;
&lt;p&gt;Earlier, it was believed that booms and recessions follow each other. However, over time, the political structures and legal systems in developed countries have evolved in such a way that their economies stay more resilient against recession.&lt;/p&gt;
&lt;p&gt;Taking the case of the US, before the 1980s economic slowdowns were frequent, arriving roughly every 3-5 years. Since the 80s, however, the gap between recessions has widened significantly.&lt;/p&gt;
&lt;p&gt;The snippet below, taken from &lt;a href="https://www.investopedia.com/articles/economics/08/past-recessions.asp" target="_blank" rel="noopener noreferrer"&gt;Investopedia&lt;/a&gt;, shows the periods of recession (“NBER Recessions” and “Length of Recession (Months)”) in the USA.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/09/opensource-01.jpeg" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;A recession does not necessarily prevail across multiple countries at the same time. That said, the great recession of 2008 was the first global recession, and it cannot be considered an exception, as it marked the beginning of the era of globalisation. New waves of recession will have a global impact.&lt;/p&gt;
&lt;h2 id="performance-of-open-source-during-and-after-recessions"&gt;Performance of open source during and after recessions&lt;/h2&gt;
&lt;p&gt;During every recession, it has been observed that organisations lean towards less costly products so that they can cut costs and maintain profitability. Open source software emerges as a boon for them because it saves them product licensing costs.&lt;/p&gt;
&lt;p&gt;The amount required to acquire licenses is very high and, during a recession, simply not affordable for many companies. Companies may run without new development work for a certain period and may let some staff go; however, platforms and databases are crucial and cannot be eliminated, as that would be tantamount to shutting down the company.&lt;/p&gt;
&lt;p&gt;Deploying open source products is the most viable option in such cases, as it lets companies continue running their businesses.&lt;/p&gt;
&lt;p&gt;The Linux Foundation conducted a survey on OSS (Open Source Software) in which 431 people from Fortune 500 companies, working in middle-level to top-level management, participated. They were asked various questions about open source and its benefits. When asked about adopting open source products, their answers were as shown in the chart below.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/09/opensource-02.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://project.linuxfoundation.org/hubfs/LF%20Research/Measuring%20the%20Economic%20Value%20of%20Open%20Source%20-%20Report.pdf?hsLang=en" target="_blank" rel="noopener noreferrer"&gt;Source – Page 10&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Analysing the chart, we can see surges in demand for open source in both the dot-com bubble burst (2001) and the great recession (2008). A sharp rise from 2016 can be noticed as well, although overall demand for open source has remained moderate.&lt;/p&gt;
&lt;p&gt;From this, it can be seen that open source software is resilient against recessions and keeps flourishing during and after hard times.&lt;/p&gt;
&lt;h2 id="can-open-source-keeps-recessions-at-bay"&gt;Can Open Source Keeps Recessions At Bay?&lt;/h2&gt;
&lt;p&gt;This is a really interesting question that requires further research. Over the last few years, companies have become more optimistic about open source software products, such as MySQL, PostgreSQL, Kubernetes and so on, because these products provide the features that meet industry standards and requirements. Over time, open source databases, operating systems and web/app servers have evolved by working with various industries, and they now meet the requirements that companies set. As a result, open source has successfully carved a niche in the market.&lt;/p&gt;
&lt;p&gt;The noticeable thing is that organisations do not need to pay a penny to use these products. Also, if a company can afford to hire a small team of developers, it can maintain its own version of the product. Where this is not viable, there are communities and a number of small- to medium-scale companies that provide support for open source.&lt;/p&gt;
&lt;p&gt;Focusing on the cost of products gives a clearer view of open source’s ability to stem a recession.&lt;/p&gt;
&lt;h3 id="the-case-of-2008s-great-recession"&gt;The case of 2008’s great recession:&lt;/h3&gt;
&lt;p&gt;We have all heard tales of the 2008 recession: heavy pay cuts, job losses, suicides, loss of property and so on. As I mentioned earlier with an example, a disruption in the flow of money causes recession, and in 2008, banks sanctioned loans that could not be repaid. As a result, banks defaulted, which badly hit the economy of the USA and, in turn, the whole world.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://www.washingtonpost.com/business/economy/a-guide-to-the-financial-crisis--10-years-later/2018/09/10/114b76ba-af10-11e8-a20b-5f4f84429666_story.html" target="_blank" rel="noopener noreferrer"&gt;Washington Post&lt;/a&gt; highlighted some striking figures about its impact and losses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Loss in the global economy – nearly 2 trillion USD&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The US stock market loss – around 8 trillion USD&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Gross individual losses – around 9.8 trillion USD&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="a-hypothetical-scenario-with-an-open-source-in-2008"&gt;A hypothetical scenario with an open source in 2008:&lt;/h3&gt;
&lt;p&gt;Would anything have changed if organisations had deployed open source products instead of proprietary software back in 2008? As we can understand, the stock market and individual losses could not have been recovered this way, because they bear no relation to the product used in the backend or frontend.&lt;/p&gt;
&lt;p&gt;To understand this part, we have to look at some revenues. Microsoft’s revenue was nearly 52 billion USD in 2008, and if we add up the revenues of other proprietary software vendors, their collective revenue would hardly reach half of the global economic loss (2 trillion USD). Even if we consider only the losses incurred due to proprietary products, open source could not have staved off the recession, as the number itself is very large.&lt;/p&gt;
&lt;p&gt;As per Statista, the gross revenue of open source services was 32 billion USD in 2022. Considering this figure, open source could certainly have helped mitigate the situation to some extent in 2008: many jobs could have been saved, and many organisations might have avoided closure.&lt;/p&gt;
&lt;h3 id="open-sources-role-after-2008"&gt;Open source’s role after 2008:&lt;/h3&gt;
&lt;p&gt;The economic turbulence arose from loopholes in policy, so the US government took corrective action and introduced a number of overhauls to economic policy, which made the economy more stable and immune to unwanted slowdowns.&lt;/p&gt;
&lt;p&gt;Alongside this, open source gained popularity and percolated into different industries. Many companies moved away from proprietary products and adopted open source products, finding them cost-effective with all the required features available. Of course, a penny saved is a penny earned. However, it seems no proper analysis was made of the economic impact of open source’s entry into the market. Also, seeing the potential of open source, a number of giant corporations, such as Google, Microsoft and others, have started contributing to the open source community.&lt;/p&gt;
&lt;p&gt;But are those who contribute to open source actually benefiting? The answer lies in the survey result below.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/09/opensource-03.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://project.linuxfoundation.org/hubfs/LF%20Research/Measuring%20the%20Economic%20Value%20of%20Open%20Source%20-%20Report.pdf?hsLang=en" target="_blank" rel="noopener noreferrer"&gt;Source – Page 13&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.cnbc.com/2013/06/11/pimco-sees-60-chance-of-global-recession-in-35-years.html" target="_blank" rel="noopener noreferrer"&gt;Many economists predicted recession in the US between 2015-2018&lt;/a&gt;, which did not actually happen. Also, while I am writing this blog, experts have already announced a recession in 2023-24, which is yet to come true. Though I firmly believe that open source is pushing the recession back, it is difficult to prove it in absence of surveys and data. I hope some experts may pick this topic and make a detailed analysis on this part; I am sure it will give a great push to the open source community.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Booms and recessions used to repeat at regular intervals; however, over time, due to economic advancements, recessions have become infrequent, recurring only after several years. Not all countries necessarily face an economic slowdown at the same time, and the reasons for slowdowns differ as well. During such periods, organizations have to bear losses and falling revenues, while open source sees greater adoption. It is also possible that open source is keeping recessions away, but there is no data to back this claim.&lt;/p&gt;</content:encoded>
      <author>Ninad Shah</author>
      <category>Opensource</category>
      <category>Community</category>
      <media:thumbnail url="https://percona.community/blog/2023/09/opensource-cover_hu_6cb7591bae270152.jpeg"/>
      <media:content url="https://percona.community/blog/2023/09/opensource-cover_hu_e5306d1727cda1b5.jpeg" medium="image"/>
    </item>
    <item>
      <title>Dolphie, your real-time MySQL monitoring assistant</title>
      <link>https://percona.community/blog/2023/08/22/dolphie-your-real-time-mysql-monitoring-assistant/</link>
      <guid>https://percona.community/blog/2023/08/22/dolphie-your-real-time-mysql-monitoring-assistant/</guid>
      <pubDate>Tue, 22 Aug 2023 00:00:00 UTC</pubDate>
      <description>For as long as I can remember, Innotop has been the go-to terminal tool for real-time MySQL monitoring. It is an invaluable addition to any DBA’s toolkit, but unfortunately, it’s not really actively maintained these days, except for addressing critical issues, and it hasn’t kept pace with the evolving capabilities of modern terminals. With no viable alternatives except for InnotopGo, which is also no longer actively maintained and limited to MySQL 8 (while many still use 5.7), I decided to build my own in Python.</description>
      <content:encoded>&lt;p&gt;For as long as I can remember, &lt;a href="https://github.com/innotop/innotop" target="_blank" rel="noopener noreferrer"&gt;Innotop&lt;/a&gt; has been the go-to terminal tool for real-time MySQL monitoring. It is an invaluable addition to any DBA’s toolkit, but unfortunately, it’s not really actively maintained these days, except for addressing critical issues, and it hasn’t kept pace with the evolving capabilities of modern terminals. With no viable alternatives except for &lt;a href="https://github.com/lefred/innotopgo" target="_blank" rel="noopener noreferrer"&gt;InnotopGo&lt;/a&gt;, which is also no longer actively maintained and limited to MySQL 8 (while many still use 5.7), I decided to build my own in Python.&lt;/p&gt;
&lt;center&gt;I call it Dolphie&lt;/center&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/08/dolphie-150.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Initially, I relied on Python’s Rich package for the user interface. However, I stumbled upon &lt;a href="https://textual.textualize.io" target="_blank" rel="noopener noreferrer"&gt;Textual&lt;/a&gt; a few months ago, and it piqued my interest. It’s a framework that extends the capabilities of Rich, opening up a world of possibilities in the terminal. After experimenting with it for a few days, I was inspired to redevelop Dolphie with it, and I’ve been thoroughly pleased with the results. It has allowed me to showcase many of the features displayed in this blog post!&lt;/p&gt;
&lt;h3 id="getting-started"&gt;Getting started&lt;/h3&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_dashboard_processlist_hu_24be22e07edc4bf4.png 480w, https://percona.community/blog/2023/08/dolphie_dashboard_processlist_hu_df05cb1e26fb3c6b.png 768w, https://percona.community/blog/2023/08/dolphie_dashboard_processlist_hu_92b0f1475efd055e.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_dashboard_processlist.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;When you first start Dolphie, you’ll be greeted with a dashboard displaying various important MySQL metrics, along with a sparkline below it to measure the QPS (Queries per second) + process list. There are multiple ways to manipulate the process list, such as changing how it sorts, filtering by user/host/query text/database/time, killing threads, and much more.&lt;/p&gt;
&lt;p&gt;There are currently four panels that can be toggled interchangeably for display:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dashboard&lt;/li&gt;
&lt;li&gt;Process list&lt;/li&gt;
&lt;li&gt;Replication/Replicas&lt;/li&gt;
&lt;li&gt;Graph Metrics&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A big perk of transitioning to Textual is the integration of graphs. It’s as if I’ve incorporated a mini-PMM (Percona Monitoring and Management) right into Dolphie! The switches you see can be toggled on and off to display or hide their corresponding metrics on the graph.&lt;/p&gt;
&lt;h4 id="buffer-pool-requests-graph--replication-panel"&gt;Buffer Pool Requests Graph + Replication Panel&lt;/h4&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_buffer_pool_hu_fdce19cbdc265c45.png 480w, https://percona.community/blog/2023/08/dolphie_buffer_pool_hu_988946439dee2f04.png 768w, https://percona.community/blog/2023/08/dolphie_buffer_pool_hu_62e296398fe406a0.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_buffer_pool.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="checkpoint-graph"&gt;Checkpoint Graph&lt;/h4&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_checkpoint_hu_5033305ee10d6583.png 480w, https://percona.community/blog/2023/08/dolphie_checkpoint_hu_74a6e4c86ec95af9.png 768w, https://percona.community/blog/2023/08/dolphie_checkpoint_hu_5a698e4c73d34dcb.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_checkpoint.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="redo-logs-graph"&gt;Redo Logs Graph&lt;/h4&gt;
&lt;p&gt;How are your redo logs performing? Dolphie shows you how much data is being written per second, the active count of redo logs (MySQL 8 only), and how much data is being written to it per hour (inspired by &lt;a href="https://www.percona.com/blog/how-to-calculate-a-good-innodb-log-file-size" target="_blank" rel="noopener noreferrer"&gt;this&lt;/a&gt; blog post)
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_redo_log_hu_9910b81fe35285ca.png 480w, https://percona.community/blog/2023/08/dolphie_redo_log_hu_6b9afd6973d0a0db.png 768w, https://percona.community/blog/2023/08/dolphie_redo_log_hu_72f024a19a7ff856.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_redo_log.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="dml-graph"&gt;DML Graph&lt;/h4&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_dml_hu_75d0ff8eec2097c8.png 480w, https://percona.community/blog/2023/08/dolphie_dml_hu_994150f7053e54e3.png 768w, https://percona.community/blog/2023/08/dolphie_dml_hu_e93fcf1a3f10c47b.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_dml.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="thread-data"&gt;Thread data&lt;/h4&gt;
&lt;p&gt;Dolphie lets you display a thread’s information with an explanation of its query along + transaction history
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_thread_details_hu_8f8c1fb75755d18c.png 480w, https://percona.community/blog/2023/08/dolphie_thread_details_hu_b496012bf78e57c4.png 768w, https://percona.community/blog/2023/08/dolphie_thread_details_hu_63f9f4bd3b0b4058.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_thread_details.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="kill-threads"&gt;Kill threads&lt;/h4&gt;
&lt;p&gt;Dolphie lets you terminate threads using a selected option. Notice how it autocompletes the input for you. This is a feature across the board. It will autocomplete any input that it can
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_kill_threads_by_parameters_hu_c4f31ce433898534.png 480w, https://percona.community/blog/2023/08/dolphie_kill_threads_by_parameters_hu_964130d68a4a7dfb.png 768w, https://percona.community/blog/2023/08/dolphie_kill_threads_by_parameters_hu_96107756d31feeb6.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_kill_threads_by_parameters.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="quick-switch-host"&gt;Quick switch host&lt;/h4&gt;
&lt;p&gt;After using Dolphie extensively myself, I realized the need to simplify host switching. I found myself restarting it frequently just to change the host. This feature saves all the hosts you’ve connected to, allowing for autocomplete when you want to switch
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_quick_host_switch_hu_5d3c6a08487285f0.png 480w, https://percona.community/blog/2023/08/dolphie_quick_host_switch_hu_169ea628728d9861.png 768w, https://percona.community/blog/2023/08/dolphie_quick_host_switch_hu_da537a303647e77c.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_quick_host_switch.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="error-log"&gt;Error log&lt;/h4&gt;
&lt;p&gt;In MySQL 8, I was delighted to see that the error log was in performance_schema. Of course, I had to support it! It has switches to toggle on/off event types and search functionality
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dolphie_error_log_hu_29ea1ccb044dab49.png 480w, https://percona.community/blog/2023/08/dolphie_error_log_hu_5a3b5f1623d4935c.png 768w, https://percona.community/blog/2023/08/dolphie_error_log_hu_5cc6d6bdaa55ba72.png 1400w"
src="https://percona.community/blog/2023/08/dolphie_error_log.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="errant-transactions"&gt;Errant transactions&lt;/h4&gt;
&lt;p&gt;The Replicas panel will let you know if your replicas have any errant transactions and what they are
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/08/dolphie_errant_transaction.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;These are just some of the features that Dolphie has. There are many more that I haven’t covered, which you can discover for yourself and try out!&lt;/p&gt;
&lt;p&gt;If you’d like to try Dolphie, it’s just a pip away:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pip install dolphie&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I’m open to feedback and suggestions so don’t be a stranger :) If you’d like to contribute to the project, I’d be delighted to have you!&lt;/p&gt;
&lt;p&gt;You can find Dolphie on its &lt;a href="https://github.com/charles-001/dolphie" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Charles Thompson</author>
      <category>Dev</category>
      <category>MySQL</category>
      <category>Monitoring</category>
      <category>Python</category>
      <media:thumbnail url="https://percona.community/blog/2023/08/dolphie_header_hu_c50d81109e2ceaa2.jpeg"/>
      <media:content url="https://percona.community/blog/2023/08/dolphie_header_hu_1bafd2a4bddf608e.jpeg" medium="image"/>
    </item>
    <item>
      <title>PMM Client on Raspberry Pi 4</title>
      <link>https://percona.community/blog/2023/08/15/pmm-client-on-raspberry-pi-4/</link>
      <guid>https://percona.community/blog/2023/08/15/pmm-client-on-raspberry-pi-4/</guid>
      <pubDate>Tue, 15 Aug 2023 00:00:00 UTC</pubDate>
      <description>This will be the third in my series of Percona Products on a Raspberry Pi. My previous posts:</description>
      <content:encoded>&lt;p&gt;This will be the third in my series of Percona Products on a Raspberry Pi. My previous posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2019/08/01/how-to-build-a-percona-server-stack-on-a-raspberry-pi-3/" target="_blank" rel="noopener noreferrer"&gt;How to Build a Percona Server “Stack” on a Raspberry Pi 3+
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2022/04/05/percona-server-raspberry-pi/" target="_blank" rel="noopener noreferrer"&gt;Raspberry Pi Bullseye Percona Server 64bit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before I get started, I would like to thank &lt;a href="https://www.percona.com/blog/compiling-a-percona-monitoring-and-management-v2-client-in-arm-raspberry-pi-3/" target="_blank" rel="noopener noreferrer"&gt;guriandoro&lt;/a&gt; for the work he did compiling the PMM Client tools in his 2021 blog.&lt;/p&gt;
&lt;p&gt;My regular readers know how much I love the Raspberry Pi and MySQL. I have several hobby projects that collect data, and MySQL on a Pi is a great solution!&lt;/p&gt;
&lt;p&gt;I recently decided that I needed to be able to monitor my two MySQL servers.&lt;/p&gt;
&lt;p&gt;The steps below should work on the Raspberry Pi 3+ and 4.&lt;/p&gt;
&lt;h3 id="what-better-tool-to-use-than-pmm"&gt;What better tool to use than PMM&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;In this blog we will be using PMM Client 2.27 and PMM Server version 2.39.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt; Your Raspberry Pi must be on Raspbian Bullseye.&lt;/p&gt;
&lt;p&gt;I will assume that if you have made it this far into the blog post, you already have a running PMM Server. If not, you can read about installing a PMM Server &lt;a href="https://www.percona.com/software/pmm/quickstart" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="add-the-pmm-user-to-your-mysql-server"&gt;Add the pmm user to your MySQL Server&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE USER 'pmm'@'127.0.0.1' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'127.0.0.1';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="the-following-steps-should-be-run-on-your-raspberry-pi"&gt;The following steps should be run on your Raspberry Pi&lt;/h2&gt;
&lt;hr&gt;
&lt;h3 id="download-the-pre-complied-pmm-client-tools"&gt;Download the pre-complied pmm-client tools&lt;/h3&gt;
&lt;p&gt;These were compiled from the 2.27 source code. You can see the steps I followed to compile the tools &lt;a href="https://github.com/cetanhota/pi4-arm-pmm-client" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://github.com/cetanhota/pi4-arm-pmm-client/archive/refs/heads/main.zip
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;unzip main.zip&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once we have the client unzipped, it’s a good idea to verify that the tools will work on your Raspberry Pi.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd ~/pi4-arm-pmm-client-main/raspbian-bullseye
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;./pmm-agent --version&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You should see output like the following.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ProjectName: pmm-agent
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ProjectName: pmm-agent
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Version: 2.27.0-84-g78b8519-dirty
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PMMVersion: 2.27.0-84-g78b8519-dirty
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Timestamp: 2023-08-16 00:57:34 (UTC)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FullCommit: 78b85198ad1e2c319d4012ef5ae338f389e2000a
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Branch: main&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If the pmm-agent displayed info like the above, then you’re good to keep moving through the document. If you ran into issues, please post in the comments and we can see what can be done to get you working. The only problem I ran into was that the root user could not execute the client tools.&lt;/p&gt;
&lt;h2 id="setup-install-directories"&gt;Setup install directories&lt;/h2&gt;
&lt;p&gt;We need to create the install directory and move files to the correct locations.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/pmm2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/pmm2/tools
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/pmm2/config
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/pmm2/exporters
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/exporters
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/exporters/collectors/textfile-collector/high-resolution
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/exporters/collectors/textfile-collector/medium-resolution
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /usr/local/percona/exporters/collectors/textfile-collector/low-resolution&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="copy-files-into-directories"&gt;Copy files into directories&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd ~/pi4-arm-pmm-client-main/raspbian-bullseye
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo cp pmm-admin pmm-agent /usr/local/bin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo cp node_exporter vmagent /usr/local/percona/pmm2/exporters&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="configure-the-pmm-client"&gt;Configure the PMM Client&lt;/h2&gt;
&lt;p&gt;It’s time to configure and test the PMM Client on the MySQL server where we are installing it.&lt;/p&gt;
&lt;p&gt;Please make sure you have the following three items:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;PMM Server Hostname or IP Address&lt;/li&gt;
&lt;li&gt;PMM Server admin ID&lt;/li&gt;
&lt;li&gt;PMM Server admin Password&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo pmm-agent setup --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml\
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --server-address=&lt;YOUR-PMM-Server&gt; --server-insecure-tls\
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --server-username=&lt;YOURADMIN&gt; --server-password=&lt;YOURPASSWORD&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now that the PMM Client has been configured, let’s move on to starting it.&lt;/p&gt;
&lt;h2 id="start-the-pmm-client"&gt;Start the PMM Client&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sudo pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Verify that the client is running correctly. If all is well, you should see output like:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.294-04:00] Loading configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml. component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/node_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/mysqld_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/mongodb_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/postgres_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/proxysql_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/rds_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.295-04:00] Using /usr/local/percona/pmm2/exporters/azure_exporter component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.296-04:00] Using /usr/local/percona/pmm2/exporters/vmagent component=main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.296-04:00] Starting... component=client
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.296-04:00] Starting local API server on http://127.0.0.1:7777/ ... component=local-server/JSON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.296-04:00] Connecting to https://admin:***@192.168.1.127:443/ ... component=client
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.298-04:00] Started. component=local-server/JSON
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.320-04:00] Connected to 192.168.1.127:443. component=client
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.320-04:00] Establishing two-way communication channel ... component=client
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO[2023-08-15T17:00:34.341-04:00] Two-way communication channel established in 21.349589ms. Estimated clock drift: -84.399159ms. component=client&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;At this point, press Ctrl+C to stop the pmm-agent.&lt;/p&gt;
&lt;p&gt;If you want to start the pmm-agent again and run it in the background, use this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo nohup pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml &amp;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You may choose a different startup process for the PMM Client based on personal preference.&lt;/p&gt;
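&lt;p&gt;For example, you could run pmm-agent under systemd so it survives reboots. The unit below is a minimal sketch rather than an official unit file (the unit name and file path are my own choices, not part of the packaging); it reuses the binary and config locations from the steps above:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Unit]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Description=PMM Client agent
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;After=network.target
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Service]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ExecStart=/usr/local/bin/pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Restart=always
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Install]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WantedBy=multi-user.target&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Save it as /etc/systemd/system/pmm-agent.service, then run &lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt; followed by &lt;code&gt;sudo systemctl enable --now pmm-agent&lt;/code&gt;.&lt;/p&gt;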
&lt;h2 id="add-pt-summary-to-the-pmm-client"&gt;Add pt-summary to the PMM Client&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd /usr/local/percona/pmm2/tools
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo wget percona.com/get/pt-summary
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chmod ugo+x pt-summary&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s see pt-summary in action:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/node-summary_hu_d2c3c656f60a8a9f.png 480w, https://percona.community/blog/2023/08/node-summary_hu_92a27a6cfcf3cff1.png 768w, https://percona.community/blog/2023/08/node-summary_hu_48671787c34a28c9.png 1400w"
src="https://percona.community/blog/2023/08/node-summary.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="add-your-raspberry-pi-mysql-server-to-pmm-server"&gt;Add your Raspberry Pi MySQL Server to PMM Server&lt;/h2&gt;
&lt;p&gt;I am sure most of you know how to add a server to your existing PMM Server, but if you have not done this before, here is what you need to do.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Connect to your PMM Server.&lt;/li&gt;
&lt;li&gt;Use the pmm-admin command to add the server for monitoring.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/usr/local/percona/pmm2/bin/pmm-admin add mysql --service-name=&lt;MYSQL SERVER NAME&gt; --server-insecure-tls --server-url=https://&lt;YOUR ADMIN&gt;:&lt;YOUR PASSWORD&gt;@localhost --username=&lt;DB USER&gt; --password=&lt;DB PASSWORD&gt; --host=&lt;MYSQL SERVER NAME&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="pmm-server-screen-shoot"&gt;PMM Server Screen Shoot&lt;/h3&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/pmm-view2_hu_f272ad7e1c2e61e8.png 480w, https://percona.community/blog/2023/08/pmm-view2_hu_87b4f8ea44657c53.png 768w, https://percona.community/blog/2023/08/pmm-view2_hu_1e0b1a2173273e7d.png 1400w"
src="https://percona.community/blog/2023/08/pmm-view2.png" alt="image" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="in-closing"&gt;In Closing&lt;/h2&gt;
&lt;p&gt;I really loved working on this little project. I hope you find some use for the information.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>MySQL</category>
      <category>PMM</category>
      <category>DIY</category>
      <media:thumbnail url="https://percona.community/blog/2023/08/fresh-raspi_hu_10d8e4f704b9d08a.jpg"/>
      <media:content url="https://percona.community/blog/2023/08/fresh-raspi_hu_6724276b654ff0db.jpg" medium="image"/>
    </item>
    <item>
      <title>DoKC Operator SIG Update</title>
      <link>https://percona.community/blog/2023/08/14/dokc-operator-sig-update/</link>
      <guid>https://percona.community/blog/2023/08/14/dokc-operator-sig-update/</guid>
      <pubDate>Mon, 14 Aug 2023 00:00:00 UTC</pubDate>
      <description>Before our meeting, we started with a question to begin the morning: What board game or tabletop game have you played that you would recommend to others?</description>
      <content:encoded>&lt;p&gt;Before our meeting, we started with a question to begin the morning: &lt;strong&gt;What board game or tabletop game have you played that you would recommend to others?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.linkedin.com/in/itamar-marom/" target="_blank" rel="noopener noreferrer"&gt;Itamar Marom&lt;/a&gt; suggests that Catan as a good board game, which takes a lot of time, super annoying when you lose, but generally a lot of fun. So highly suggested!&lt;/p&gt;
&lt;p&gt;Other members like &lt;a href="https://www.linkedin.com/in/hugh-lashbrooke/" target="_blank" rel="noopener noreferrer"&gt;Hugh Lashbrooke&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/berkeleybob2105/" target="_blank" rel="noopener noreferrer"&gt;Robert Hodges&lt;/a&gt; prefer Monopoly, where you must reach a higher level of monopoly awareness to enjoy it fully. The idea is that you can make pretty much any deal with anyone else within the rules of monopoly. That’s when it gets fun.&lt;/p&gt;
&lt;p&gt;Terraforming Mars and Meadow are other favorite games for members of DoK.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/dok-game_hu_ad464c34addcee32.png 480w, https://percona.community/blog/2023/08/dok-game_hu_3dcb72086a2fa28.png 768w, https://percona.community/blog/2023/08/dok-game_hu_4b488f857233f4db.png 1400w"
src="https://percona.community/blog/2023/08/dok-game.png" alt="dokc-game" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Let’s get into our agenda for this meeting.&lt;/p&gt;
&lt;p&gt;For &lt;strong&gt;our first agenda&lt;/strong&gt;, &lt;a href="https://www.linkedin.com/in/itamar-marom/" target="_blank" rel="noopener noreferrer"&gt;Itamar Marom&lt;/a&gt; (From AppsFlyer and DoKC Ambassador) proposes joining more data operators’ maintainers and developers in the community.&lt;/p&gt;
&lt;p&gt;Itamar was analyzing the data technology map and saw that some workloads are very common and act differently from what is found in SIG Operator (Special Interest Group) in the case of Spark and Kafka. It would be nice to see the maintainers and hear their opinions, especially in meetings like our one with Google, where they have very different and exciting use cases.&lt;/p&gt;
&lt;p&gt;We’ve had folks from these communities present at our virtual meetups. Bringing them to the SIG to learn, share, and find ways to collaborate would benefit the broader DoK ecosystem.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/kafka-spark_hu_1827450969142890.png 480w, https://percona.community/blog/2023/08/kafka-spark_hu_93948a3fd66ce4d8.png 768w, https://percona.community/blog/2023/08/kafka-spark_hu_90a5f03aee3edfbe.png 1400w"
src="https://percona.community/blog/2023/08/kafka-spark.png" alt="dokc-game" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.linkedin.com/in/jimhalfpenny/" target="_blank" rel="noopener noreferrer"&gt;Jim Halfpenny&lt;/a&gt; from &lt;strong&gt;Stackable&lt;/strong&gt; mentions that they have several operators for open source projects, including Apache Kafka, Apache NiFi, Apache Superset, Trino, and more. And they will love input from the community on the direction the operators should take. His aim is that our operators should play nicely with others!&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.linkedin.com/in/mklogan/" target="_blank" rel="noopener noreferrer"&gt;Melissa Logan&lt;/a&gt; was part of several discussions with the Argo community, and maybe there’s a chance they could join us as well.&lt;/p&gt;
&lt;p&gt;Itamar will prepare a proposal for this project in which the group can collaborate and will share it soon.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;second item on our agenda&lt;/strong&gt; was &lt;strong&gt;Carrier Hardening and Security Project&lt;/strong&gt; Update by &lt;a href="https://www.linkedin.com/in/berkeleybob2105/" target="_blank" rel="noopener noreferrer"&gt;Robert Hodges&lt;/a&gt; (DoKC Ambassador).&lt;/p&gt;
&lt;p&gt;This project is a guide to establishing a baseline for secure data management on Kubernetes by fortifying the database operators. The guide aims to identify the typical attack surfaces that exist for databases running on Kubernetes. It will establish a collection of best practices for enhancing their security using operators.&lt;/p&gt;
&lt;p&gt;Robert is working on an August 12 talk at &lt;a href="https://www.dataconla.com/" target="_blank" rel="noopener noreferrer"&gt;DataConLA&lt;/a&gt; on the topic: Tips for Sleeping Well with State-of-the-Art Data Management. It will contain the framing of the operator hardening guide.&lt;/p&gt;
&lt;p&gt;Robert is playing around with a couple of approaches to divide up the problem space. One of them is to deal with security concerns as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The database itself - E.g., setting passwords safely.&lt;/li&gt;
&lt;li&gt;Kubernetes - Securing it from outside attackers - E.g., encrypted client connections.&lt;/li&gt;
&lt;li&gt;Data outside Kubernetes - Object storage used for backups, logs forwarded to log management systems.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/08/dataconla.png" alt="dokc-game" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Robert will share a draft of the talk in the &lt;strong&gt;#sig-operator&lt;/strong&gt; channel for discussion.&lt;/p&gt;
&lt;p&gt;The last topic on our agenda is ArgoCD and general CI/CD compatibility for operators by Robert Hodges.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/08/argo_hu_9b2e75bcd0723565.png 480w, https://percona.community/blog/2023/08/argo_hu_57ea539f04d83654.png 768w, https://percona.community/blog/2023/08/argo_hu_2525015a282057a8.png 1400w"
src="https://percona.community/blog/2023/08/argo.png" alt="dokc-game" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Robert created a public repository, &lt;a href="https://github.com/Altinity/argocd-examples-clickhouse" target="_blank" rel="noopener noreferrer"&gt;Argocd-examples-clickhouse&lt;/a&gt;, with example ArgoCD application definitions for ClickHouse analytic applications.&lt;/p&gt;
&lt;p&gt;In this project, you’ll find ArgoCD applications and instructions to stand up a full analytic stack based on ClickHouse in a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Following suggestions from community members, a new Slack channel #topic-devops was created 🎉 .
This is a channel to talk about CI/CD integration, specific solutions like ArgoCD &amp; Flux, etc.&lt;/p&gt;
&lt;p&gt;To learn more about our meetings, join &lt;a href="https://dok.community/" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes&lt;/a&gt;, an open community for data on Kubernetes. We host weekly live meetups where technologists share their stories, wisdom, and practical advice for running data on Kubernetes.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>DoK</category>
      <category>Kubernetes</category>
      <category>Operators</category>
      <media:thumbnail url="https://percona.community/blog/2023/08/dok-game_hu_c4e7255be0a9266e.jpg"/>
      <media:content url="https://percona.community/blog/2023/08/dok-game_hu_22de3756eda1290a.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.38 preview release</title>
      <link>https://percona.community/blog/2023/06/30/preview-release/</link>
      <guid>https://percona.community/blog/2023/06/30/preview-release/</guid>
      <pubDate>Fri, 30 Jun 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.38 preview release Hello folks! Percona Monitoring and Management (PMM) 2.38 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-238-preview-release"&gt;Percona Monitoring and Management 2.38 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.38 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;To see the full list of changes, check out the &lt;a href="https://pmm-doc-pr-1081.onrender.com/release-notes/2.38.0.html" target="_blank" rel="noopener noreferrer"&gt;PMM 2.38 Release Notes&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker-installation"&gt;PMM server Docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server with Docker instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.38.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; To use the DBaaS functionality during the PMM preview release, add the following environment variable when starting PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.38.0-rc&lt;/code&gt;&lt;/p&gt;
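&lt;p&gt;Putting the image tag and the DBaaS variable together, starting the preview server might look like the sketch below. This is only an illustration: the port and volume flags follow the standard Docker instructions linked above, and names like &lt;code&gt;pmm-data&lt;/code&gt; are placeholders you can change.&lt;/p&gt;

```shell
# Sketch only: flags follow the standard PMM Docker instructions;
# the volume name "pmm-data" is an example.
docker pull perconalab/pmm-server:2.38.0-rc
docker volume create pmm-data
docker run -d --restart always \
  --publish 443:443 \
  --volume pmm-data:/srv \
  --env PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.38.0-rc \
  --name pmm-server \
  perconalab/pmm-server:2.38.0-rc
```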
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/el9/pmm2-client/pmm2-client-latest-5607.tar.gz" target="_blank" rel="noopener noreferrer"&gt;Download&lt;/a&gt; the latest pmm2-client release candidate tarball for 2.38.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Install the pmm2-client package for your OS via your package manager.&lt;/li&gt;
&lt;/ol&gt;
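&lt;p&gt;On a Debian or Ubuntu host, steps 2 and 3 might look like the following sketch. It assumes the &lt;code&gt;percona-release&lt;/code&gt; tool is already installed; on RHEL-compatible systems, swap &lt;code&gt;apt&lt;/code&gt; for &lt;code&gt;yum&lt;/code&gt;.&lt;/p&gt;

```shell
# Sketch only: assumes percona-release is already installed.
sudo percona-release enable percona testing
sudo apt update
sudo apt install -y pmm2-client
# RHEL-compatible equivalent of the last step:
#   sudo yum install -y pmm2-client
```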
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-moitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server as a VM instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.38.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.38.0.ova file&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Run PMM Server hosted at AWS Marketplace instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-09895e9b605f14cbc&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forums&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <category>Release</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona University in Peru 2023</title>
      <link>https://percona.community/blog/2023/06/20/percona-and-data-on-kubernetes-meetup/</link>
      <guid>https://percona.community/blog/2023/06/20/percona-and-data-on-kubernetes-meetup/</guid>
      <pubDate>Tue, 20 Jun 2023 00:00:00 UTC</pubDate>
      <description>Peru is a country in South America home to a section of the Amazon rainforest and Machu Picchu ⛰️, an ancient Incan city high in the Andes mountains. Percona decided to hold the first event of Percona University 2023 in Lima, the capital of Peru, on June 10.</description>
      <content:encoded>&lt;p&gt;&lt;strong&gt;Peru&lt;/strong&gt; is a country in South America home to a section of the &lt;strong&gt;Amazon&lt;/strong&gt; rainforest and &lt;strong&gt;Machu Picchu&lt;/strong&gt; ⛰️, an ancient Incan city high in the Andes mountains. &lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; decided to hold the first event of &lt;strong&gt;Percona University 2023 in Lima&lt;/strong&gt;, the capital of Peru, on June 10.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/percona-university-is-back-in-business/" target="_blank" rel="noopener noreferrer"&gt;Percona University&lt;/a&gt; is a series of free technical events organized by Percona in various cities around the world since 2013. &lt;strong&gt;Percona&lt;/strong&gt; uses these events to share its unbiased expertise on open source databases with the community, users, and organizations. The last &lt;a href="https://percona.community/events/percona-university-istanbul-2022/" target="_blank" rel="noopener noreferrer"&gt;Percona University was in 2022 in Istanbul, Turkey&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Something charming was prepared for this first session of Percona University; there were all kinds of stickers, and prizes at the end of the event.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/06/pup-stikers-01.jpeg" alt="pup-stikers-01" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The talks were related to &lt;strong&gt;open source databases&lt;/strong&gt; and &lt;strong&gt;Kubernetes&lt;/strong&gt;. Let’s summarize them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Let’s start with &lt;a href="https://www.linkedin.com/in/peterzaitsev?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAAQH8EBHFDyKi6meRnMSE5FNzSJilakYJQ&amp;lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BdslLban%2BQgGG1jwigOsRaQ%3D%3D" target="_blank" rel="noopener noreferrer"&gt;Peter Zaitsev&lt;/a&gt;, who talked about the &lt;a href="https://docs.google.com/presentation/d/12d27qQN0EIh3v-ssoZwzSR6ulXg_EcuO/edit#slide=id.p7" target="_blank" rel="noopener noreferrer"&gt;Cloud of Serfdom vs. the Cloud of Freedom&lt;/a&gt; and why open source will win in the Cloud Age. He spoke about the relationship between Cloud and Open Source, traced the historical changes in the impact of the &lt;strong&gt;Cloud on Open Source&lt;/strong&gt;, examined the current state of affairs, and advocated for a specific approach to using Cloud and Open Source together.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Peter’s&lt;/strong&gt; next talk was about &lt;a href="https://docs.google.com/presentation/d/1AFjeTePOWYRyap1lmLa84kcfB0klBSQx/edit#slide=id.p1" target="_blank" rel="noopener noreferrer"&gt;17 reasons to migrate to MySQL 8&lt;/a&gt;.
MySQL 8 was different from previous major MySQL versions: it underwent significant changes from its initial release in 2018 to the version available in 2023, with many features introduced through subsequent minor releases. In the presentation, Peter focused on the most important modern &lt;strong&gt;MySQL 8&lt;/strong&gt; features that had emerged since its initial release.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-peter-02_hu_9ffb46fbb29c64ac.jpeg 480w, https://percona.community/blog/2023/06/pup-peter-02_hu_20b372dfb682ee8a.jpeg 768w, https://percona.community/blog/2023/06/pup-peter-02_hu_2223c556753348c4.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-peter-02.jpeg" alt="pup-peter-02" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The next on the agenda was &lt;a href="https://www.linkedin.com/in/mvillegascuellar?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAYosmwB_V8dLwgDO5dFwIsOtx_BSTIwYXA&amp;lipi=urn%3Ali%3Apage%3Ad_flagship3_search_srp_all%3BgaPel2l9SCeCz6vJLRN4Fw%3D%3D" target="_blank" rel="noopener noreferrer"&gt;Michael Villegas&lt;/a&gt;. He talked about useful tools from the &lt;a href="https://docs.google.com/presentation/d/1NX2c_DS9ussvc6VZmFT-4-wk28SIuKVs/edit#slide=id.p1" target="_blank" rel="noopener noreferrer"&gt;Percona Toolkit for DBAs&lt;/a&gt;. This talk was aimed at DBAs and developers of any experience level who are responsible for MySQL database administration. They learned how to use some very useful tools within the &lt;a href="https://www.percona.com/software/database-tools/percona-toolkit" target="_blank" rel="noopener noreferrer"&gt;Percona Toolkit&lt;/a&gt; to solve common problems within their databases. Michael showed how these tools allow them to save time and effort in resolving these issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-michael-03_hu_6d2ca38bdd96eb82.jpeg 480w, https://percona.community/blog/2023/06/pup-michael-03_hu_f3e0fd56b9697070.jpeg 768w, https://percona.community/blog/2023/06/pup-michael-03_hu_8f47b9cad5420bc0.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-michael-03.jpeg" alt="pup-michael-03" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After these talks, we had a coffee break provided by Percona; it was an opportunity to chat with Peter and network with other attendees.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-breakfast-04_hu_4a7a74c79cedf854.jpeg 480w, https://percona.community/blog/2023/06/pup-breakfast-04_hu_184a0148ed7d4042.jpeg 768w, https://percona.community/blog/2023/06/pup-breakfast-04_hu_2c4055d61bd924e5.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-breakfast-04.jpeg" alt="pup-breakfast-04" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The next talk was by &lt;strong&gt;Peter Zaitsev&lt;/strong&gt; about &lt;strong&gt;Advanced MySQL optimization and troubleshooting using PMM&lt;/strong&gt;. Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases that power their applications need to handle changing traffic workloads while remaining responsive and stable, ensuring an excellent user experience. Additionally, DBAs are expected to find cost-efficient solutions to these issues. In the presentation, Peter Zaitsev showed advanced options of &lt;a href="https://docs.percona.com/percona-monitoring-and-management/index.html" target="_blank" rel="noopener noreferrer"&gt;PMM version 2&lt;/a&gt;, which allowed DBAs to address these challenges.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-peter-05_hu_fefc2436b57d5f82.jpeg 480w, https://percona.community/blog/2023/06/pup-peter-05_hu_5895ad050b152d.jpeg 768w, https://percona.community/blog/2023/06/pup-peter-05_hu_3517c8432dd7766e.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-peter-05.jpeg" alt="pup-peter-05" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The next talk was by &lt;a href="https://www.linkedin.com/in/edithpuclla/" target="_blank" rel="noopener noreferrer"&gt;Edith Puclla&lt;/a&gt; (me). I gave an &lt;a href="https://docs.google.com/presentation/d/1URi6oNC3fZKd2mCAZ3CGZ_CTkAkzIHWW/edit#slide=id.p1" target="_blank" rel="noopener noreferrer"&gt;Introduction to Kubernetes Operators&lt;/a&gt;. I provided a simplified overview of &lt;strong&gt;Kubernetes Operators&lt;/strong&gt;, focusing on making the concept understandable for those new to the subject. I started by explaining Kubernetes and why it is relevant in managing applications, then moved on to an example of a Kubernetes Operator, the reasoning behind it, and the benefits of using Operators.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-edith-06_hu_f2bb8901a1843d9d.jpeg 480w, https://percona.community/blog/2023/06/pup-edith-06_hu_f305f7e3ba90b609.jpeg 768w, https://percona.community/blog/2023/06/pup-edith-06_hu_ecf04830c9d69e95.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-edith-06.jpeg" alt="pup-edith-06" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Our last talk was about &lt;a href="https://docs.google.com/presentation/d/10mzZu-N_mv_4zpD3-6LVXN0Lv01ws7tn/edit#slide=id.p1" target="_blank" rel="noopener noreferrer"&gt;Deep Dive Into Query Performance&lt;/a&gt; by &lt;strong&gt;Peter Zaitsev&lt;/strong&gt;. In this presentation, Peter explored this seemingly simple aspect of working with databases in detail. Peter answered questions like when you should focus on tuning specific queries and when it is better to tune the database itself (or just get a bigger box). Peter also showed other ways to minimize user-facing response time, such as parallel queries, asynchronous queries, and queueing complex work, as well as often-misunderstood response time killers such as overloaded networks, stolen CPU, and even limits imposed by the pesky speed of light.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-peter-07_hu_13b4389dcec0ce0e.jpeg 480w, https://percona.community/blog/2023/06/pup-peter-07_hu_d635423cdd48e59.jpeg 768w, https://percona.community/blog/2023/06/pup-peter-07_hu_d8bfb6f5dc2c53d4.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-peter-07.jpeg" alt="pup-peter-07" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The event was well received. Many graduates, professionals, students, and open source enthusiasts attended the event.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-team-08_hu_4afbf356772a0a8d.jpeg 480w, https://percona.community/blog/2023/06/pup-team-08_hu_144a2184947ca643.jpeg 768w, https://percona.community/blog/2023/06/pup-team-08_hu_9559ed30d6102448.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-team-08.jpeg" alt="pup-team-08" /&gt;&lt;/figure&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-public-09_hu_ff65e456e954f1d1.jpeg 480w, https://percona.community/blog/2023/06/pup-public-09_hu_d7819a2af8585871.jpeg 768w, https://percona.community/blog/2023/06/pup-public-09_hu_f625fc05749c0336.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-public-09.jpeg" alt="pup-public-09" /&gt;&lt;/figure&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-all-10_hu_528cb9e19c574754.jpeg 480w, https://percona.community/blog/2023/06/pup-all-10_hu_9f19bd400c95e0f9.jpeg 768w, https://percona.community/blog/2023/06/pup-all-10_hu_3593470cd690b953.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-all-10.jpeg" alt="pup-all-10" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It was also nice to share moments with Peter and his fans.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-lunch-11_hu_e9236907e399530d.jpeg 480w, https://percona.community/blog/2023/06/pup-lunch-11_hu_bcf683a07ce6cdfa.jpeg 768w, https://percona.community/blog/2023/06/pup-lunch-11_hu_c0c90cbd3ad1344e.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-lunch-11.jpeg" alt="pup-lunch-11" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We also thank &lt;a href="https://www.ue.edu.pe/" target="_blank" rel="noopener noreferrer"&gt;ESAN University&lt;/a&gt; for providing us with the venue for the event.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/pup-team-12_hu_13fe475e57a3882e.jpeg 480w, https://percona.community/blog/2023/06/pup-team-12_hu_a882affa6ee40274.jpeg 768w, https://percona.community/blog/2023/06/pup-team-12_hu_3ab57b75a01d11c0.jpeg 1400w"
src="https://percona.community/blog/2023/06/pup-team-12.jpeg" alt="pup-team-12" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Don’t miss our next &lt;a href="https://learn.percona.com/percona-university-istanbul-2022" target="_blank" rel="noopener noreferrer"&gt;Percona University event in Istanbul&lt;/a&gt;!&lt;/p&gt;
&lt;p&gt;See you at &lt;strong&gt;Percona University Peru in 2024&lt;/strong&gt;!&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Events</category>
      <category>Kubernetes</category>
      <category>Toolkit</category>
      <category>Operators</category>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/blog/2023/06/pup-all-10_hu_96ec1fce07252097.jpeg"/>
      <media:content url="https://percona.community/blog/2023/06/pup-all-10_hu_4e47862a67581e0e.jpeg" medium="image"/>
    </item>
    <item>
      <title>Data on Kubernetes Meetup May 23</title>
      <link>https://percona.community/blog/2023/06/01/percona-and-data-on-kubernetes-meetup/</link>
      <guid>https://percona.community/blog/2023/06/01/percona-and-data-on-kubernetes-meetup/</guid>
      <pubDate>Thu, 01 Jun 2023 00:00:00 UTC</pubDate>
      <description>Percona has started to participate in Data on Kubernetes (DoK) meetings about Kubernetes Operators. These meetings are an initiative of DoK meetups that spotlight DoK case studies. In this blog post series, I will summarize the topics covered in each meeting.</description>
      <content:encoded>&lt;p&gt;&lt;strong&gt;Percona&lt;/strong&gt; has started to participate in &lt;strong&gt;Data on Kubernetes&lt;/strong&gt; (DoK) meetings about &lt;strong&gt;Kubernetes Operators&lt;/strong&gt;.
These meetings are an initiative of DoK meetups that spotlight DoK case studies. In this blog post series, I will summarize the topics covered in each meeting.&lt;/p&gt;
&lt;p&gt;On May 23, very interesting topics were on the agenda. Let’s summarize them.&lt;/p&gt;
&lt;p&gt;We started with a new project proposal called the &lt;a href="https://docs.google.com/document/d/1CJeFtNpDSyaPoPWvimwMFt5s1g2Zj2Ppg_DJX7nVurk/edit#" target="_blank" rel="noopener noreferrer"&gt;Distributed Systems Operator Interface (DSOI)&lt;/a&gt;. It is proposed by &lt;strong&gt;Adheip Singh&lt;/strong&gt; from DataInfra, &lt;strong&gt;Nitish Tiwari&lt;/strong&gt; from Parseable, and &lt;strong&gt;Itamar Marom&lt;/strong&gt; from AppsFlyer.&lt;/p&gt;
&lt;p&gt;This project is a set of best practices for building Kubernetes operators for distributed systems. The spec defines standard practices that can help define custom resources (CR). It consists of Kubernetes-native &lt;strong&gt;CRDs&lt;/strong&gt; and specs and is not bound to any specific application. There are already two operators built using this set of practices:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/parseablehq/operator" target="_blank" rel="noopener noreferrer"&gt;Parseable Kubernetes Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/datainfrahq/pinot-operator" target="_blank" rel="noopener noreferrer"&gt;Control Plane For Apache Pinot&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you want to contribute or send proposals, join the &lt;a href="https://launchpass.com/datainfra-workspace" target="_blank" rel="noopener noreferrer"&gt;datainfra-workspace&lt;/a&gt; or raise issues in the GitHub repository of &lt;a href="https://github.com/datainfrahq/dsoi-spec/issues" target="_blank" rel="noopener noreferrer"&gt;DSOI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/dok-datainfra_hu_7b0c7765b89587f3.jpeg 480w, https://percona.community/blog/2023/06/dok-datainfra_hu_341b3d3b21b105f5.jpeg 768w, https://percona.community/blog/2023/06/dok-datainfra_hu_107d85156bd1ca95.jpeg 1400w"
src="https://percona.community/blog/2023/06/dok-datainfra.jpeg" alt="Panel Discussion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As the second item on the agenda, we have an update about the &lt;a href="https://docs.google.com/document/d/1tbm44jC1qf6kAf9qje5V-UhaXG-AlGud9nhMaoPN6mU/edit#heading=h.fjdgqyupbu03" target="_blank" rel="noopener noreferrer"&gt;DoK Operator SIG Project Proposal - Security &amp; Hardening Guide&lt;/a&gt;. This project is proposed by &lt;strong&gt;Robert Hodges&lt;/strong&gt;, Altinity Inc.&lt;/p&gt;
&lt;p&gt;This project is a guide to establishing a baseline for secure data management on Kubernetes by fortifying the database operators. The guide aims to identify the typical attack surfaces that exist for databases running on Kubernetes. &lt;strong&gt;It will establish a collection of best practices for enhancing their security through the utilization of operators&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Robert mentioned that he reached out to TAG Security, which in turn pointed him to BadRobot, a scanner that checks operators for excessive privileges. Robert also presented to DoK Bay Area last week to introduce the problem of operator security.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/06/dok-security-hardering_hu_3f51e5babc38029f.jpeg 480w, https://percona.community/blog/2023/06/dok-security-hardering_hu_2a38fc6f49ac0f71.jpeg 768w, https://percona.community/blog/2023/06/dok-security-hardering_hu_95fe32da725a235a.jpeg 1400w"
src="https://percona.community/blog/2023/06/dok-security-hardering.jpeg" alt="Panel Discussion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For the Operator security &amp; hardening guide, we can raise issues on &lt;a href="https://github.com/dokc/sig-operator" target="_blank" rel="noopener noreferrer"&gt;sig-operator&lt;/a&gt;. They are currently seeking volunteers and contributors for the project; find the &lt;a href="https://github.com/dokc/sig-operator/tree/main/operator-security-hardening" target="_blank" rel="noopener noreferrer"&gt;operator-security-hardening&lt;/a&gt; project on GitHub, or feel free to write to Robert Hodges at &lt;strong&gt;&lt;a href="mailto:rhodges@altinity.com"&gt;rhodges@altinity.com&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Finally, we have an update on the Operator Feature Matrix (OFM).&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Operator Feature Matrix (OFM)&lt;/strong&gt; is a project from the Data on Kubernetes Community to create a standardized and vendor-neutral feature matrix for various Kubernetes operators that manage stateful workloads. This project is proposed by &lt;strong&gt;Alvaro Hernandez&lt;/strong&gt;, and it is definitely a good project to contribute if you are looking to improve the end-user experience with the use of workloads in Kubernetes.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;CloudNativePG&lt;/strong&gt; project sent feedback to improve the OFM. CloudNativePG is the Kubernetes operator that covers the full lifecycle of a highly available PostgreSQL database cluster. There are plans to create a website for end-user adoption.&lt;/p&gt;
&lt;p&gt;There are other (non-Postgres) technologies, like Apache Druid, jumping on OFM. This is a work in progress.&lt;/p&gt;
&lt;p&gt;The end of June is being considered for a 1.0 freeze, and as much feedback as possible is needed before then.
If you are interested, feedback can be as simple as opening an issue to discuss something or sending a PR with improvements (or both). Feel free to do so on the &lt;a href="https://github.com/dokc/operator-feature-matrix" target="_blank" rel="noopener noreferrer"&gt;OFM GitHub Repo&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>DoK</category>
      <category>Opensource</category>
      <category>CNCF</category>
      <category>Kubernetes</category>
      <category>Operators</category>
      <media:thumbnail url="https://percona.community/blog/2023/06/dok-intro_hu_551fc8233fc2d418.jpg"/>
      <media:content url="https://percona.community/blog/2023/06/dok-intro_hu_9e64bcec089fa010.jpg" medium="image"/>
    </item>
    <item>
      <title>​What experts said at Kubecon about Data on Kubernetes</title>
      <link>https://percona.community/blog/2023/05/31/what-experts-said-at-kubecon-about-data-on-kubernetes/</link>
      <guid>https://percona.community/blog/2023/05/31/what-experts-said-at-kubecon-about-data-on-kubernetes/</guid>
      <pubDate>Wed, 31 May 2023 00:00:00 UTC</pubDate>
      <description>Melissa Logan, managing director of Data on Kubernetes (DoK), led one of the best panels I’ve been to at a conference at Kubecon EU in Amsterdam about challenges with and the state of the art of running databases on Kubernetes.</description>
      <content:encoded>&lt;p&gt;&lt;strong&gt;Melissa Logan&lt;/strong&gt;, managing director of &lt;strong&gt;Data on Kubernetes&lt;/strong&gt; (DoK), led one of the &lt;a href="https://www.youtube.com/watch?v=TmDdkBPW_hI&amp;t=313s" target="_blank" rel="noopener noreferrer"&gt;best panels I’ve been to at a conference at Kubecon EU&lt;/a&gt; in Amsterdam about challenges with and the state of the art of running databases on Kubernetes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/05/01-pd-intro.jpeg" alt="Panel Discussion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This panel united the &lt;strong&gt;Data on Kubernetes Community Operator SIG&lt;/strong&gt; and &lt;strong&gt;Kubernetes Storage SIG&lt;/strong&gt; to discuss key features of Kubernetes database operators. &lt;strong&gt;Xing Yang&lt;/strong&gt; from VMware, &lt;strong&gt;Sergey Pronin&lt;/strong&gt; from Percona, and &lt;strong&gt;Álvaro Hernández&lt;/strong&gt; from OnGres came together to discuss what works, what doesn’t, and where the industry is going. They also presented a feature matrix to help end users compare many database Operators.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/05/02-pd-panel-discution.jpeg" alt="Panel Discussion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;If you are new to the topic of Kubernetes Operators, I wrote a blog post about &lt;a href="https://percona.community/blog/2022/10/13/learning-kubernetes-operators-with-percona-operator-for-mongodb/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Operators in a nutshell&lt;/a&gt;; you can read the first part of that article.&lt;/p&gt;
&lt;p&gt;Let’s start by summarizing the challenges the panelists mentioned when running &lt;strong&gt;stateful&lt;/strong&gt; applications on &lt;strong&gt;Kubernetes&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Some Operators have certain limitations, and there are security concerns. Database users always think about data encryption: is the data safe? What happens if the node goes down? What happens if we lose the storage?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Operator model is very extensible and flexible, which is great, but on the other hand, there are so many Operators, and it becomes a challenge to choose the right one for our use cases.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;People who are developing Operators also find challenges because every database has its native way of doing backups, but if you want to support more than one type of database, then it’s more challenging to find a generic way.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="does-the-framework-capture-what-is-needed-for-data-workloads-well"&gt;Does the framework capture what is needed for data workloads well?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;There is a capability model for Operators that classifies them into five levels. There is room for improvement in this model: it could gain compatibility tests and more objective measures of these capability levels, especially at level five, the top one for data workloads.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/05/03-pd-capability-models.jpeg" alt="Panel Discussion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Security is the number one criterion that people use to evaluate Operators. How are they addressing security in Kubernetes Operators?&lt;/li&gt;
&lt;li&gt;Users want to ensure that the Operator does not get a lot of privileges or does not interfere with other tenants in the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Users are now looking for more sophisticated ways to integrate with their existing secure key-value storage, so they can be sure their secrets are safe.&lt;/li&gt;
&lt;li&gt;The framework should provide ransomware protection: when you back up your databases, you also want one immutable copy that you can recover from.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="now-lets-talk-about-solutions"&gt;Now Let’s talk about solutions&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;DoK&lt;/strong&gt; started an &lt;strong&gt;Operator Special Interest Group&lt;/strong&gt; (SIG), and community members have been meeting to discuss how as a group and as an industry, to collaborate to come up with solutions for some of the challenges end users face. The Operator SIG works with the Storage Technical Advisory Group (TAG), Storage SIG, and Security SIG.&lt;/p&gt;
&lt;p&gt;According to Data on Kubernetes’ (DoK) &lt;a href="https://dok.community/wp-content/uploads/2022/10/DoK_Report_2022.pdf" target="_blank" rel="noopener noreferrer"&gt;first report&lt;/a&gt;, 70% of respondents are running data workloads in production. More data workloads are running on Kubernetes, so it is essential to know what works well. Sharing knowledge and leveraging that expertise is vital at this stage.&lt;/p&gt;
&lt;p&gt;There are things that the Operator SIG details in a &lt;a href="https://docs.google.com/document/d/1Uyk5qQ4KhpI-YnLdG72V66dO9Hxv_kqTK_CrMHS9EFc/edit#heading=h.nxcx7r52ocev" target="_blank" rel="noopener noreferrer"&gt;document&lt;/a&gt;: common patterns and features used when running databases on Kubernetes, best practices, the criteria for a good operator, why observability is so important in the cloud-native environment, and security.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Operator Feature Matrix&lt;/strong&gt; is a big initiative to help end users find operators based on different criteria to choose what they need. It is a project to compare different Operators. They are starting with database Operators, and one project is already defined: &lt;a href="https://github.com/dokc/operator-feature-matrix/tree/main/postgres" target="_blank" rel="noopener noreferrer"&gt;Postgres Operator Feature Matrix&lt;/a&gt;. Feel free to contribute to &lt;a href="https://github.com/dokc/operator-feature-matrix" target="_blank" rel="noopener noreferrer"&gt;OFM&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;New to Kubernetes Operators and databases? Check out &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;Percona’s Operators&lt;/a&gt; for MySQL, PostgreSQL, and MongoDB.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Kubeconeu</category>
      <category>Opensource</category>
      <category>CNCF</category>
      <category>Kubernetes</category>
      <category>DoK</category>
      <category>Operators</category>
      <media:thumbnail url="https://percona.community/blog/2023/05/01-pd-intro_hu_19ef74926ec419b2.jpeg"/>
      <media:content url="https://percona.community/blog/2023/05/01-pd-intro_hu_c928a8c7630a9410.jpeg" medium="image"/>
    </item>
    <item>
      <title>Easy Way to Start Contributing to Open Source With PMM Documentation</title>
      <link>https://percona.community/blog/2023/05/18/easy-way-to-start-contributing-to-open-source-with-pmm-documentation/</link>
      <guid>https://percona.community/blog/2023/05/18/easy-way-to-start-contributing-to-open-source-with-pmm-documentation/</guid>
      <pubDate>Thu, 18 May 2023 00:00:00 UTC</pubDate>
      <description>If you are a user of Percona Monitoring and Management and noticed any typo or inaccurate information in its documentation, you can easily correct it yourself in the repository following detailed instructions in README.md. But if you are not experienced in open source contributions, you may still feel uneasy about following those steps. This post is for you! We will walk through the main steps with pictures and explanations.</description>
      <content:encoded>&lt;p&gt;If you are a user of Percona Monitoring and Management and noticed any typo or inaccurate information in its &lt;a href="https://docs.percona.com/percona-monitoring-and-management/index.html" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, you can easily correct it yourself in the &lt;a href="https://github.com/percona/pmm-doc" target="_blank" rel="noopener noreferrer"&gt;repository&lt;/a&gt; following detailed instructions in &lt;a href="https://github.com/percona/pmm-doc#readme" target="_blank" rel="noopener noreferrer"&gt;README.md&lt;/a&gt;. But if you are not experienced in open source contributions, you may still feel uneasy about following those steps. This post is for you! We will walk through the main steps with pictures and explanations.&lt;/p&gt;
&lt;h2 id="create-a-fork"&gt;Create a Fork&lt;/h2&gt;
&lt;p&gt;First, you need to create a fork from the main repository to your account. In the top-right corner of the page, click &lt;strong&gt;Fork - Create a new fork&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/contribution2_hu_94d1252e7873a71a.jpg 480w, https://percona.community/blog/2023/05/contribution2_hu_77c7ef72837f51a7.jpg 768w, https://percona.community/blog/2023/05/contribution2_hu_3031673fbd200a2b.jpg 1400w"
src="https://percona.community/blog/2023/05/contribution2.jpg" alt="Contribution" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="build-documentation-with-docker"&gt;Build Documentation With Docker&lt;/h2&gt;
&lt;p&gt;The easiest way to build the documentation is with Docker. If you don’t have it installed, download it from the official Docker website and follow the instructions. Installation is quick and no more difficult than installing any other app.&lt;/p&gt;
&lt;p&gt;Open your fork on GitHub and clone that repository to your local environment.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git clone git@github.com:{user-name}/pmm-doc.git&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/contribution1_hu_c45b801f2d40d62c.jpg 480w, https://percona.community/blog/2023/05/contribution1_hu_f71352ebd639c2cb.jpg 768w, https://percona.community/blog/2023/05/contribution1_hu_f589238117718bb0.jpg 1400w"
src="https://percona.community/blog/2023/05/contribution1.jpg" alt="Contribution" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Change directory to &lt;strong&gt;pmm-doc&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;cd pmm-doc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;To check how our edits will look, we need to build the documentation for live previewing. Run:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;docker run --rm -v $(pwd):/docs -p 8000:8000 perconalab/pmm-doc-md mkdocs serve --dev-addr=0.0.0.0:8000&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Wait until you see &lt;code&gt;INFO - Start detecting changes&lt;/code&gt;. When the documentation is ready to work with, it will be available at &lt;a href="http://0.0.0.0:8000/" target="_blank" rel="noopener noreferrer"&gt;http://0.0.0.0:8000&lt;/a&gt; in your browser, and it will reflect all changes that you make locally.&lt;/p&gt;
&lt;h2 id="make-changes"&gt;Make Changes&lt;/h2&gt;
&lt;p&gt;In a new Terminal tab, create a new branch and make your changes. Save them, create a commit, and push it to your fork.&lt;/p&gt;
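&lt;p&gt;The branch-and-commit cycle can be sketched as follows. The branch name, file, and commit messages are examples; in practice you run the branch and commit steps inside your &lt;strong&gt;pmm-doc&lt;/strong&gt; clone (a throwaway repository is used here so the commands are self-contained):&lt;/p&gt;

```shell
# Throwaway repository so the sketch runs anywhere; in practice, work in pmm-doc.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
echo "# PMM docs" > index.md
git add index.md
git commit -qm "Initial commit"

# Create a working branch for your fix (the branch name is an example)
git checkout -q -b fix-typo-in-docs
echo "corrected text" >> index.md
git add index.md
git commit -qm "Fix typo in index page"

# In your real clone, push the branch to your fork and open a pull request:
#   git push -u origin fix-typo-in-docs
```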
&lt;p&gt;Create a pull request to the main repository. You will also need to sign the CLA so that we can merge your changes.&lt;/p&gt;
&lt;p&gt;You did it! Congratulations! Now wait for the feedback from the Percona team. If there is no problem with your PR, it will be merged into the main repository.&lt;/p&gt;
&lt;h2 id="next-steps"&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;To make further changes, you need to keep your repository up-to-date with the upstream one. There are several ways to do it. You can find the information &lt;a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The simplest way is to do it using the GitHub interface. Just click on &lt;strong&gt;Sync fork&lt;/strong&gt; and then &lt;strong&gt;Update branch&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/contribution3_hu_56b6b21ef294a916.jpg 480w, https://percona.community/blog/2023/05/contribution3_hu_190d31d23ea9722e.jpg 768w, https://percona.community/blog/2023/05/contribution3_hu_e84d49da02568d17.jpg 1400w"
src="https://percona.community/blog/2023/05/contribution3.jpg" alt="Contribution" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After that, you will be able to update your local repository with the &lt;code&gt;git pull&lt;/code&gt; command.&lt;/p&gt;
&lt;p&gt;If you face any problems with contributions to Percona repositories, don’t hesitate to contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; or ask your question on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <category>Opensource</category>
      <category>Documentation</category>
      <media:thumbnail url="https://percona.community/blog/2023/05/PMM-Doc-Contribute_hu_405740408eff0264.jpg"/>
      <media:content url="https://percona.community/blog/2023/05/PMM-Doc-Contribute_hu_7c2bbccc235458aa.jpg" medium="image"/>
    </item>
    <item>
      <title>My Experience at Kubecon Europe in Amsterdam</title>
      <link>https://percona.community/blog/2023/05/11/experience-at-kubecon-europe-in-amsterdam/</link>
      <guid>https://percona.community/blog/2023/05/11/experience-at-kubecon-europe-in-amsterdam/</guid>
      <pubDate>Thu, 11 May 2023 00:00:00 UTC</pubDate>
      <description>Kubecon is the most significant event focused on the Kubernetes ecosystem. It takes place once a year in North America, Europe, and Asia. It is a perfect opportunity to learn from experts, meet friends, grow your network, and attend talks at a varied technical level and meetings focused on CNCF communities. This time I attended Kubecon in Amsterdam. The theme for this version of Kubecon was: community-in-bloom because we are still healing from COVID, and people are getting back to feeling comfortable participating in events.</description>
      <content:encoded>&lt;p&gt;&lt;strong&gt;Kubecon&lt;/strong&gt; is the most significant event focused on the &lt;strong&gt;Kubernetes&lt;/strong&gt; ecosystem. It takes place once a year in North America, Europe, and Asia. It is a perfect opportunity to learn from experts, meet friends, grow your network, and attend talks at a varied technical level and meetings focused on CNCF communities.
This time I attended Kubecon in Amsterdam. The theme for this version of Kubecon was: &lt;strong&gt;community-in-bloom&lt;/strong&gt; because we are still healing from COVID, and people are getting back to feeling comfortable participating in events.&lt;/p&gt;
&lt;p&gt;It is not my first time attending &lt;strong&gt;Kubecon&lt;/strong&gt;; this is my fourth time! What was different this time was that &lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt;, the company I work for, sponsored Kubecon. This means we also had a booth at the event to share what we do at Percona.&lt;/p&gt;
&lt;p&gt;Yessss!! Percona had a booth at Kubecon, Amsterdam.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/01-percona-kubecon_hu_c1ca5fc462feb3ca.jpg 480w, https://percona.community/blog/2023/05/01-percona-kubecon_hu_84ba9093843a0bf7.jpg 768w, https://percona.community/blog/2023/05/01-percona-kubecon_hu_3fd2fbf0a8ed07bb.jpg 1400w"
src="https://percona.community/blog/2023/05/01-percona-kubecon.jpg" alt="Percona At Kubecon" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Percona is a 100% &lt;strong&gt;remote&lt;/strong&gt; company, and &lt;strong&gt;Kubecon&lt;/strong&gt; was the opportunity to meet part of the team I work with daily.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/02-percona-team_hu_6a1b9d4a83eb6b5f.JPG 480w, https://percona.community/blog/2023/05/02-percona-team_hu_9001b6a047c40d27.JPG 768w, https://percona.community/blog/2023/05/02-percona-team_hu_145c32af2f2c18a4.JPG 1400w"
src="https://percona.community/blog/2023/05/02-percona-team.JPG" alt="Percona Team" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We started by meeting with friends in one of the most popular places in &lt;strong&gt;Amsterdam&lt;/strong&gt;. I met people from different tech communities and &lt;strong&gt;CNCF ambassadors&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/03-kubecon-happyhour_hu_afb248e4c85d7a87.jpeg 480w, https://percona.community/blog/2023/05/03-kubecon-happyhour_hu_cce5ceb33f1eab7e.jpeg 768w, https://percona.community/blog/2023/05/03-kubecon-happyhour_hu_640e67a17e3019b3.jpeg 1400w"
src="https://percona.community/blog/2023/05/03-kubecon-happyhour.jpeg" alt="Kubecon Happy Hour" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you are in Amsterdam and don’t ride a bicycle, it is an incomplete experience&lt;/em&gt;. We rode a bike to the convention center and collected the event badge.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/04-bicicle_hu_cf11f52540367ec6.jpg 480w, https://percona.community/blog/2023/05/04-bicicle_hu_86d206f37bc97a7d.jpg 768w, https://percona.community/blog/2023/05/04-bicicle_hu_951c1fa69f4379d8.jpg 1400w"
src="https://percona.community/blog/2023/05/04-bicicle.jpg" alt="Amsterdam Bicycle" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After that, there were three intense days, many activities, meetings, and sessions.&lt;/p&gt;
&lt;h2 id="community"&gt;Community&lt;/h2&gt;
&lt;p&gt;I met with several members of the &lt;strong&gt;Docker&lt;/strong&gt; and &lt;strong&gt;CNCF&lt;/strong&gt; communities. This is the meeting of the Docker captains and various members of the Docker team. It was an amazing experience to meet them in person.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/05/05-docker-community.jpeg" alt="Docker Captains" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;And this is the group photo that all the CNCF ambassadors took at Kubecon. There are 155 CNCF ambassadors around the world, contributors to and advocates of the CNCF ecosystem.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/06-cncf-ambassadors_hu_7700f94bcf2d2c3.JPG 480w, https://percona.community/blog/2023/05/06-cncf-ambassadors_hu_b09ae86575012c0d.JPG 768w, https://percona.community/blog/2023/05/06-cncf-ambassadors_hu_1c7a7e407b0aeb33.JPG 1400w"
src="https://percona.community/blog/2023/05/06-cncf-ambassadors.JPG" alt="CNCF ambassadors breakfast" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="lightning-talks"&gt;Lightning Talks&lt;/h2&gt;
&lt;p&gt;I attended a large part of the lightning sessions, which were very inspiring. Each speaker had less than five minutes to explain a specific topic.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Kevin Patrick&lt;/strong&gt; talked about the &lt;a href="https://www.youtube.com/watch?v=eAoC1ordaXQ" target="_blank" rel="noopener noreferrer"&gt;Armada as a Sandbox project in the CNCF&lt;/a&gt;, in which the principal goal is enabling batch processing across multiple Kubernetes clusters. Kevin made an introduction to Armada and showed the integration with &lt;a href="https://airflow.apache.org/" target="_blank" rel="noopener noreferrer"&gt;Apache Airflow&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The next talk was about &lt;a href="https://www.youtube.com/watch?v=BDA7atvmnV4" target="_blank" rel="noopener noreferrer"&gt;The CNCF Board Game Rules&lt;/a&gt;, where &lt;strong&gt;Peter O’Neill&lt;/strong&gt; reimagined the world of the CNCF as a role-playing game (RPG) board game. It was great fun to see Peter present the CNCF adventure through games.&lt;/p&gt;
&lt;p&gt;This was one of my favorite lightning talks: &lt;a href="https://www.youtube.com/watch?v=jCz9QPrJ6Eo" target="_blank" rel="noopener noreferrer"&gt;A Beginners Guide to Conference Speaking&lt;/a&gt; with &lt;strong&gt;Paula Kennedy&lt;/strong&gt;. She showed a friendly way to prepare proposals for CFPs and shared advice that will help us through the process.&lt;/p&gt;
&lt;p&gt;Another good lightning session was about &lt;a href="https://www.youtube.com/watch?v=Wn0S6CTXGS4" target="_blank" rel="noopener noreferrer"&gt;Power-Aware Scheduling in Kubernetes&lt;/a&gt; with &lt;strong&gt;Yuan Chen&lt;/strong&gt; from Apple. In this talk, Yuan gave an overview of a new scheduler feature to support power-aware scheduling in Kubernetes and how it can help safely grow server hardware and data center infrastructure while improving resource utilization and workload reliability for Kubernetes clusters.&lt;/p&gt;
&lt;p&gt;Another of my favorites was &lt;a href="https://www.youtube.com/watch?v=Kp6GQjZixPE" target="_blank" rel="noopener noreferrer"&gt;Talking to Kubernetes with Rust&lt;/a&gt; with &lt;strong&gt;James Laverack&lt;/strong&gt;, where he showed how to interact with Kubernetes in Rust.&lt;/p&gt;
&lt;h2 id="kubernetes-operators-panel-discussion"&gt;Kubernetes Operators Panel Discussion&lt;/h2&gt;
&lt;p&gt;I also attended a &lt;strong&gt;Panel Discussion about Kubernetes Operators&lt;/strong&gt;.
In this panel discussion, &lt;strong&gt;Xing Yang&lt;/strong&gt;, &lt;strong&gt;Melissa Logan&lt;/strong&gt;, &lt;strong&gt;Sergey Pronin&lt;/strong&gt;, and &lt;strong&gt;Alvaro Hernandez&lt;/strong&gt; talked about the challenges end users face when running data workloads with Kubernetes Operators, and about what is still needed for operators to fully support data workloads.
Check out this fantastic &lt;a href="https://www.youtube.com/watch?v=TmDdkBPW_hI&amp;list=PLj6h78yzYM2PyrvCoOii4rAopBswfz1p7&amp;index=184" target="_blank" rel="noopener noreferrer"&gt;talk&lt;/a&gt; and learn more about &lt;strong&gt;Kubernetes Operators&lt;/strong&gt;.
A curious and interesting fact: most of the attendees run data workloads with Kubernetes Operators.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/07-kuberentes-operators_hu_71012bd382d2162e.JPG 480w, https://percona.community/blog/2023/05/07-kuberentes-operators_hu_416aefb6a1340926.JPG 768w, https://percona.community/blog/2023/05/07-kuberentes-operators_hu_cae55e1fb1827ff2.JPG 1400w"
src="https://percona.community/blog/2023/05/07-kuberentes-operators.JPG" alt="Kubernetes Operator Panel Discussion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="ebpf"&gt;eBPF&lt;/h2&gt;
&lt;p&gt;The last talk I attended was an &lt;strong&gt;eBPF&lt;/strong&gt; talk by &lt;strong&gt;Liz Rice&lt;/strong&gt; (Chief Open Source Officer, Isovalent).
In this talk, Liz showed a demo of how Cilium and its ClusterMesh feature can take care of many aspects of connectivity across multiple clusters in a cloud-agnostic way. Check out this &lt;a href="https://www.youtube.com/watch?v=fJiuqRY5Oi4&amp;t=22s" target="_blank" rel="noopener noreferrer"&gt;talk on the CNCF YouTube channel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/08-ebpf_hu_f32e6846c31f46c8.jpg 480w, https://percona.community/blog/2023/05/08-ebpf_hu_7ba6da22255833a.jpg 768w, https://percona.community/blog/2023/05/08-ebpf_hu_fa07e96002c796a4.jpg 1400w"
src="https://percona.community/blog/2023/05/08-ebpf.jpg" alt="Kubecon eBPF" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="clossign-kubecon-amsterdam"&gt;Clossign Kubecon Amsterdam&lt;/h2&gt;
&lt;p&gt;Finally, at Percona, we closed the event with a raffle to take home an Atari; expectations were high, it was a lot of fun, and many Percona lovers came to participate in the raffle.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/05/09-raffle_hu_6ebda76d1a389918.jpg 480w, https://percona.community/blog/2023/05/09-raffle_hu_e02825ea1325de47.jpg 768w, https://percona.community/blog/2023/05/09-raffle_hu_ac3f6b8018f60f59.jpg 1400w"
src="https://percona.community/blog/2023/05/09-raffle.jpg" alt="Percona Raffle" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;You can check which &lt;a href="https://www.percona.com/events" target="_blank" rel="noopener noreferrer"&gt;events&lt;/a&gt; Percona will attend in the coming months.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Events</category>
      <category>Opensource</category>
      <category>CNCF</category>
      <category>Kubernetes</category>
      <media:thumbnail url="https://percona.community/blog/2023/05/00-kubeconeu-intro_hu_70f7290cee15e82a.jpg"/>
      <media:content url="https://percona.community/blog/2023/05/00-kubeconeu-intro_hu_1dc5373850d16e87.jpg" medium="image"/>
    </item>
    <item>
      <title>PostgreSQL: Query Optimization With Python and PgBouncer</title>
      <link>https://percona.community/blog/2023/04/25/postgresql-query-optimization-with-python-and-pgbouncer/</link>
      <guid>https://percona.community/blog/2023/04/25/postgresql-query-optimization-with-python-and-pgbouncer/</guid>
      <pubDate>Tue, 25 Apr 2023 00:00:00 UTC</pubDate>
      <description> Database application by Nick Youngson CC BY-SA 3.0 Pix4free</description>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/04/database-application.jpg" alt="Database application" /&gt;&lt;figcaption&gt;Database application by Nick Youngson CC BY-SA 3.0 Pix4free&lt;/figcaption&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;A few months ago I wrote a few blog posts on how to generate test data for your database project using Python, which you can find on the Percona blog and the Community blog:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/blog/how-to-generate-test-data-for-mysql-with-python/" target="_blank" rel="noopener noreferrer"&gt;How To Generate Test Data for MySQL with Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/blog/how-to-generate-test-data-for-mongodb-with-python/" target="_blank" rel="noopener noreferrer"&gt;How To Generate Test Data for MongoDB With Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://percona.community/blog/2023/01/09/how-to-generate-test-data-for-your-database-project-with-python/" target="_blank" rel="noopener noreferrer"&gt;How To Generate Test Data for Your Database Project With Python&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The basic idea is to create a script that uses &lt;a href="https://github.com/joke2k/faker" target="_blank" rel="noopener noreferrer"&gt;Faker&lt;/a&gt;, a Python library for generating fake data. The script does the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Divide the work across every available CPU core by implementing multiprocessing&lt;/li&gt;
&lt;li&gt;Generate a total of 60 thousand records, split among the number of CPU cores minus one&lt;/li&gt;
&lt;li&gt;Each set of records is stored in a Pandas DataFrame, then concatenated into a single DataFrame&lt;/li&gt;
&lt;li&gt;The DataFrame is inserted into the database using Pandas’ &lt;code&gt;to_sql&lt;/code&gt; method, and pymongo’s &lt;code&gt;insert_many&lt;/code&gt; method&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;How can the script be optimized? Instead of generating the data, storing it in a DataFrame, and then inserting it into the database, you can have every CPU core insert the data while generating it, without staging it anywhere before running the corresponding SQL statements. Multiprocessing is implemented to use every CPU core available, but you also need to configure a connection pool for your PostgreSQL server.&lt;/p&gt;
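&lt;p&gt;As a rough sketch of that optimized flow (assuming a hypothetical &lt;code&gt;employees&lt;/code&gt; table and placeholder credentials; this is not the exact script from the earlier posts), each worker generates fake rows with Faker and inserts them as they are produced, connecting through PgBouncer on port 6432:&lt;/p&gt;

```python
# Sketch: each worker generates fake rows and inserts them straight into
# PostgreSQL (via PgBouncer) without staging them in a DataFrame first.
# Table name, columns, and connection details below are illustrative.
from multiprocessing import Pool, cpu_count

TOTAL_RECORDS = 60_000

def chunk_sizes(total: int, workers: int) -> list[int]:
    """Split `total` records across `workers` as evenly as possible."""
    base, rest = divmod(total, workers)
    return [base + (1 if i < rest else 0) for i in range(workers)]

def generate_and_insert(n: int) -> int:
    """Generate `n` fake rows and insert each one as it is produced."""
    # Imports kept local so the sketch loads even where these are not installed.
    import psycopg2
    from faker import Faker
    fake = Faker()
    conn = psycopg2.connect(host="localhost", port=6432, dbname="company",
                            user="username", password="password")
    with conn, conn.cursor() as cur:
        for _ in range(n):
            cur.execute(
                "INSERT INTO employees (name, email) VALUES (%s, %s)",
                (fake.name(), fake.email()),
            )
    conn.close()
    return n

if __name__ == "__main__":
    workers = max(cpu_count() - 1, 1)
    with Pool(workers) as pool:
        inserted = pool.map(generate_and_insert,
                            chunk_sizes(TOTAL_RECORDS, workers))
    print(f"Inserted {sum(inserted)} records")
```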
&lt;p&gt;Through this blog post, you will learn how to install and configure PgBouncer with Python to implement a connection pool for your application.&lt;/p&gt;
&lt;h2 id="pgbouncer"&gt;PgBouncer&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.pgbouncer.org/" target="_blank" rel="noopener noreferrer"&gt;PgBouncer&lt;/a&gt; is a PostgreSQL connection pooler. Any target application can be connected to PgBouncer as if it were a PostgreSQL server, and PgBouncer will create a connection to the actual server, or it will reuse one of its existing connections.&lt;/p&gt;
&lt;p&gt;The aim of PgBouncer is to lower the performance impact of opening new connections to PostgreSQL.&lt;/p&gt;
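&lt;p&gt;Because PgBouncer speaks the PostgreSQL protocol, the only application-side change is the connection target: point your client at PgBouncer’s port instead of PostgreSQL’s. A minimal sketch with &lt;code&gt;psycopg2&lt;/code&gt; (host, database, and user are placeholders):&lt;/p&gt;

```python
# Connection strings differ only in the port: 5432 goes straight to PostgreSQL,
# 6432 goes through PgBouncer. Host, database, and user are placeholders.
DIRECT_DSN = "host=localhost port=5432 dbname=company user=username"
POOLED_DSN = "host=localhost port=6432 dbname=company user=username"

def connect(dsn: str):
    """Open a connection; works identically against PostgreSQL or PgBouncer."""
    import psycopg2  # local import: actually connecting requires a running server
    return psycopg2.connect(dsn)
```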
&lt;h3 id="installation"&gt;Installation&lt;/h3&gt;
&lt;p&gt;If you’re an Ubuntu user, you can install PgBouncer from the repositories:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install pgbouncer -y&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If it is not available in the repositories, you can follow the instructions below for both Debian and Ubuntu, as described in the Scaleway documentation.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create the &lt;code&gt;apt&lt;/code&gt; repository configuration file&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" &gt; /etc/apt/sources.list.d/pgdg.list'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Import the repository signing key&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Update the &lt;code&gt;apt&lt;/code&gt; package manager&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt update&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="4"&gt;
&lt;li&gt;Install PgBouncer using &lt;code&gt;apt&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install pgbouncer -y&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="configuration"&gt;Configuration&lt;/h3&gt;
&lt;p&gt;After installing PgBouncer, edit the configuration files, as stated in the Scaleway &lt;a href="https://www.scaleway.com/en/docs/tutorials/install-pgbouncer/" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Set up the PostgreSQL server details in &lt;code&gt;/etc/pgbouncer/pgbouncer.ini&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;database_name = host=localhost port=5432 dbname=database_name&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You may also want to set &lt;code&gt;listen_addr&lt;/code&gt; to &lt;code&gt;*&lt;/code&gt; if you want PgBouncer to listen for TCP connections on all addresses, or set a list of IP addresses.&lt;/p&gt;
&lt;p&gt;The default &lt;code&gt;listen_port&lt;/code&gt; is &lt;code&gt;6432&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;From &lt;a href="https://www.compose.com/articles/how-to-pool-postgresql-connections-with-pgbouncer/" target="_blank" rel="noopener noreferrer"&gt;this article&lt;/a&gt; by Abdullah Alger: of the settings &lt;code&gt;max_client_conn&lt;/code&gt; and &lt;code&gt;default_pool_size&lt;/code&gt;, the former is the maximum number of client connections allowed, and the latter is how many server connections to allow per database. The defaults are &lt;code&gt;100&lt;/code&gt; and &lt;code&gt;20&lt;/code&gt;, respectively.&lt;/p&gt;
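&lt;p&gt;Putting these settings together, a minimal &lt;code&gt;/etc/pgbouncer/pgbouncer.ini&lt;/code&gt; might look like this (the database entry and pool sizes are examples, not the only valid values):&lt;/p&gt;

```ini
[databases]
company = host=localhost port=5432 dbname=company

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
max_client_conn = 100
default_pool_size = 20
```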
&lt;ol start="2"&gt;
&lt;li&gt;Edit the &lt;code&gt;/etc/pgbouncer/userlist.txt&lt;/code&gt; file and add your PostgreSQL credentials&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;“username” “password”&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Add the IP address of the PgBouncer server to the PostgreSQL &lt;code&gt;pg_hba.conf&lt;/code&gt; file&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;host all all PGBOUNCER_IP/NETMASK trust&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;trust&lt;/code&gt; authentication method shown above can be used in a development environment but is not recommended for production. For production, &lt;code&gt;hba&lt;/code&gt; authentication is recommended.&lt;/p&gt;
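&lt;p&gt;To switch to &lt;code&gt;hba&lt;/code&gt; authentication, point PgBouncer at an HBA-style rules file in its own configuration (the file path here is an example):&lt;/p&gt;

```ini
[pgbouncer]
auth_type = hba
auth_hba_file = /etc/pgbouncer/hba.conf
```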
&lt;ol start="4"&gt;
&lt;li&gt;After configuring PgBouncer, reload both the PostgreSQL and PgBouncer services&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo systemctl reload postgresql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo systemctl reload pgbouncer&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For more information about additional configuration options, check the PgBouncer &lt;a href="https://www.pgbouncer.org/config.html" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="python"&gt;Python&lt;/h2&gt;
&lt;h3 id="requirements"&gt;Requirements&lt;/h3&gt;
&lt;h4 id="dependencies"&gt;Dependencies&lt;/h4&gt;
&lt;p&gt;Make sure all the dependencies are installed before creating the Python script that will generate the data for your project.&lt;/p&gt;
&lt;p&gt;You can create a &lt;code&gt;requirements.txt&lt;/code&gt; file with the following content:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tqdm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;faker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;psycopg2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or if you’re using Anaconda, create an &lt;code&gt;environment.yml&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;name: percona
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;dependencies:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - python=3.10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - tqdm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - faker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - psycopg2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can change the Python version; this script has been proven to work with Python 3.7, 3.8, 3.9, 3.10, and 3.11.&lt;/p&gt;
&lt;p&gt;Run the following command if you’re using &lt;code&gt;pip&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pip install -r requirements.txt&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or run the following statement to configure the project environment when using Anaconda:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;conda env create -f environment.yml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="database"&gt;Database&lt;/h4&gt;
&lt;p&gt;Now that you have the dependencies installed, you must create a database named &lt;code&gt;company&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Log into PostgreSQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo su postgres
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ psql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create the &lt;code&gt;company&lt;/code&gt; database:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create database company;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And create the &lt;code&gt;employees&lt;/code&gt; table:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create table employees(
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id serial primary key,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fist_name varchar(50) not null,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; last_name varchar(50) not null,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; job varchar(100) not null,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; address varchar(200) not null,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; city varchar(100) not null,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; email varchar(50) not null
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="inserting-data"&gt;Inserting Data&lt;/h3&gt;
&lt;p&gt;Now it’s time to create the Python script that will generate the data and insert it into the database.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;from multiprocessing import Pool, cpu_count
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;import psycopg2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;from tqdm import tqdm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;from faker import Faker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;fake = Faker()
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;num_cores = cpu_count() - 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;def insert_data(arg):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; x = int(60000/num_cores)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; print(x)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; with psycopg2.connect(database="database_name", user="user", password="password", host="localhost", port="6432") as conn:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; with conn.cursor() as cursor:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; for i in tqdm(range(x), desc="Inserting Data"):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sql = "INSERT INTO employees (first_name, last_name, job, address, city, email) VALUES (%s, %s, %s, %s, %s, %s)"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; val = (fake.first_name(), fake.last_name(), fake.job(), fake.address(), fake.city(), fake.email())
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; cursor.execute(sql, val)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;if __name__=="__main__":
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; with Pool() as pool:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pool.map(insert_data, range(num_cores))&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;First, the multiprocessing pool is created and configured to use all available CPU cores minus one. Each worker calls the &lt;code&gt;insert_data()&lt;/code&gt; function.&lt;/p&gt;
&lt;p&gt;On each call to the function, a connection to the database is established through PgBouncer’s default port (6432), meaning the application opens as many connections as &lt;code&gt;num_cores&lt;/code&gt;, the variable that holds the number of CPU cores in use.&lt;/p&gt;
&lt;p&gt;Then, the data will be generated with Faker and inserted into the database by executing the corresponding SQL statements.&lt;/p&gt;
&lt;p&gt;On a CPU with 16 cores, each call to the function inserts 60,000 divided by 15 records, that is, 4,000 SQL statements are executed per worker.&lt;/p&gt;
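&lt;p&gt;That arithmetic can be sketched as a small helper (an illustration only; &lt;code&gt;rows_per_worker&lt;/code&gt; is a hypothetical name, not part of the script above):&lt;/p&gt;

```python
# Split a fixed number of inserts evenly across worker processes,
# mirroring the script: 60000 rows over (CPU cores - 1) workers.
TOTAL_ROWS = 60000

def rows_per_worker(cpu_cores: int, total_rows: int = TOTAL_ROWS) -> int:
    workers = cpu_cores - 1            # one core is left free, as in the script
    return int(total_rows / workers)   # matches x = int(60000/num_cores)

print(rows_per_worker(16))  # 4000 inserts per worker on a 16-core CPU
```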
&lt;p&gt;This way, you can modify the script and optimize it by configuring a connection pool with PgBouncer.&lt;/p&gt;</content:encoded>
      <author>Mario García</author>
      <category>PostgreSQL</category>
      <category>Python</category>
      <media:thumbnail url="https://percona.community/blog/2023/04/database-application_hu_8c3750ef5226c2f2.jpg"/>
      <media:content url="https://percona.community/blog/2023/04/database-application_hu_4464c2ce00914a53.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.37 preview release</title>
      <link>https://percona.community/blog/2023/04/20/preview-release/</link>
      <guid>https://percona.community/blog/2023/04/20/preview-release/</guid>
      <pubDate>Thu, 20 Apr 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.37 preview release Hello folks! Percona Monitoring and Management (PMM) 2.37 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-237-preview-release"&gt;Percona Monitoring and Management 2.37 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.37 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;You can find the Release Notes &lt;a href="https://pmm-2-37-0-pr-1043.onrender.com/release-notes/2.37.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker-installation"&gt;Percona Monitoring and Management server docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.37.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; In order to use the DBaaS functionality during the Percona Monitoring and Management preview release, you should add the following environment variable when starting PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.37.0-rc&lt;/code&gt;&lt;/p&gt;
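&lt;p&gt;For example, with the Docker installation above, the variable can be passed with &lt;code&gt;-e&lt;/code&gt; (a sketch only; the container name and port mapping are illustrative, so adjust them to your setup):&lt;/p&gt;

```shell
# Hypothetical invocation: start the preview PMM server with the
# DBaaS client image override; flags other than -e are illustrative.
docker run -d --name pmm-server -p 443:443 \
  -e PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.37.0-rc \
  perconalab/pmm-server:2.37.0-rc
```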
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.37 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-5256.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.37.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.37.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-013c92f3d0c727b8f&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us in &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;https://forums.percona.com/&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <category>Release</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>FerretDB - A Quick Look</title>
      <link>https://percona.community/blog/2023/04/12/ferretdb-a-quick-look/</link>
      <guid>https://percona.community/blog/2023/04/12/ferretdb-a-quick-look/</guid>
      <pubDate>Wed, 12 Apr 2023 00:00:00 UTC</pubDate>
      <description>There is an old saying that what looks like a duck and quacks like a duck is probably a duck. But what looks like MongoDB and acts like MongoDB could be FerretDB! To greatly simplify the technology behind this project, FerretDB speaks, or quacks, MongoDB but stores the data in PostgreSQL. PostgreSQL has had a rich JSON data environment for years and FerretDB takes advantage of this capability. This is a truly Open Source MongoDB alternative and was released under the Apache 2.0 license.</description>
      <content:encoded>&lt;p&gt;There is an old saying that what looks like a duck and quacks like a duck is probably a duck. But what looks like MongoDB and acts like MongoDB could be FerretDB! To greatly simplify the technology behind this project, FerretDB speaks, or quacks, MongoDB but stores the data in PostgreSQL. PostgreSQL has had a rich JSON data environment for years and FerretDB takes advantage of this capability. This is a truly Open Source MongoDB alternative and was released under the Apache 2.0 license.&lt;/p&gt;
&lt;p&gt;FerretDB has been in development for a while, but they &lt;a href="https://blog.ferretdb.io/ferretdb-1-0-ga-opensource-mongodb-alternative/" target="_blank" rel="noopener noreferrer"&gt;announced&lt;/a&gt; the first Generally Available Release of their product recently.&lt;/p&gt;
&lt;p&gt;In the announcement is a quick “How To Get Started” section which details how to get FerretDB running with the help of Docker. As can be seen below, this is a very simple process.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run -d --rm --name ferretdb -p 27017:27017 ghcr.io/ferretdb/all-in-one
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Unable to find image 'ghcr.io/ferretdb/all-in-one:latest' locally
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;latest: Pulling from ferretdb/all-in-one
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;f1f26f570256: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1c04f8741265: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;dffc353b86eb: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;18c4a9e6c414: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;81f47e7b3852: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;5e26c947960d: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;a2c3dc85e8c3: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;17df73636f01: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;713535cdf17c: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;52278a39eea2: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;4ded87da67f6: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;05fae4678312: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;56b4f4aeea2d: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;68c486387c4f: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;5eb3eee800a9: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;8e5dd809e820: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;d3e85fce5b45: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;e6810cdbd43b: Pull complete
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Digest: sha256:072312577c1daf469ac77d09284a638dea98b63f4f4334fd54959324847b93aa
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Status: Downloaded newer image for ghcr.io/ferretdb/all-in-one:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;58f00a86bad172674479f3663563af274e0dd3d15249029a403d0c85039b7ab5&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now that FerretDB is ready, we can use the MongoDB shell to speak to it.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker exec -it ferretdb mongosh&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Current Mongosh Log ID: 6435963392d12db06bdb7ecc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&amp;serverSelectionTimeoutMS=2000&amp;appName=mongosh+1.8.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Using MongoDB: 6.0.42
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Using Mongosh: 1.8.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(the remaining output was omitted for brevity)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Entering some very basic MongoDB commands works as expected. Well, for the most part.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; show collections;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; db.createCollection('test');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{ ok: 1 }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; show collections;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; db.test.insert({name: "Dave", state: "Texas"});
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DeprecationWarning: Collection.insert() is deprecated. Use insertOne, insertMany, or bulkWrite.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; acknowledged: true,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; insertedIds: { '0': ObjectId("6435ac52c4a22ac27f30e2a2") }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; db.test.insertOne({name: "Dave", state: "Texas"});
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; acknowledged: true,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; insertedId: ObjectId("6435ac5dc4a22ac27f30e2a3")
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; db.test.find();
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; _id: ObjectId("6435ac52c4a22ac27f30e2a2"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: 'Dave',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; state: 'Texas'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; _id: ObjectId("6435ac5dc4a22ac27f30e2a3"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: 'Dave',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; state: 'Texas'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I expected that the insert via the deprecated ‘insert’ command would not create a document, but I was wrong. After looking at the different ObjectIds, it took me a moment to realize that the ‘insert’ and ‘insertOne’ commands had both worked.
But what do we know about the server itself? Issuing a serverStatus command confirms we are talking to the FerretDB server.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; db.runCommand({serverStatus: 1});
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; host: '58f00a86bad1',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; version: '6.0.42',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; process: 'ferretdb',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pid: Long("10"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; uptime: 6277.435694035,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; uptimeMillis: Long("6277435"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; uptimeEstimate: Long("6277"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; localTime: ISODate("2023-04-11T18:59:46.488Z"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; freeMonitoring: { state: 'undecided' },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; metrics: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; commands: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ping: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; getFreeMonitoringStatus: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; create: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; insert: { total: Long("3"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; atlasVersion: { total: Long("1"), failed: Long("1") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; getLog: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; buildInfo: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; getCmdLineOpts: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; listCollections: { total: Long("2"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ismaster: { total: Long("611"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; find: { total: Long("4"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; getParameter: { total: Long("1"), failed: Long("1") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; hello: { total: Long("1"), failed: Long("0") },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; unknown: { total: Long("5"), failed: Long("0") }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ok: 1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; catalogStats: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; collections: 210,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; capped: 0,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; timeseries: 0,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; views: 0,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; internalCollections: 0,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; internalViews: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test&gt; &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;FerretDB is a MongoDB protocol server built upon PostgreSQL. Those unhappy with MongoDB’s license change away from open source now have another path they can follow. I will have a full session at &lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live&lt;/a&gt; on &lt;a href="https://www.ferretdb.io/" target="_blank" rel="noopener noreferrer"&gt;FerretDB&lt;/a&gt; where I will delve into how complete an option this is for those desiring an open solution.&lt;/p&gt;</content:encoded>
      <author>David Stokes</author>
      <category>FerretDB</category>
      <category>MongoDB</category>
      <category>Databases</category>
      <category>Opensource</category>
      <media:thumbnail url="https://percona.community/blog/2023/04/Ferret-1200_hu_f84f546d74aa0f3b.jpg"/>
      <media:content url="https://percona.community/blog/2023/04/Ferret-1200_hu_1eab71f1011408b4.jpg" medium="image"/>
    </item>
    <item>
      <title>​​Using the JSON data type with MySQL 8 - Part II</title>
      <link>https://percona.community/blog/2023/04/11/using-the-json-data-type-with-mysql-8-ii/</link>
      <guid>https://percona.community/blog/2023/04/11/using-the-json-data-type-with-mysql-8-ii/</guid>
      <pubDate>Tue, 11 Apr 2023 00:00:00 UTC</pubDate>
      <description>If you read - Using the JSON data type with MySQL 8 - Part I, you will see that inserting data into MySQL of JSON type is a very common and effective practice. Now we’ll see how to do it with a Python project, using SQLAlchemy and Docker Compose, which further automates this example. You can run this example using a single command: docker-compose up</description>
      <content:encoded>&lt;p&gt;If you read - &lt;a href="https://percona.community/blog/2023/03/13/using-the-json-data-type-with-mysql-8/" target="_blank" rel="noopener noreferrer"&gt;Using the JSON data type with MySQL 8 - Part I&lt;/a&gt;, you will see that inserting data into &lt;strong&gt;MySQL&lt;/strong&gt; of &lt;strong&gt;JSON&lt;/strong&gt; type is a very common and effective practice. Now we’ll see how to do it with a &lt;strong&gt;Python&lt;/strong&gt; project, using &lt;strong&gt;SQLAlchemy&lt;/strong&gt; and &lt;strong&gt;Docker Compose&lt;/strong&gt;, which further automates this example. You can run this example using a single command: &lt;strong&gt;docker-compose up&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before getting down to work, we will review some important concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Percona Server for MySQL&lt;/strong&gt; is an open source, drop-in replacement for MySQL Community that provides better performance, more scalability, and enhanced security features.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SQLAlchemy&lt;/strong&gt; is a library that allows us to communicate between Python programs and databases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt; is a tool for defining and running multi-container Docker applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s start with the structure of this project:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/04/01-mjii-folders.jpg" alt="Project folder structure" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We have a folder called &lt;strong&gt;app&lt;/strong&gt; which contains the &lt;strong&gt;db.py&lt;/strong&gt; file, and this is where we create the &lt;strong&gt;library&lt;/strong&gt; database and establish the connection to it.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;db_user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; os.environ&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'DB_USER'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;db_password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; os.environ&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'DB_PASSWORD'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; create_engine&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"mysql+pymysql://{db_user}:{db_password}@db:3306/library"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
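&lt;p&gt;As a side note, os.environ['DB_USER'] raises a KeyError when the variable is not set. A minimal sketch (the default values here are illustrative, not part of the original project) that builds the same connection URL but tolerates a missing variable:&lt;/p&gt;

```python
import os

# .get() with a default avoids a KeyError when the variable is not set
# (the "root" defaults are illustrative, not from the original project).
db_user = os.environ.get("DB_USER", "root")
db_password = os.environ.get("DB_PASSWORD", "root")

# Same SQLAlchemy URL format used in db.py.
url = "mysql+pymysql://{}:{}@db:3306/library".format(db_user, db_password)
print(url)
```

This keeps local runs outside Docker Compose from failing before the engine is even created.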
&lt;p&gt;In this file, we also create the transactions class. This creates the fields for the &lt;strong&gt;library&lt;/strong&gt; database with &lt;strong&gt;SQLAlchemy&lt;/strong&gt;; the attributes we define become the database fields.
We have four attributes: book_id, title, publisher, and labels. The last one (labels) is of the JSON data type.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;class transactions&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nv"&gt;__tablename__&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'book'&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nv"&gt;book_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; Column&lt;span class="o"&gt;(&lt;/span&gt;Integer, &lt;span class="nv"&gt;primary_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nv"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; Column&lt;span class="o"&gt;(&lt;/span&gt;String&lt;span class="o"&gt;(&lt;/span&gt;50&lt;span class="o"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nv"&gt;publisher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; Column&lt;span class="o"&gt;(&lt;/span&gt;String&lt;span class="o"&gt;(&lt;/span&gt;50&lt;span class="o"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nv"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; Column&lt;span class="o"&gt;(&lt;/span&gt;JSON&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; def __init__&lt;span class="o"&gt;(&lt;/span&gt;self, book_id, title, publisher, labels&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; self.book_id &lt;span class="o"&gt;=&lt;/span&gt; book_id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; self.title &lt;span class="o"&gt;=&lt;/span&gt; title
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; self.publisher &lt;span class="o"&gt;=&lt;/span&gt; publisher
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; self.labels &lt;span class="o"&gt;=&lt;/span&gt; labels
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;base.metadata.create_all&lt;span class="o"&gt;(&lt;/span&gt;engine&lt;span class="o"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now let’s review the Python script called &lt;strong&gt;insert.py&lt;/strong&gt;, where we use the transactions class to insert data into the database.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;import db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;from sqlalchemy.orm import sessionmaker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;Session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; sessionmaker&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;db.engine&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; Session&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;tr1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; db.transactions&lt;span class="o"&gt;(&lt;/span&gt;1,&lt;span class="s1"&gt;'Green House'&lt;/span&gt;, &lt;span class="s1"&gt;'Joe Monter'&lt;/span&gt;, &lt;span class="s1"&gt;'{"about" : {"gender": "action", "cool": true, "notes": "labeled"}}'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;session.add&lt;span class="o"&gt;(&lt;/span&gt;tr1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;session.commit&lt;span class="o"&gt;()&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
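&lt;p&gt;The labels argument in insert.py is a hand-written JSON string. As a small sketch (not from the original project), the same payload can be built as a Python dict and serialized with the standard json module, which catches quoting mistakes early; SQLAlchemy’s JSON column type also accepts the dict directly:&lt;/p&gt;

```python
import json

# The same document insert.py passes as a literal string, built as a dict.
labels = {"about": {"gender": "action", "cool": True, "notes": "labeled"}}

# json.dumps guarantees valid JSON quoting and lowercase true/false.
labels_json = json.dumps(labels)
print(labels_json)

# Round-trip back to verify the structure survived.
assert json.loads(labels_json)["about"]["cool"] is True
```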
&lt;p&gt;Now let’s explore the &lt;strong&gt;docker-compose.yaml&lt;/strong&gt; file. It defines two services: db and api.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;version: &lt;span class="s2"&gt;"3.8"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;services:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; api:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; build: .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; container_name: api
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; depends_on:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; db:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; condition: service_healthy
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; db:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: percona/percona-server:8.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; container_name: db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; restart: always
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; environment:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MYSQL_USER: root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MYSQL_ROOT_PASSWORD: root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MYSQL_DATABASE: library
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; healthcheck:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; test: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CMD"&lt;/span&gt;, &lt;span class="s2"&gt;"mysqladmin"&lt;/span&gt;, &lt;span class="s2"&gt;"ping"&lt;/span&gt;, &lt;span class="s2"&gt;"-h"&lt;/span&gt;, &lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; timeout: 20s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; retries: &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - my-db:/var/lib/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - &lt;span class="s2"&gt;"3306:3306"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; expose:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - &lt;span class="s2"&gt;"3306"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Names for volume&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; my-db:&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;strong&gt;db&lt;/strong&gt; service uses the &lt;strong&gt;Percona Server for MySQL&lt;/strong&gt; image (percona/percona-server:8.0) for the database and has a healthcheck that confirms when the database is started and ready to receive requests.
The &lt;strong&gt;api&lt;/strong&gt; service depends on the &lt;strong&gt;db&lt;/strong&gt; service to start. It is built from a Dockerfile that packages the Python applications (db.py and insert.py), so we can insert data into the database as soon as it is ready.&lt;/p&gt;
&lt;p&gt;It’s time to see the example in action; let’s change into the &lt;strong&gt;json-mysql&lt;/strong&gt; project directory and run &lt;strong&gt;docker-compose up -d&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Once this is done, we can connect to the database and query the table without needing to go inside the container with the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -i db mysql -uroot -proot &lt;span class="o"&gt;&lt;&lt;&lt;&lt;/span&gt; &lt;span class="s2"&gt;"use library;show tables;select \* from book;describe book;"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We can check the data types of our fields and the inserted data. You will also see the “labels” field with the JSON data type.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;book_id title publisher labels
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 Green House Joe Monter &lt;span class="s2"&gt;"{\\"&lt;/span&gt;about&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;" : {\\"&lt;/span&gt;gender&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": \\"&lt;/span&gt;action&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;", \\"&lt;/span&gt;cool&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": true, \\"&lt;/span&gt;notes&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": \\"&lt;/span&gt;labeled&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;"}}"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2 El camino Daniil Zotl &lt;span class="s2"&gt;"{\\"&lt;/span&gt;about&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;" : {\\"&lt;/span&gt;gender&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": \\"&lt;/span&gt;documental&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;", \\"&lt;/span&gt;cool&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": true, \\"&lt;/span&gt;notes&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": \\"&lt;/span&gt;labeled&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;"}}"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;3 London Bridge Mario Mesa &lt;span class="s2"&gt;"{\\"&lt;/span&gt;about&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;" : {\\"&lt;/span&gt;gender&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": \\"&lt;/span&gt;drama&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;", \\"&lt;/span&gt;cool&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": true, \\"&lt;/span&gt;notes&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;": \\"&lt;/span&gt;labeled&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;"}}"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Field Type Null Key Default Extra
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;book_id int NO PRI NULL auto_increment
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;title varchar&lt;span class="o"&gt;(&lt;/span&gt;50&lt;span class="o"&gt;)&lt;/span&gt; YES NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;publisher varchar&lt;span class="o"&gt;(&lt;/span&gt;50&lt;span class="o"&gt;)&lt;/span&gt; YES NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;labels json YES NULL&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Use “docker compose ps” to see your services running. In this case, we have the “db” service running, which is for the database, and we have “api” with the state “exited,” which means the scripts that create the “library” database and insert its data have finished successfully.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME COMMAND SERVICE STATUS PORTS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;api &lt;span class="s2"&gt;"/bin/sh -c 'bash -C…"&lt;/span&gt; api exited &lt;span class="o"&gt;(&lt;/span&gt;0&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;db &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt; db running &lt;span class="o"&gt;(&lt;/span&gt;healthy&lt;span class="o"&gt;)&lt;/span&gt; 0.0.0.0:3306-&gt;3306/tcp, 33060/tcp&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This was an example of inserting JSON data into MySQL using SQLAlchemy in Python, with Docker Compose for deployment.&lt;/p&gt;
&lt;p&gt;You can find the project on &lt;a href="https://github.com/edithturn/json-mysql.git"&gt;GitHub&lt;/a&gt;. If you see a way to make it better, I’d be happy to hear it so I can improve this project.&lt;/p&gt;
&lt;p&gt;You can explore more about &lt;a href="https://www.percona.com/software/mysql-database/percona-server" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt;, and if you want to see how this project started, check &lt;a href="https://percona.community/blog/2023/03/13/using-the-json-data-type-with-mysql-8/" target="_blank" rel="noopener noreferrer"&gt;Using the JSON data type with MySQL 8 - Part I&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>JSON</category>
      <category>MySQL</category>
      <category>Databases</category>
      <category>Open Source</category>
      <media:thumbnail url="https://percona.community/blog/2023/04/00-mjii-intro_hu_cb63b3892f2127a0.jpg"/>
      <media:content url="https://percona.community/blog/2023/04/00-mjii-intro_hu_9e1506643b3b38f0.jpg" medium="image"/>
    </item>
    <item>
      <title>How a Database Monitoring Tool Can Help a Developer. The Story of One Mistake.</title>
      <link>https://percona.community/blog/2023/04/07/how-a-database-monitoring-tool-can-help-a-developer.-the-story-of-one-mistake./</link>
      <guid>https://percona.community/blog/2023/04/07/how-a-database-monitoring-tool-can-help-a-developer.-the-story-of-one-mistake./</guid>
      <pubDate>Fri, 07 Apr 2023 00:00:00 UTC</pubDate>
      <description>I will tell you the real story of using database monitoring tools when developing an application. I will show you an example of how I managed to detect and fix a problem in the application.</description>
      <content:encoded>&lt;p&gt;I will tell you the real story of using database monitoring tools when developing an application. I will show you an example of how I managed to detect and fix a problem in the application.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;A small clarification: the real story from my development practice happened a little more than a week ago, but for this article I took graphs from the final debugging, so that they show the correct sequence and fit the explanation and demonstration. In reality, I went out for coffee several times and thought for a long time about what the monitoring graphs were showing :)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/04/start-new-feature_hu_e388473634f50f2c.jpg 480w, https://percona.community/blog/2023/04/start-new-feature_hu_6ee4b99be8b8f5d.jpg 768w, https://percona.community/blog/2023/04/start-new-feature_hu_c53b7015cc9294ce.jpg 1400w"
src="https://percona.community/blog/2023/04/start-new-feature.jpg" alt="How a Database Monitoring Tool Can Help a Developer" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/04/pmm-image-1.jpg" alt="How a Database Monitoring Tool Can Help a Developer" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="about-the-app-and-the-process"&gt;About the app and the process&lt;/h2&gt;
&lt;p&gt;I am developing a PHP application using MongoDB as a database. The application is lightweight, and most of the load falls on the database. I have implemented functions at the application level to adjust the number of queries, as the application can quickly load the database to 100%.&lt;/p&gt;
&lt;p&gt;For development, I use several small dev instances in AWS, use Percona Server for MongoDB with three nodes as a database, and have Percona Monitoring and Management (PMM) installed for monitoring the databases.&lt;/p&gt;
&lt;p&gt;My development process consists of the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I develop a new feature and run it on the dev server for testing.&lt;/li&gt;
&lt;li&gt;I check the profiling on the PHP side to confirm there is no memory leak and that I am happy with the speed.&lt;/li&gt;
&lt;li&gt;I check the database monitoring to ensure everything works fine.&lt;/li&gt;
&lt;li&gt;I debug the feature, setting the number and types of queries in the function to balance the number of queries and the load on the database, if necessary.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="adding-new-functionality-to-the-application"&gt;Adding new functionality to the application&lt;/h2&gt;
&lt;p&gt;So I started the application and got ready to run the new feature. The feature was getting information from open sources, processing it, and saving it to the database. The second part of the functionality went through all the saved documents and did some additional processing.&lt;/p&gt;
&lt;p&gt;At this point, the application already had a lot of features that loaded the CPU of the Primary node to 25-40%, and everything was running stably. I decided to keep a performance reserve, as I planned to add new features.&lt;/p&gt;
&lt;p&gt;I checked several dashboards, and there were no anomalies or changes. PMM has many dashboards and charts, and I will only show a few.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/04/pmm-image-2_hu_48e6a15291707376.jpg 480w, https://percona.community/blog/2023/04/pmm-image-2_hu_4a5f1024da7cfc48.jpg 768w, https://percona.community/blog/2023/04/pmm-image-2_hu_97352fcff0751432.jpg 1400w"
src="https://percona.community/blog/2023/04/pmm-image-2.jpg" alt="Adding new functionality to the application" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I saved the changes with the new feature and pushed it to the dev server to make it work. Then I checked that the function started without errors, and the result was visible in the database. I use MongoDB Compass to check the result of a database entry.&lt;/p&gt;
&lt;h2 id="something-has-gone-differently-than-planned"&gt;Something has gone differently than planned.&lt;/h2&gt;
&lt;p&gt;I waited a few minutes and rechecked the dashboard. At first glance, the main screen was fine. However, the processing speed alarmed me: the number of operations had mostly stayed the same.&lt;/p&gt;
&lt;p&gt;I scrolled down through the various charts on the dashboard and saw an anomaly.&lt;/p&gt;
&lt;p&gt;The latency increased, and the app loaded the instance to 100% CPU.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/04/pmm-image-3_hu_5c9ea71ea64a484f.jpg 480w, https://percona.community/blog/2023/04/pmm-image-3_hu_85695c9e2820b9b9.jpg 768w, https://percona.community/blog/2023/04/pmm-image-3_hu_636f194f3cab85.jpg 1400w"
src="https://percona.community/blog/2023/04/pmm-image-3.jpg" alt="Something has gone differently than planned" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/04/pmm-image-4_hu_891228e9918cb0a6.jpg 480w, https://percona.community/blog/2023/04/pmm-image-4_hu_a0a21a0133c958f.jpg 768w, https://percona.community/blog/2023/04/pmm-image-4_hu_9b942b0fe9455c23.jpg 1400w"
src="https://percona.community/blog/2023/04/pmm-image-4.jpg" alt="Something has gone differently than planned" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I made a test run on the application side and checked the profiler there, too. The app worked poorly, and queries were slow.&lt;/p&gt;
&lt;h2 id="finding-the-cause-of-the-problem"&gt;Finding the cause of the problem&lt;/h2&gt;
&lt;p&gt;I knew the reason was the new feature and immediately rolled back the last changes.&lt;/p&gt;
&lt;p&gt;I had a rough idea of where the problem might be, made a few changes, and started again.&lt;/p&gt;
&lt;p&gt;I did it several times, but the result was the same (the CPU was loaded at 100%).&lt;/p&gt;
&lt;p&gt;I selected the period with the load and used the Query Analytics function built into the monitoring.
Query Analytics shows a list of queries sorted by load or execution speed. Some of the queries to the Pages collection generated 90% of the load, and their Query Time was more than 3 minutes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/04/pmm-qan_hu_52cebfd6dc9a9489.jpg 480w, https://percona.community/blog/2023/04/pmm-qan_hu_87ebf89815b2406a.jpg 768w, https://percona.community/blog/2023/04/pmm-qan_hu_a5c9b2e9b0531c7b.jpg 1400w"
src="https://percona.community/blog/2023/04/pmm-qan.jpg" alt="Percona Monitoring and Management PMM - MongoDB - QAN" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In Query Analytics, you can find slow queries, see their details, and then debug them in the application.&lt;/p&gt;
&lt;h2 id="fixing-the-problem"&gt;Fixing the problem&lt;/h2&gt;
&lt;p&gt;I made a few changes that fixed the problem.&lt;/p&gt;
&lt;p&gt;The first problem was the indexes. I create indexes from within the application using the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$app['db']-&gt;CollectionName-&gt;createIndex(['index_key' =&gt; 1]);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Since the application uses many different collections and queries with conditions on various fields and with or without sorting, I have a lot of indexes.&lt;/p&gt;
&lt;p&gt;I made a typo in this case, and the index was not created correctly.&lt;/p&gt;
&lt;p&gt;After the indexes were created correctly, a few quick runs were enough to tune the number of queries and bring the CPU load down to around 50%.&lt;/p&gt;
&lt;p&gt;You can see the final chart after debugging and fixing the problem.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/04/pmm-image-5.jpg" alt="Percona Monitoring and Management PMM - Fixing the problem" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/04/pmm-image-6_hu_81842427908c9ee6.jpg 480w, https://percona.community/blog/2023/04/pmm-image-6_hu_3865e12470808635.jpg 768w, https://percona.community/blog/2023/04/pmm-image-6_hu_3fae836c84c0dfa9.jpg 1400w"
src="https://percona.community/blog/2023/04/pmm-image-6.jpg" alt="Percona Monitoring and Management PMM - Fixing the problem" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Don’t forget to add indexes and make sure they work.&lt;/p&gt;
&lt;p&gt;I am a simple developer who can make mistakes and run different experiments. Installing the monitoring was one of those experiments; previously, I just focused on the speed of the PHP script. From time to time, I looked at the monitoring dashboard in the AWS control panel, but it gives less information, only about the instance itself, without the ability to investigate in detail.&lt;/p&gt;
&lt;p&gt;So, &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;PMM&lt;/a&gt; has great tools for debugging and finding “bottlenecks” in databases. I recommend installing and trying database monitoring with PMM if your application uses MySQL, PostgreSQL, or MongoDB.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>PMM</category>
      <category>Monitoring</category>
      <media:thumbnail url="https://percona.community/blog/2023/04/start-new-feature_hu_ed15b8daaede0d34.jpg"/>
      <media:content url="https://percona.community/blog/2023/04/start-new-feature_hu_6276e328e7492af0.jpg" medium="image"/>
    </item>
    <item>
      <title>How to prevent unauthorized users from connecting to ProxySQL</title>
      <link>https://percona.community/blog/2023/03/30/how-to-prevent-unauthorized-users-from-connecting-to-proxysql/</link>
      <guid>https://percona.community/blog/2023/03/30/how-to-prevent-unauthorized-users-from-connecting-to-proxysql/</guid>
      <pubDate>Thu, 30 Mar 2023 00:00:00 UTC</pubDate>
      <description>ProxySQL is a great load balancer, but it suffers from some shortcomings in the management of MySQL users. ProxySQL provides a firewall which, in my case, is not complete enough to properly manage users and secure their access. This firewall does not accept subnets and leaves unauthorized connections open in ProxySQL, so we cannot be sure of not suffering a DDoS attack on our ProxySQL instance. In this article, I will explain how I managed to overcome this problem.</description>
      <content:encoded>&lt;p&gt;ProxySQL is a great load balancer, but it suffers from some shortcomings in the management of MySQL users. ProxySQL provides a firewall which, in my case, is not complete enough to properly manage users and secure their access. This firewall does not accept subnets and leaves unauthorized connections open in ProxySQL, so we cannot be sure of not suffering a DDoS attack on our ProxySQL instance. In this article, I will explain how I managed to overcome this problem.&lt;/p&gt;
&lt;h2 id="reminder-of-the-principle-of-connection-through-proxysql"&gt;Reminder of the principle of connection through ProxySQL&lt;/h2&gt;
&lt;p&gt;To understand what follows, bear in mind how ProxySQL connects to MySQL. The user connects to ProxySQL, which then establishes the connection to MySQL. For this, ProxySQL maintains MySQL users in its internal database: the MySQL user names, their passwords, and their default destination hostgroup are entered in the mysql_users table. At each connection request to a MySQL server, ProxySQL checks for the user in the mysql_users table and connects to MySQL with that same user.&lt;/p&gt;
&lt;p&gt;Something is missing, isn’t it?&lt;/p&gt;
&lt;p&gt;Yes, the host associated with each MySQL user is missing!&lt;/p&gt;
&lt;p&gt;In MySQL, users are configured so that they can only connect from ProxySQL. In ProxySQL, we don’t have this information: by default, any user can connect to ProxySQL from any IP address, and ProxySQL will open connections to MySQL on their behalf. As I mentioned in the introduction, ProxySQL provides a firewall to overcome this problem, but it is not really satisfactory.&lt;/p&gt;
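&lt;p&gt;You can see this for yourself on the ProxySQL admin interface (port 6032): the mysql_users table simply has no host column, so nothing in it says where a user may connect from.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- On the ProxySQL admin interface: note the absence of any host/client_addr column
SELECT username, active, default_hostgroup FROM mysql_users;
&lt;/code&gt;&lt;/pre&gt;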
&lt;h2 id="prevent-connection-to-proxysql-with-an-unauthorized-user"&gt;Prevent connection to ProxySQL with an unauthorized user&lt;/h2&gt;
&lt;p&gt;In this part, our ProxySQL instance will allow user &lt;em&gt;bob&lt;/em&gt; to connect to the MySQL (&lt;em&gt;mysql_server&lt;/em&gt;) instance. &lt;em&gt;Bob&lt;/em&gt; is allowed to connect from &lt;em&gt;IP_1&lt;/em&gt; but not from &lt;em&gt;IP_2&lt;/em&gt;. The ProxySQL instance is running on &lt;em&gt;IP_PROXYSQL&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In MySQL, user &lt;em&gt;bob&lt;/em&gt; was created like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'bob'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'IP_PROXYSQL'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;IDENTIFIED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;BY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'PASSWORD'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In ProxySQL, let’s create &lt;em&gt;bob&lt;/em&gt; like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_users&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;default_hostgroup&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'bob'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'PASSWORD'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;LOAD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;USERS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RUNTIME&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;SAVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;USERS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DISK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;and declare the MySQL server like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_servers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hostgroup_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'mysql_server'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;LOAD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SERVERS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RUNTIME&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;SAVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SERVERS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DISK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you may have noticed, I didn’t declare the same hostgroup when creating the user and the server. Hostgroup 0 does not correspond to any MySQL server. By default, our user bob will therefore be able to connect to ProxySQL but his queries will not be redirected to any MySQL server. Let’s move on to host management. I will declare each authorized host in the mysql_query_rules table. In ProxySQL, this table is used, among other things, to assign different parameters to a connection. You see what I mean? Let’s declare our rule!&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_query_rules&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;active&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;client_addr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;destination_hostgroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'bob'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'IP_1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;LOAD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;QUERY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RULES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RUNTIME&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;SAVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;QUERY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RULES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DISK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I have just declared a rule indicating that all queries coming from user bob connected from &lt;em&gt;IP_1&lt;/em&gt; must be routed to hostgroup 1. And, icing on the cake, &lt;em&gt;IP_1&lt;/em&gt; can be a subnet (&lt;em&gt;IP_1%&lt;/em&gt;), which would not have been possible with the firewall. From now on, bob will be able to run queries from IP_1 and get results from MySQL. If bob runs a query from IP_2, he will not get a result, since the hostgroup queried will be 0, which does not correspond to any MySQL server. However, this is still not satisfactory: nothing prevents bob from opening a very large number of connections from &lt;em&gt;IP_2&lt;/em&gt;. They won’t reach any MySQL server, but they may be able to crash my ProxySQL instance. It’s time to deal with those unauthorized connections!&lt;/p&gt;
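&lt;p&gt;For example, if bob’s application servers all live in one subnet (the address below is purely illustrative), a single rule with a % wildcard in client_addr authorizes the whole range:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Hypothetical subnet 10.0.1.0/24: the % wildcard matches any final octet
INSERT INTO mysql_query_rules (rule_id,active,username,client_addr,destination_hostgroup,apply) VALUES (2,1,'bob','10.0.1.%',1,1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
&lt;/code&gt;&lt;/pre&gt;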
&lt;p&gt;ProxySQL provides a scheduler which will be very useful here. The scheduler lets us run a bash script every x milliseconds. I created this script in the ProxySQL datadir:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;kill_connections.sh&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="cp"&gt;#!/bin/bash
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="cp"&gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PROXYSQL_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PROXYSQL_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PROXYSQL_HOSTNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PROXYSQL_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"6032"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -u&lt;span class="nv"&gt;$PROXYSQL_USERNAME&lt;/span&gt; -p&lt;span class="nv"&gt;$PROXYSQL_PASSWORD&lt;/span&gt; -h&lt;span class="nv"&gt;$PROXYSQL_HOSTNAME&lt;/span&gt; -P&lt;span class="nv"&gt;$PROXYSQL_PORT&lt;/span&gt; -e &lt;span class="s2"&gt;"SELECT SessionID,user,cli_host FROM stats_mysql_processlist WHERE hostgroup = 0"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; SessionID user cli_host&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$SessionID&lt;/span&gt; !&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SessionID"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nv"&gt;enabled_account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$(&lt;/span&gt;mysql -u&lt;span class="nv"&gt;$PROXYSQL_USERNAME&lt;/span&gt; -p&lt;span class="nv"&gt;$PROXYSQL_PASSWORD&lt;/span&gt; -h&lt;span class="nv"&gt;$PROXYSQL_HOSTNAME&lt;/span&gt; -P&lt;span class="nv"&gt;$PROXYSQL_PORT&lt;/span&gt; -se&lt;span class="s2"&gt;"SELECT count(*) FROM mysql_query_rules WHERE username = '&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;' and '&lt;/span&gt;&lt;span class="nv"&gt;$cli_host&lt;/span&gt;&lt;span class="s2"&gt;' LIKE client_addr;"&lt;/span&gt;&lt;span class="k"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$enabled_account&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; -eq &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql -u&lt;span class="nv"&gt;$PROXYSQL_USERNAME&lt;/span&gt; -p&lt;span class="nv"&gt;$PROXYSQL_PASSWORD&lt;/span&gt; -h&lt;span class="nv"&gt;$PROXYSQL_HOSTNAME&lt;/span&gt; -P&lt;span class="nv"&gt;$PROXYSQL_PORT&lt;/span&gt; -e &lt;span class="s2"&gt;"KILL CONNECTION &lt;/span&gt;&lt;span class="nv"&gt;$SessionID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;done&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This script lists all the connections opened in ProxySQL on hostgroup 0. It then checks whether the connected user/host pair is authorized using the mysql_query_rules table. If not, the connection is killed. Let’s activate the scheduler in ProxySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;arg1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;arg2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;interval_ms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'kill_connections.sh'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'proxysql_admin_user'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'proxysql_admin_password'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;LOAD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SCHEDULER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RUNTIME&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;SAVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SCHEDULER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DISK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now, any connection opened in ProxySQL but not authorized will be automatically killed!&lt;/p&gt;
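&lt;p&gt;To see what the scheduler script sees, you can list the connections currently parked on hostgroup 0 from the admin interface; this is the same query the script runs:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- Connections on hostgroup 0 are candidates for being killed
SELECT SessionID, user, cli_host FROM stats_mysql_processlist WHERE hostgroup = 0;
&lt;/code&gt;&lt;/pre&gt;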
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;WARNING:&lt;/em&gt;&lt;/strong&gt; unfortunately, the ProxySQL scheduler does not work like the MySQL event scheduler. The connection has to be opened from a .sh file, which means passing the ProxySQL admin credentials to it; passed as command-line arguments, these credentials are visible to anyone monitoring the server’s process list. To avoid this problem, I advise you to put the credentials directly in the .sh file and to protect that file properly on your server.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id="additional-information"&gt;Additional Information&lt;/h2&gt;
&lt;p&gt;When I deploy ProxySQL, I always create a rule with a very high rule_id to block unauthorized connections; this is an additional barrier in case I forget something:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_query_rules&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;active&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;error_msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;destination_hostgroup&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;999999999&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'ProxySQL : Access denied'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;LOAD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;QUERY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RULES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RUNTIME&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;SAVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;QUERY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RULES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DISK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This rule redirects unauthorized connections to hostgroup 0 (in case a user was ever declared in mysql_users with a default hostgroup leading to a real MySQL server) and returns an error message for each query.
I create all my host-management rules with a rule_id &gt;= 10000. This leaves me 9999 free slots if I ever want to create other, higher-priority rules in mysql_query_rules.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_query_rules&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;active&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;client_addr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;IFNULL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;MAX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_query_rules&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;WHERE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span 
class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;MAX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mysql_query_rules&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;rule_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9999&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'USERNAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'HOST'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;LOAD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;QUERY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RULES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RUNTIME&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;SAVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;QUERY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RULES&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DISK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Don’t hesitate to ask me questions; I’ll be happy to answer them.&lt;/p&gt;</content:encoded>
      <author>Valentin TRAËN</author>
      <category>Databases</category>
      <category>MySQL</category>
      <category>ProxySQL</category>
      <category>LoadBalancer</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/proxysql_user_management_cover_hu_56dc53c3bd9592d8.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/proxysql_user_management_cover_hu_4a261bdecc8d22a5.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.36 preview release</title>
      <link>https://percona.community/blog/2023/03/20/preview-release/</link>
      <guid>https://percona.community/blog/2023/03/20/preview-release/</guid>
      <pubDate>Mon, 20 Mar 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.36 preview release Hello folks! Percona Monitoring and Management (PMM) 2.36 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-236-preview-release"&gt;Percona Monitoring and Management 2.36 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.36 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;You can find the Release Notes &lt;a href="https://two-36-0-pr-1011.onrender.com/release-notes/2.36.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker-installation"&gt;Percona Monitoring and Management server docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.36.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; To use the DBaaS functionality during the Percona Monitoring and Management preview release, add the following environment variable when starting the PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.36.0-rc&lt;/code&gt;&lt;/p&gt;
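&lt;p&gt;As a rough sketch (not the official instructions; follow the linked documentation for the full procedure), starting the preview server with that variable set could look like this. The port mapping and container name are assumptions:&lt;/p&gt;

```shell
# Hypothetical example: start the PMM preview server with the DBaaS
# client image pinned. Port mapping and container name are assumptions;
# see the linked installation instructions for the supported setup.
docker run -d \
  -p 443:443 \
  --name pmm-server \
  -e PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.36.0-rc \
  perconalab/pmm-server:2.36.0-rc
```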
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.36 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-5090.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via its package manager.&lt;/p&gt;
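&lt;p&gt;On a Debian-based host, for example, the two steps might look like this (the package-manager commands are assumptions for your OS; use yum/dnf equivalents on RPM-based systems):&lt;/p&gt;

```shell
# Hypothetical example for Debian/Ubuntu; adapt to your package manager.
sudo percona-release enable percona testing
sudo apt update
sudo apt install -y pmm2-client
```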
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.36.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.36.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-0ce04c507ec1187b1&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us in &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;https://forums.percona.com/&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <category>Release</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>How to Develop a Simple Web Application Using Docker, Nginx, PHP, and Percona Server for MongoDB</title>
      <link>https://percona.community/blog/2023/03/17/how-to-develop-a-simple-web-application-using-docker-nginx-php-and-percona-server-for-mongodb/</link>
      <guid>https://percona.community/blog/2023/03/17/how-to-develop-a-simple-web-application-using-docker-nginx-php-and-percona-server-for-mongodb/</guid>
      <pubDate>Fri, 17 Mar 2023 00:00:00 UTC</pubDate>
      <description>I’m developing an application that takes data from different sources, processes it, and prepares reports. In this series of articles, I will explain how to install and configure the tools, application, and database to develop and run the application.</description>
      <content:encoded>&lt;p&gt;I’m developing an application that takes data from different sources, processes it, and prepares reports. In this series of articles, I will explain how to install and configure the tools, application, and database to develop and run the application.&lt;/p&gt;
&lt;h2 id="about-the-application-and-choice-of-tools"&gt;About the application and choice of tools&lt;/h2&gt;
&lt;p&gt;The application I develop gets data from GitHub, Jira, and websites via their APIs, processes it, and creates reports according to the desired requirements.&lt;/p&gt;
&lt;p&gt;The application is developed with PHP version 8+ and Nginx as a web server, and &lt;a href="https://www.percona.com/software/mongodb/percona-server-for-mongodb?utm_source=percona-community&amp;utm_medium=blog&amp;utm_campaign=daniil" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt; as a database. For local development, I use Docker and Docker-compose.&lt;/p&gt;
&lt;p&gt;I use PHP and Nginx because I’m familiar with them, and it’s a popular stack with lots of documentation and examples. Docker was chosen for the same reason. I used to install Nginx/Apache + PHP + Database in the same container, but over time I found Docker-compose and separate containers more convenient, so now I use docker-compose.&lt;/p&gt;
&lt;p&gt;My application includes the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Web application to run in a browser and display reports.&lt;/li&gt;
&lt;li&gt;Console scripts in PHP for bulk data updates, run in the background on the server.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As a database for this application, I use MongoDB. There are objective reasons for that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The API I query returns data page by page in JSON format. Since I need to run many queries, my script fetches all the data beforehand and stores it in MongoDB, so reports can be built without going back to the API.&lt;/li&gt;
&lt;li&gt;MongoDB is suitable for storing data in JSON and queries with different conditions.&lt;/li&gt;
&lt;li&gt;The data schema from the API can be very different and flexible depending on the service and query. MongoDB allows me to save responses from the API to the database as it is, without complicated processing or preconfiguring the database schema. I didn’t want to spend much time setting up the database table schema.&lt;/li&gt;
&lt;li&gt;Installing and configuring MongoDB for development is easy and requires no special skills.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I use Percona Server for MongoDB because it’s free and open source, and when I eventually need backups and monitoring, Percona has ready-made solutions for both.&lt;/p&gt;
&lt;p&gt;First, I will talk about my development configuration. I am starting from scratch using a minimal PHP application as an example.&lt;/p&gt;
&lt;h2 id="preparing-docker-and-docker-compose"&gt;Preparing Docker and Docker-compose&lt;/h2&gt;
&lt;h3 id="dockerfile-for-phm--mongodb"&gt;Dockerfile for PHM + MongoDB&lt;/h3&gt;
&lt;p&gt;For PHP to work with MongoDB, we need to install PHP with the required extensions.
I prepared a Dockerfile for PHP 8.2 and used php-fpm because I use Nginx as a web server.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Dockerfile&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM php:8.2-fpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;RUN apt-get -y update \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &amp;&amp; apt-get install -y libssl-dev pkg-config libzip-dev unzip git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;RUN pecl install zlib zip mongodb \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &amp;&amp; docker-php-ext-enable zip \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &amp;&amp; docker-php-ext-enable mongodb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Install composer (updated via entry point)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I also use Composer, so I install it right away in the image via the Dockerfile.&lt;/p&gt;
&lt;p&gt;Now I build the image from the Dockerfile so I can use it in Docker Compose:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker build -t php8.2-fpm-mongo .&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here, &lt;code&gt;php8.2-fpm-mongo&lt;/code&gt; is the name of the image that will be used in docker-compose.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-6_hu_d01c326e6e73214c.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-6_hu_2c67a09d12a8250c.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-6_hu_6b9669afb6ffd19d.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-6.jpg" alt="Dockerfile for PHP + MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="docker-composeyml-to-test-the-web-app"&gt;Docker-compose.yml to test the web app&lt;/h3&gt;
&lt;p&gt;The next step is to create the docker-compose.yml file.
I will add the Nginx web server and my PHP image with the MongoDB extension to the docker-compose file.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;docker-compose.yml&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;version: '3.9'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;services:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; web:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: nginx:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - '80:80'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./app:/var/www/html
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./config/default.conf:/etc/nginx/conf.d/default.conf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; php-fpm:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: php8.2-fpm-mongo
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./app:/var/www/html&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The application is located in the app/ folder in the same directory as docker-compose.yml.&lt;/p&gt;
&lt;p&gt;For now, it will be a very simple index.php script that prints information about the PHP installation.&lt;/p&gt;
&lt;p&gt;Create an app directory and an index.php file.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;app/index.php&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;?php
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;phpinfo();&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Structure of files and folders&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ └── index.php
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── config
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ └── default.conf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── Dockerfile
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;└── docker-compose.yml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You will also notice config/default.conf. This is the Nginx configuration for handling requests and passing PHP scripts to php-fpm. Let’s create it too; here is my example of a default.conf file.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;/config/default.conf&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;server {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; listen 80;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; server_name localhost;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; index index.php index.html;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; error_log /var/log/nginx/error.log;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; access_log /var/log/nginx/access.log;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; root /var/www/html;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; rewrite ^/(.*)/$ /$1 permanent;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; location / {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; try_files $uri $uri/ /index.php?$query_string;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; location ~ \.php$ {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; try_files $uri =404;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fastcgi_split_path_info ^(.+\.php)(/.+)$;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fastcgi_pass php-fpm:9000;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fastcgi_index index.php;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; include fastcgi_params;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fastcgi_param PATH_INFO $fastcgi_path_info;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fastcgi_buffering off;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If we run docker-compose now, we can open &lt;code&gt;localhost&lt;/code&gt; in the browser and see the output of the app/index.php script.&lt;/p&gt;
&lt;p&gt;Run docker-compose&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-4_hu_21a6ba8c75e19294.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-4_hu_b7f76e2ab41bc361.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-4_hu_9eace24a939ba5b1.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-4.jpg" alt="Run docker-compose" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Open localhost in the browser.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/03/PHP-Mongod-1-8.jpg" alt="PHP Info - Localhost - browser" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Stop docker-compose to continue setting up. We haven’t connected MongoDB yet.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose down&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-7_hu_4515c4abb81baea5.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-7_hu_453f3c372a38e7e5.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-7_hu_c77eb195e792d90a.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-7.jpg" alt="Stop docker-compose" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="connect-mongodb-to-our-docker-compose"&gt;Connect MongoDB to our docker-compose&lt;/h3&gt;
&lt;p&gt;Add the new db service to our docker-compose.yml.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;version: '3.9'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;services:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; web:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: nginx:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - '80:80'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./app:/var/www/html
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./config/default.conf:/etc/nginx/conf.d/default.conf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; php-fpm:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: php8.2-fpm-mongo
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./app:/var/www/html
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; environment:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; DB_USERNAME: root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; DB_PASSWORD: secret
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; DB_HOST: mongodb # matches the service with mongodb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mongodb:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: "percona/percona-server-mongodb:6.0.4"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # image: "percona/percona-server-mongodb:6.0.4-3-arm64" # For Apple M1/M2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./data:/data/db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; restart: always
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; environment:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MONGO_INITDB_ROOT_USERNAME: root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MONGO_INITDB_ROOT_PASSWORD: secret
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MONGO_INITDB_DATABASE: tutorial
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ports:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - "27017:27017"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you examine the changes in the docker-compose.yml file carefully, you will notice:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I added the new mongodb service using the Percona Server for MongoDB 6.0.4 image.&lt;/li&gt;
&lt;li&gt;I mount the data/ folder from the same directory as the database volume. It’s convenient for me to easily access DB files, transfer them, and examine them locally.&lt;/li&gt;
&lt;li&gt;I pass environment variables to create a MongoDB root user.&lt;/li&gt;
&lt;li&gt;I also added environment variables in php-fpm to use them to connect to the database in the application.&lt;/li&gt;
&lt;li&gt;The volumes parameter links our local app directory directly into the container, which lets us modify the code and immediately check the result in the browser without restarting the container.&lt;/li&gt;
&lt;/ol&gt;
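&lt;p&gt;As an aside, the DB_* variables passed to the php-fpm service combine into a standard MongoDB connection URI. The shell sketch below (values hardcoded for illustration) shows the string the PHP application will later assemble from getenv():&lt;/p&gt;

```shell
# Illustration only: the env vars from docker-compose.yml form a
# standard MongoDB connection URI. DB_HOST is the docker-compose
# service name, which Docker resolves to the container's address.
DB_USERNAME=root
DB_PASSWORD=secret
DB_HOST=mongodb
echo "mongodb://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:27017"
```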
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; volumes:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ./app:/var/www/html&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s modify our PHP script to check the operation of the database.&lt;/p&gt;
&lt;h2 id="connecting-to-mongodb-in-the-application"&gt;Connecting to MongoDB in the application&lt;/h2&gt;
&lt;h3 id="install-required-php-packages-to-work-with-mongodb"&gt;Install required PHP packages to work with MongoDB&lt;/h3&gt;
&lt;p&gt;Create an app/composer.json file to install and use the required MongoDB libraries and extensions for PHP.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;app/composer.json&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "require": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "mongodb/mongodb": "^1.6",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "ext-mongodb": "^1.6"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Next, connect to the php-fpm container and install the Composer packages.&lt;/p&gt;
&lt;p&gt;Run docker-compose&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker-compose up -d&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Look up the name of the php-fpm container; in my case it is github-php-fpm-1.&lt;/p&gt;
&lt;p&gt;Run the command to connect to the container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker exec -it [php-fpm-container] bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Install the Composer packages described in our composer.json file with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;composer install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-2_hu_d6469f322ba2f6cf.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-2_hu_6274b3fd7bbf365a.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-2_hu_757bfb16a98a1539.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-2.jpg" alt="Install required PHP packages to work with MongoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now we can connect to MongoDB in our PHP application.&lt;/p&gt;
&lt;h3 id="connecting-to-mongodb-in-a-php-application"&gt;Connecting to MongoDB in a PHP application.&lt;/h3&gt;
&lt;p&gt;Now we slightly modify the index.php script to connect to the database and test data recording.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;/app/index.php&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;?php
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// Enabling Composer Packages
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;require __DIR__ . '/vendor/autoload.php';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// Get environment variables
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$local_conf = getenv();
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;define('DB_USERNAME', $local_conf['DB_USERNAME']);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;define('DB_PASSWORD', $local_conf['DB_PASSWORD']);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;define('DB_HOST', $local_conf['DB_HOST']);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// Connect to MongoDB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$db_client = new \MongoDB\Client('mongodb://'. DB_USERNAME .':' . DB_PASSWORD . '@'. DB_HOST . ':27017/');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$db = $db_client-&gt;selectDatabase('tutorial');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// Test insert data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for ($page = 1; $page &lt;= 1000; $page++) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; $data = [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 'page_id' =&gt; $page,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 'title' =&gt; "Page " . $page,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 'date' =&gt; date("m.d.y H:i:s"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 'timestamp' =&gt; time(),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 'mongodb_time' =&gt; new MongoDB\BSON\UTCDateTime(time() * 1000)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ];
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; $updateResult = $db-&gt;pages-&gt;updateOne(
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 'page_id' =&gt; $page // query
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ['$set' =&gt; $data],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ['upsert' =&gt; true]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; );
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; echo $page . " " ;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;echo '&lt;br/&gt;Finish';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;exit;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If we open localhost in the browser, the application will write 1,000 documents from the for loop into the database, displaying the sequential number of each document as it is written.&lt;/p&gt;
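&lt;p&gt;The upsert logic of the loop can be sketched as a toy in-memory model in plain Python (no real MongoDB involved; the dict stands in for the pages collection):&lt;/p&gt;

```python
import time

pages = {}  # toy stand-in for the "pages" collection, keyed by page_id

def upsert(page_id, data):
    # updateOne(..., upsert=True): update the matching document,
    # or insert a new one when no match exists
    doc = pages.setdefault(page_id, {"page_id": page_id})
    doc.update(data)

for page in range(1, 1001):
    upsert(page, {"title": "Page %d" % page, "timestamp": int(time.time())})
```

&lt;p&gt;Running the loop a second time leaves the count at 1,000, which is why the script can be re-run safely.&lt;/p&gt;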
&lt;h2 id="connecting-to-mongodb-via-mongodb-compass"&gt;Connecting to MongoDB via MongoDB Compass&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.mongodb.com/products/compass" target="_blank" rel="noopener noreferrer"&gt;MongoDB Compass&lt;/a&gt; is a handy desktop application to work with MongoDB. I use it to browse databases and collections and create indexes.&lt;/p&gt;
&lt;p&gt;This is a quick way to conveniently look through the written data and check for errors.&lt;/p&gt;
&lt;p&gt;Let’s connect to the database using MongoDB Compass to check that the data is actually written.&lt;/p&gt;
&lt;p&gt;Use localhost as the host and the user/password from docker-compose.&lt;/p&gt;
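&lt;p&gt;As a connection string, that amounts to something like the following (USERNAME and PASSWORD are placeholders; substitute the values from your docker-compose.yml):&lt;/p&gt;

```text
mongodb://USERNAME:PASSWORD@localhost:27017/
```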
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-5_hu_3a3d2b3359736bef.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-5_hu_181f6e88846c9748.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-5_hu_a84d5f8775ebdff5.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-5.jpg" alt="Connecting to MongoDB via MongoDB Compass" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After connecting, you will see 1,000 documents written to the database, and you can run test queries or add an index.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-3_hu_c82f68933043a73d.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-3_hu_b1ff412b6bfc46ce.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-3_hu_3df3a35cefd96cba.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-3.jpg" alt="Connecting to MongoDB via MongoDB Compass" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="dont-forget-indexes-mongodb"&gt;Don’t forget indexes MongoDB&lt;/h2&gt;
&lt;p&gt;If you write and read data on certain fields, make sure to create indexes on those fields.&lt;/p&gt;
&lt;p&gt;For example, if you write 10,000 records with the script we developed above, you will notice how slow it is: it can take about 20 seconds. But if you create an index on the page_id field, the runtime drops roughly tenfold, to about 2 seconds.&lt;/p&gt;
&lt;p&gt;Always create indexes.&lt;/p&gt;
&lt;p&gt;This is not hard to do through MongoDB Compass in the Indexes section of the collection.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/PHP-Mongod-1-1_hu_56c5b94928054a4b.jpg 480w, https://percona.community/blog/2023/03/PHP-Mongod-1-1_hu_551af93396fbc222.jpg 768w, https://percona.community/blog/2023/03/PHP-Mongod-1-1_hu_adffafc44c347f5d.jpg 1400w"
src="https://percona.community/blog/2023/03/PHP-Mongod-1-1.jpg" alt="Connecting to MongoDB via MongoDB Compass" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is also easy to do in our PHP app, using the createIndex() method:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$db-&gt;pages-&gt;createIndex(['page_id' =&gt; 1]);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This creates an index on page_id, since our insert/upsert filters on this field and it acts as a unique key.&lt;/p&gt;
&lt;p&gt;Add it before the for loop, and increase the number of pages to 10k to compare.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// Create an index
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$db-&gt;pages-&gt;createIndex(['page_id' =&gt; 1]);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// Test insert data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for ($page = 1; $page &lt;= 10000; $page++) {&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This will greatly increase the speed at which the script runs.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We set up an environment and developed a PHP script to work with MongoDB.&lt;/p&gt;
&lt;p&gt;In my opinion, it was simple. All the source code is available in my &lt;a href="https://github.com/dbazhenov/nginx-php-mongodb-docker-compose" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To summarize:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We have now installed a standalone &lt;a href="https://www.percona.com/software/mongodb/percona-server-for-mongodb?utm_source=percona-community&amp;utm_medium=blog&amp;utm_campaign=daniil" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt; locally using Docker Compose. We will definitely try this on a separate server, using AWS as an example.&lt;/li&gt;
&lt;li&gt;For production applications, it is recommended to run a replica set with several nodes. We will definitely do that too.&lt;/li&gt;
&lt;li&gt;We will install PMM to monitor the database, see how our script loads it, and examine database queries with QAN and other PMM features.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In the next posts, I will work on improving the application, focusing on database customization. You will learn how to improve the application so that it becomes practically useful: it will fetch data from the GitHub API and write it to the database. We’ll split the application into console scripts and a web part.&lt;/p&gt;
&lt;p&gt;If you are interested in learning more about the PHP application, write in a comment or on the &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;forum&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>MongoDB</category>
      <category>Databases</category>
      <category>Percona</category>
      <category>PHP</category>
      <category>Docker</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/PHP-1_hu_42b5ad181a7edbf6.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/PHP-1_hu_f283ee59e2d55cf3.jpg" medium="image"/>
    </item>
    <item>
      <title>Some Notable Bugfixes in MySQL 8.0.32</title>
      <link>https://percona.community/blog/2023/03/15/some-notable-bugfixes-in-mysql-8.0.32/</link>
      <guid>https://percona.community/blog/2023/03/15/some-notable-bugfixes-in-mysql-8.0.32/</guid>
      <pubDate>Wed, 15 Mar 2023 00:00:00 UTC</pubDate>
      <description>MySQL 8.0.32 came out recently and had some important bugfixes contributed by Perconians. Here is a brief overview of the work done.</description>
      <content:encoded>&lt;p&gt;MySQL 8.0.32 came out recently and had some important bugfixes contributed by Perconians. Here is a brief overview of the work done.&lt;/p&gt;
&lt;h2 id="inconsistent-data-and-gtids-with-mysqldump"&gt;Inconsistent data and GTIDs with mysqldump&lt;/h2&gt;
&lt;p&gt;Marcelo Altmann (Senior Software Engineer) fixed a bug where data and GTIDs backed up by mysqldump were inconsistent. It happened when the options --single-transaction and --set-gtid-purged=ON were both used, because GTIDs on the server could have already increased between the start of the transaction by mysqldump and the fetching of GTID_EXECUTED. Marcelo developed a patch, and it was partially included in the release. Now, in MySQL 8.0.32, a FLUSH TABLES WITH READ LOCK is performed before fetching GTID_EXECUTED, to ensure its value is consistent with the snapshot taken by mysqldump. However, Percona Server for MySQL includes the entire patch, which does not require FLUSH TABLES WITH READ LOCK to work.&lt;/p&gt;
&lt;p&gt;Marcelo also corrected the issue when the MySQL server &lt;a href="https://perconadev.atlassian.net/browse/PS-8303" target="_blank" rel="noopener noreferrer"&gt;exits on ALTER TABLE created an assertion failure: dict0mem.h:2498:pos &lt; n_def&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="fixing-garbled-utf-characters"&gt;Fixing garbled UTF characters&lt;/h2&gt;
&lt;p&gt;Kamil Holubicki (Senior Software Engineer) proposed a patch to fix garbled UTF characters in SHOW ENGINE INNODB STATUS. It happened because the string was truncated and UTF characters (which are multibyte) were cut in the middle, leaving garbage at the end of the string.&lt;/p&gt;
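&lt;p&gt;The mechanics of that bug are easy to reproduce: truncating a UTF-8 string at a fixed byte length can cut a multibyte character in half. A minimal Python sketch of a safe truncation (an illustration, not the actual server patch):&lt;/p&gt;

```python
def truncate_utf8(raw, limit):
    # Slicing bytes can split a multibyte character, leaving garbage
    # at the end of the string; decoding with errors="ignore" drops
    # the incomplete trailing sequence instead.
    return raw[:limit].decode("utf-8", errors="ignore")

raw = "déjà".encode("utf-8")  # 6 bytes: d, é (2 bytes), j, à (2 bytes)
safe = truncate_utf8(raw, 2)  # cuts é in half, result is just "d"
```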
&lt;h2 id="duplicate-table-space-objects-in-56-to-80-upgrade"&gt;Duplicate table space objects in 5.6 to 8.0 upgrade&lt;/h2&gt;
&lt;p&gt;Rahul Malik (Software Engineer) investigated and fixed an issue where an upgrade from MySQL 5.6 to 8.0 crashed with an assertion failure. It happened due to a duplicate tablespace object. All SYS_* tables are loaded, and then their table IDs are shifted. Some SYS tables, such as SYS_ZIP_DICT and SYS_VIRTUAL, can already have IDs greater than 1024 (say, 1028).&lt;/p&gt;
&lt;p&gt;Changing the table ID of SYS_FIELDS from 4 to 1028 would conflict with an existing SYS_ZIP_DICT or SYS_VIRTUAL table that has not been shifted by 1024 yet and still holds ID 1028. Hence, the IDs of these SYS tables need to be changed in reverse order: in the example above, SYS_ZIP_DICT is shifted first, to 1028+1024, and only then is SYS_FIELDS changed to 1028, avoiding the conflict.&lt;/p&gt;
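&lt;p&gt;The reverse-order trick can be illustrated with a toy model of the ID shift (the table names and IDs below mirror the example above; this is an illustration, not InnoDB code):&lt;/p&gt;

```python
# Shift every system-table ID up by 1024, starting from the highest
# current ID, so a shifted low ID never lands on a not-yet-shifted
# high one.
ids = {"SYS_FIELDS": 4, "SYS_ZIP_DICT": 1028}
SHIFT = 1024

for name in sorted(ids, key=ids.get, reverse=True):
    target = ids[name] + SHIFT
    # In ascending order, SYS_FIELDS would move from 4 to 1028 first
    # and collide with the still-unshifted SYS_ZIP_DICT.
    assert target not in ids.values(), "duplicate table ID"
    ids[name] = target
```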
&lt;h2 id="why-open-source-databases-matter"&gt;Why open source databases matter&lt;/h2&gt;
&lt;p&gt;Great work by Marcelo, Kamil, Rahul, and everybody else who contributed to the MySQL 8.0.32 release.&lt;/p&gt;
&lt;p&gt;This is why open source databases are so important. We can all help improve MySQL, and those improvements benefit all users of MySQL.&lt;/p&gt;
&lt;p&gt;Percona is proud to be part of the MySQL community, and we hope you’ll join us in improving MySQL and its surrounding software. Check out our &lt;a href="https://percona.community/contribute/" target="_blank" rel="noopener noreferrer"&gt;contributing page&lt;/a&gt; to find ways to contribute!&lt;/p&gt;</content:encoded>
      <author>Aleksandra Abramova</author>
      <category>MySQL</category>
      <category>Databases</category>
      <category>Open Source</category>
      <category>Release</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/mysql-bugfixes_hu_3963a5ecfa010f5.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/mysql-bugfixes_hu_1fd86d064829157f.jpg" medium="image"/>
    </item>
    <item>
      <title>Using the JSON data type with MySQL 8</title>
      <link>https://percona.community/blog/2023/03/13/using-the-json-data-type-with-mysql-8/</link>
      <guid>https://percona.community/blog/2023/03/13/using-the-json-data-type-with-mysql-8/</guid>
      <pubDate>Mon, 13 Mar 2023 00:00:00 UTC</pubDate>
      <description>If you are a mobile app, frontend, backend, or game developer, you use data types such as string, numeric, or DateTime. You also know that non-relational (NoSQL) databases such as MongoDB, by not being tied to a traditional SQL schema, can read and write data much faster. But MySQL showed that storing the JSON (JavaScript Object Notation) data type can also improve the read and write speed of relational databases.</description>
      <content:encoded>&lt;p&gt;If you are a mobile app, frontend, backend, or game developer, you use data types such as string, numeric, or DateTime. You also know that non-relational (NoSQL) databases such as &lt;strong&gt;MongoDB&lt;/strong&gt;, by not being tied to a traditional &lt;strong&gt;SQL schema&lt;/strong&gt;, can read and write data much faster. But &lt;strong&gt;MySQL&lt;/strong&gt; showed that storing the JSON (JavaScript Object Notation) data type can also improve the read and write speed of relational databases.&lt;/p&gt;
&lt;p&gt;This post will explore the JSON Data type in &lt;a href="https://www.percona.com/software/mysql-database/percona-server" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;One of the key features of &lt;strong&gt;Percona Server&lt;/strong&gt; is support for the &lt;strong&gt;JSON&lt;/strong&gt; data type, which allows JSON documents to be stored within MySQL. It allows for more flexible and efficient storage of semi-structured data (which is also more human-readable) within a relational database.&lt;/p&gt;
&lt;p&gt;We will install &lt;strong&gt;Percona Server for MySQL&lt;/strong&gt; in a Docker container and perform basic operations for inserting, modifying, and removing JSON data.&lt;/p&gt;
&lt;p&gt;To start, we will pull version 8.0 of Percona Server for MySQL; the name of this image on Docker Hub is percona/percona-server. You will need Docker; if you don’t have it installed, follow the official &lt;a href="https://docs.docker.com/engine/install/" target="_blank" rel="noopener noreferrer"&gt;Docker documentation&lt;/a&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker pull percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We will run the &lt;strong&gt;Percona Server for MySQL&lt;/strong&gt; container, name it percona-server, and pass in an environment variable called &lt;strong&gt;MYSQL_ROOT_PASSWORD&lt;/strong&gt;; this variable specifies the password that is set for the MySQL root account.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run -d &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --name percona-server &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; -e &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After confirming that our container is running with “docker ps”, we can enter our Percona Server for MySQL container to start executing commands.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it percona-server /bin/bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The Percona Server for MySQL database is already running, and we will proceed to connect to it:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -uroot -p&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Use &lt;strong&gt;root&lt;/strong&gt; as a password.&lt;/p&gt;
&lt;p&gt;Create the database called &lt;strong&gt;library&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE DATABASE library&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;USE library&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create a table called &lt;strong&gt;books&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE books &lt;span class="o"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; book_id BIGINT PRIMARY KEY AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; title VARCHAR&lt;span class="o"&gt;(&lt;/span&gt;100&lt;span class="o"&gt;)&lt;/span&gt; UNIQUE NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; publisher VARCHAR&lt;span class="o"&gt;(&lt;/span&gt;100&lt;span class="o"&gt;)&lt;/span&gt; NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; labels JSON NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;ENGINE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; InnoDB&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="insert-json-type-into-books-table"&gt;Insert JSON type into books table&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO books&lt;span class="o"&gt;(&lt;/span&gt;title,publisher, labels&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;VALUES&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Green House'&lt;/span&gt;, &lt;span class="s1"&gt;'Joe Monter'&lt;/span&gt;, &lt;span class="s1"&gt;'{"about" : {"gender": "action", "cool": true, "notes": "labeled"}}'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO books&lt;span class="o"&gt;(&lt;/span&gt;title,publisher, labels&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;VALUES&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'El camino'&lt;/span&gt;, &lt;span class="s1"&gt;'Daniil Zotl'&lt;/span&gt;, &lt;span class="s1"&gt;'{"about" : {"gender": "documental", "cool": true, "notes": "labeled"}}'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;select&lt;/span&gt; * from books&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you can see, JSON is a more flexible data type than what you might be used to when working with data in &lt;strong&gt;MySQL&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="select-with-json_extract"&gt;Select with JSON_EXTRACT&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT title, JSON_EXTRACT&lt;span class="o"&gt;(&lt;/span&gt;labels, &lt;span class="s1"&gt;'$.about.notes'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; AS Notes FROM books&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- A shortcut of JSON_EXTRACT is the -&gt; operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT title, labels-&gt;&lt;span class="s1"&gt;'$.about.notes'&lt;/span&gt; AS Notes FROM books&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The shorthand operator -&gt; provides the same functionality as JSON_EXTRACT.&lt;/p&gt;
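&lt;p&gt;One detail worth knowing: JSON_EXTRACT (and -&gt;) returns a JSON value, so extracted strings keep their double quotes in the result set; JSON_UNQUOTE (or, in MySQL, the -&gt;&gt; operator) strips them. A quick Python sketch of the same quoting behavior:&lt;/p&gt;

```python
import json

labels = {"about": {"gender": "action", "cool": True, "notes": "labeled"}}

# Like JSON_EXTRACT: the result is the JSON representation,
# double quotes included
extracted = json.dumps(labels["about"]["notes"])

# Like JSON_UNQUOTE: strips the quotes, yielding the bare string
unquoted = json.loads(extracted)
```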
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT titulo, etiquetas-&gt;&lt;span class="s1"&gt;'$.acerca.genero'&lt;/span&gt; AS Genero FROM books&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="updating-json-type-records"&gt;Updating JSON type records&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE books SET &lt;span class="nv"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; JSON_REPLACE&lt;span class="o"&gt;(&lt;/span&gt;labels, &lt;span class="s1"&gt;'$.about.gender'&lt;/span&gt;, &lt;span class="s1"&gt;'romance'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; WHERE &lt;span class="nv"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'the roses'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE books SET &lt;span class="nv"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; JSON_REPLACE&lt;span class="o"&gt;(&lt;/span&gt;labels, &lt;span class="s1"&gt;'$.about.notes'&lt;/span&gt;, &lt;span class="s1"&gt;'not labeled'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; WHERE &lt;span class="nv"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'the roses'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;select&lt;/span&gt; * from books&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="deleting-a-json-record"&gt;Deleting a JSON record&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DELETE FROM books WHERE &lt;span class="nv"&gt;book_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; AND JSON_EXTRACT&lt;span class="o"&gt;(&lt;/span&gt;labels, &lt;span class="s1"&gt;'$.about.gender'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"documental"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="deleting-a-value-inside-a-json-structure"&gt;Deleting a value inside a JSON structure&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE books SET &lt;span class="nv"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; JSON_REMOVE&lt;span class="o"&gt;(&lt;/span&gt;labels, &lt;span class="s1"&gt;'$.about.notes'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; WHERE &lt;span class="nv"&gt;book_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 2&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can use these fundamental operations to manage JSON data types in Percona Server for MySQL. They allow more flexible and efficient data modeling and querying for applications that work with JSON data. How will that work in an application? Keep an eye out: I’ll follow this up very soon with a blog post about an application using JSON data in MySQL.&lt;/p&gt;
&lt;p&gt;Learn more about Percona Server for MySQL in our &lt;a href="https://www.percona.com/software/mysql-database/percona-server" target="_blank" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;. And if you want to know why JSON is the preferred format for many developers and why it’s so popular, check out &lt;a href="https://www.percona.com/blog/json-and-relational-databases-part-one" target="_blank" rel="noopener noreferrer"&gt;David Stokes’ blog: JSON and Relational Databases – Part One&lt;/a&gt;.&lt;/p&gt;
      <author>Edith Puclla</author>
      <category>JSON</category>
      <category>MySQL</category>
      <category>Databases</category>
      <category>Open Source</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/13-cover-change_hu_828a39cdc2af9b2a.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/13-cover-change_hu_ad37fbc7e77a97a0.jpg" medium="image"/>
    </item>
    <item>
      <title>Backups for MySQL With mysqldump</title>
      <link>https://percona.community/blog/2023/03/10/backups-for-mysql-with-mysqldump/</link>
      <guid>https://percona.community/blog/2023/03/10/backups-for-mysql-with-mysqldump/</guid>
      <pubDate>Fri, 10 Mar 2023 00:00:00 UTC</pubDate>
      <description>Basic Usage mysqldump is a client utility that can be used for doing logical backups. It will generate the necessary SQL statements to reproduce the original database.</description>
      <content:encoded>&lt;h2 id="basic-usage"&gt;Basic Usage&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html" target="_blank" rel="noopener noreferrer"&gt;mysqldump&lt;/a&gt; is a client utility that can be used for doing logical backups. It will generate the necessary SQL statements to reproduce the original database.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/03/backup.jpg" alt="Backup" /&gt;&lt;figcaption&gt;Backup by Nick Youngson CC BY-SA 3.0 Pix4free&lt;/figcaption&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The following statements are some common uses of mysqldump:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;mysqldump -u username -p database_name [table_name] &gt; dump.sql&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;mysqldump -u username -p --databases db1_name db2_name &gt; dump.sql&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;mysqldump -u username -p --all-databases &gt; dump.sql&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The first example is for backing up a single database. If you need to back up some specific tables instead of the whole database, write their names, space-separated.&lt;/p&gt;
&lt;p&gt;With the &lt;code&gt;--databases&lt;/code&gt; option, you can back up two or more databases; their names must be space-separated.&lt;/p&gt;
&lt;p&gt;To back up all the databases in your MySQL server, just append the &lt;code&gt;--all-databases&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;dump.sql&lt;/code&gt; file doesn’t contain the &lt;code&gt;CREATE DATABASE&lt;/code&gt; SQL statement. If you need it, add it with the &lt;code&gt;-B&lt;/code&gt; option. This is unnecessary if you’re running &lt;code&gt;mysqldump&lt;/code&gt; with the &lt;code&gt;--databases&lt;/code&gt; or &lt;code&gt;--all-databases&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;Ignoring tables when backing up a database is also possible with the &lt;code&gt;--ignore-table&lt;/code&gt; option.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u username -p database_name --ignore-tables=database_name.table1 &gt; database_name.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you need to ignore more than one table, just use the option as many times as needed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u root -p database_name --ignore-table=database_name.table1 --ignore-table=database_name.table2 &gt; database_name.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="schema-backup"&gt;Schema Backup&lt;/h2&gt;
&lt;p&gt;In case you need to back up only the schema of your database, with no data, run mysqldump with the &lt;code&gt;--no-data&lt;/code&gt; option:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u username -p database_name --no-data &gt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can also back up the schema while running &lt;code&gt;mysqldump&lt;/code&gt; with the &lt;code&gt;--databases&lt;/code&gt; or &lt;code&gt;--all-databases&lt;/code&gt; option.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u username -p --all-databases --no-data &gt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u username -p --databases db1_name db2_name --no-data &gt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="data-restore"&gt;Data Restore&lt;/h2&gt;
&lt;p&gt;To restore the databases in your &lt;code&gt;dump.sql&lt;/code&gt; file, run the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u root -p &lt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you need to restore a single database from the complete backup, you can do it by running any of the following statements:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u root -p -o database_name &lt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u root -p --one-database database_name &lt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In both cases, the database must already exist in your MySQL server, since the dump only restores the schema and the data.&lt;/p&gt;
&lt;h2 id="conditional-backup"&gt;Conditional Backup&lt;/h2&gt;
&lt;p&gt;If you need to create a backup that contains data that matches a condition, you can use a &lt;code&gt;WHERE&lt;/code&gt; clause with mysqldump.&lt;/p&gt;
&lt;p&gt;You can use a single where condition:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump database_name table_name --where="id &gt; 500" &gt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or multiple conditions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump database_name users --where="id &gt; 500 and disabled = 0" &gt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As explained &lt;a href="https://mysqldump.guru/how-to-use-a-where-clause-with-mysqldump.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt; in the &lt;a href="https://mysqldump.guru/" target="_blank" rel="noopener noreferrer"&gt;mysqldump.guru&lt;/a&gt; website.&lt;/p&gt;
&lt;p&gt;For example, in a database with the following schema, built from the &lt;a href="https://movienet.github.io/" target="_blank" rel="noopener noreferrer"&gt;Movienet&lt;/a&gt; dataset:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/03/movienet_model.png" alt="Movienet Database" /&gt;&lt;figcaption&gt;Movienet Database&lt;/figcaption&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;If you want to back up the movies produced in a specific country, like Mexico, a way to do it is by running mysqldump with a &lt;code&gt;WHERE&lt;/code&gt; clause.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqldump -u root -p movienet movies --where=”country = 22” &gt; dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;22&lt;/code&gt; is the &lt;code&gt;country_id&lt;/code&gt; of Mexico in this particular database, created using &lt;a href="https://github.com/mattdark/json-mysql-importer" target="_blank" rel="noopener noreferrer"&gt;this Python script&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can also get those values by executing the following SQL statement:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;select movies.movie_id, movies.title, countries.name as country from movies inner join countries on movies.country = countrie
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;s.country_id and movies.country = '22';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+-----------------------------------------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| movie_id | title | country |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+-----------------------------------------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0047501 | Sitting Bull (1954) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0049046 | Canasta de cuentos mexicanos (1956) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0076336 | Hell Without Limits (1978) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0082048 | El barrendero (1982) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0082080 | Blanca Nieves y sus 7 amantes (1980) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0083057 | El sexo de los pobres (1983) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0110185 | El jardín del Edén (1994) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0116043 | De jazmín en flor (1996) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0121322 | El giro, el pinto, y el Colorado (1979) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0133354 | Algunas nubes (1995) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0207055 | La risa en vacaciones 4 (TV Movie 1994) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0208889 | To and Fro (2000) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0211878 | La usurpadora (TV Series 1998– ) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0220306 | El amarrador 3 (1995) | Mexico |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| tt0229008 | El vampiro teporocho (1989) | Mexico |&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="skipping-databases"&gt;Skipping Databases&lt;/h2&gt;
&lt;p&gt;There’s no option for &lt;code&gt;mysqldump&lt;/code&gt; to skip databases when generating the backup, but here’s a solution that could work for you:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASES_TO_EXCLUDE="db1 db2 db3"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EXCLUSION_LIST="'information_schema','mysql'"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for DB in `echo "${DATABASES_TO_EXCLUDE}"`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;do
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; EXCLUSION_LIST="${EXCLUSION_LIST},'${DB}'"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;done
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SQLSTMT="SELECT schema_name FROM information_schema.schemata"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SQLSTMT="${SQLSTMT} WHERE schema_name NOT IN (${EXCLUSION_LIST})"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MYSQLDUMP_DATABASES="--databases"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for DB in `mysql -u username -p -ANe"${SQLSTMT}"`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;do
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MYSQLDUMP_DATABASES="${MYSQLDUMP_DATABASES} ${DB}"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;done
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MYSQLDUMP_OPTIONS="--routines --triggers"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysqldump -u username -p ${MYSQLDUMP_OPTIONS} ${MYSQLDUMP_DATABASES} &gt; MySQLDatabases.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The above Bash script will generate a backup of your MySQL server, excluding the &lt;code&gt;information_schema&lt;/code&gt; and &lt;code&gt;mysql&lt;/code&gt; databases listed in the &lt;code&gt;EXCLUSION_LIST&lt;/code&gt; variable, as well as the databases of your choice in the &lt;code&gt;DATABASES_TO_EXCLUDE&lt;/code&gt; variable.&lt;/p&gt;
&lt;p&gt;Don’t forget to add the databases you want to exclude to the &lt;code&gt;DATABASES_TO_EXCLUDE&lt;/code&gt; variable, replace &lt;code&gt;username&lt;/code&gt; in both the &lt;code&gt;mysql&lt;/code&gt; and &lt;code&gt;mysqldump&lt;/code&gt; commands, and add the required options to the &lt;code&gt;MYSQLDUMP_OPTIONS&lt;/code&gt; variable.&lt;/p&gt;
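&lt;p&gt;If you want to sanity-check how the script assembles its exclusion list, the first loop is pure string handling and can be run on its own, without a MySQL server. A minimal sketch, using the same variable names as the script above:&lt;/p&gt;

```shell
# Build the quoted, comma-separated exclusion list the same way the backup
# script does. No MySQL connection is needed for this part.
DATABASES_TO_EXCLUDE="db1 db2 db3"
EXCLUSION_LIST="'information_schema','mysql'"
for DB in $DATABASES_TO_EXCLUDE
do
  EXCLUSION_LIST="${EXCLUSION_LIST},'${DB}'"
done
echo "$EXCLUSION_LIST"
# prints: 'information_schema','mysql','db1','db2','db3'
```

&lt;p&gt;The resulting string is what gets embedded in the &lt;code&gt;NOT IN (...)&lt;/code&gt; clause of the query against &lt;code&gt;information_schema.schemata&lt;/code&gt;, so every listed database is dropped from the dump.&lt;/p&gt;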
&lt;h2 id="security-considerations"&gt;Security Considerations&lt;/h2&gt;
&lt;p&gt;Some of the common questions in &lt;a href="https://forums.percona.com" target="_blank" rel="noopener noreferrer"&gt;our forum&lt;/a&gt; are about how to do a partial restoration from a complete backup. For example, when you back up a database with &lt;code&gt;mysqldump&lt;/code&gt;, you will get the statements for creating the schema of the database and inserting the data from your backup.&lt;/p&gt;
&lt;p&gt;If you only need the schema, you can run mysqldump with the &lt;code&gt;--no-data&lt;/code&gt; option. But if you need to restore the schema of a specific database from a complete backup, I found an interesting solution:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat dump.sql | grep -v ^INSERT | mysql -u username -p&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The above command will restore the schema of your database, skipping the SQL statements that insert the data. It works well when you back up a single database, although in that case there’s little reason to use it, since you can get the schema directly with the &lt;code&gt;--no-data&lt;/code&gt; option instead of removing the inserts.&lt;/p&gt;
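&lt;p&gt;You can see what the filter keeps by running it against a tiny, hand-written file (sample statements only, not real mysqldump output):&lt;/p&gt;

```shell
# Create a small synthetic "dump" containing schema and data statements.
printf '%s\n' \
  'CREATE DATABASE demo;' \
  'CREATE TABLE t (id INT);' \
  'INSERT INTO t VALUES (1);' \
  'INSERT INTO t VALUES (2);' > /tmp/dump_demo.sql

# Keep everything except the INSERT statements, i.e. the schema only.
grep -v '^INSERT' /tmp/dump_demo.sql
```

&lt;p&gt;Only the two &lt;code&gt;CREATE&lt;/code&gt; statements survive the filter; piping that output into &lt;code&gt;mysql&lt;/code&gt; would recreate the schema without the data.&lt;/p&gt;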
&lt;p&gt;What happens if you try to run this command with a backup that includes all the databases on your server? Be careful: it will try to overwrite the system schema in the &lt;code&gt;mysql&lt;/code&gt; database, which is dangerous. That database stores authentication details, and overwriting its data can lock you out of your server.&lt;/p&gt;
&lt;p&gt;If you don’t need to backup the &lt;code&gt;mysql&lt;/code&gt; database, run &lt;code&gt;mysqldump&lt;/code&gt; with the &lt;code&gt;--databases&lt;/code&gt; option to specify which databases you require or use the script shared in the &lt;a href="#skipping-databases"&gt;Skipping Databases&lt;/a&gt; section.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, you learned how to use mysqldump to back up the databases on your MySQL server, along with some recommendations for using the tool. For more advanced usage of mysqldump, check &lt;a href="https://www.percona.com/blog/the-mysqlpump-utility/" target="_blank" rel="noopener noreferrer"&gt;this article&lt;/a&gt; on our blog.&lt;/p&gt;</content:encoded>
      <author>Mario García</author>
      <category>MySQL</category>
      <category>Backup</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/backup_hu_aac33fbd4cc33f69.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/backup_hu_8f2cb2ce79c4a147.jpg" medium="image"/>
    </item>
    <item>
      <title>Monitor your databases with Open Source tools like PMM</title>
      <link>https://percona.community/blog/2023/03/06/monitor-your-databases-with-open-source-tools-like-pmm/</link>
      <guid>https://percona.community/blog/2023/03/06/monitor-your-databases-with-open-source-tools-like-pmm/</guid>
      <pubDate>Mon, 06 Mar 2023 00:00:00 UTC</pubDate>
      <description>In this post, we will cover the value of database monitoring and how we can use open source tools like PMM (Percona Monitoring and Management) to monitor and manage databases effectively.</description>
      <content:encoded>&lt;p&gt;In this post, we will cover the value of database monitoring and how we can use open source tools like &lt;strong&gt;PMM&lt;/strong&gt; (&lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management&lt;/a&gt;) to monitor and manage databases effectively.&lt;/p&gt;
&lt;h2 id="why-should-i-care-about-database-monitoring"&gt;Why should I care about database monitoring?&lt;/h2&gt;
&lt;p&gt;Once your databases are installed, configured, and well underway, you have to start monitoring them: not only the databases themselves, but also the elements related to them.&lt;/p&gt;
&lt;p&gt;Questions like these will begin to arise:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Is my database performing well?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Are query response times consistently slow?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Is my database available and accepting connections?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Are connections to the database close to the maximum limit?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Is my system stable?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How about CPU, memory, and disk?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Am I experiencing avoidable downtime?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hardware failures, network outages.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Am I experiencing data loss?
&lt;ul&gt;
&lt;li&gt;Disk crashes&lt;/li&gt;
&lt;li&gt;Human errors&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Am I minimizing performance issues that can impact my business?&lt;/li&gt;
&lt;li&gt;Can I quickly identify and resolve issues before they become more significant problems?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To answer these questions, you need tools that keep your databases monitored, and there are free monitoring tools you can choose from. &lt;strong&gt;PMM&lt;/strong&gt; is one of them, and it is entirely open source.&lt;/p&gt;
&lt;h2 id="percona-monitoring-and-management-pmm"&gt;Percona Monitoring and Management (PMM)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; is an open source database observability, monitoring, and management tool for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;MongoDB&lt;/li&gt;
&lt;li&gt;And others&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It can also help you improve the performance of your databases, simplify their management, and strengthen their security.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; is built on top of open source software:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Grafana&lt;/li&gt;
&lt;li&gt;VictoriaMetrics/Prometheus&lt;/li&gt;
&lt;li&gt;ClickHouse&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;/ul&gt;
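To get hands-on, here is a hedged sketch of bringing up a PMM Server with Docker, following the general shape of the Percona quickstart; the `:2` image tag, port mapping, and volume name are assumptions, so adjust them to your environment:

```sh
# Sketch only: pin the PMM version you actually need instead of ":2".
docker pull percona/pmm-server:2
docker volume create pmm-data              # persistent storage for metrics
docker run -d --restart always \
  -p 443:443 \
  -v pmm-data:/srv \
  --name pmm-server \
  percona/pmm-server:2
# The web UI should then be reachable at https://<host>/ (default admin/admin).
```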
&lt;h2 id="pmm-interface"&gt;PMM Interface&lt;/h2&gt;
&lt;p&gt;There are three levels of depth:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dashboards&lt;/li&gt;
&lt;li&gt;Graphs&lt;/li&gt;
&lt;li&gt;Metrics&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/03/01-interface.jpg" alt="Interface" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="metrics--database-monitoring"&gt;Metrics &amp; Database Monitoring&lt;/h2&gt;
&lt;p&gt;Important database metrics you should monitor:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It will depend on your specific database and use case&lt;/li&gt;
&lt;li&gt;Monitor the metrics that are relevant to your database and your business&lt;/li&gt;
&lt;li&gt;You should have alerts and monitoring processes in place so that you are aware of problems as they occur, or ideally before&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some important metrics that could indicate potential database issues:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Query performance&lt;/li&gt;
&lt;li&gt;High CPU utilization&lt;/li&gt;
&lt;li&gt;High Memory usage&lt;/li&gt;
&lt;li&gt;High Disk I/O&lt;/li&gt;
&lt;li&gt;User Connection&lt;/li&gt;
&lt;li&gt;Data growth&lt;/li&gt;
&lt;li&gt;Others&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s analyze each of them; together they also answer the questions raised at the beginning.&lt;/p&gt;
&lt;h3 id="long-query-response-times"&gt;Long Query Response Times&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; helps you monitor the performance of individual queries and identify slow-performing queries that need to be optimized.
We can use &lt;a href="https://docs.percona.com/percona-monitoring-and-management/get-started/query-analytics.html" target="_blank" rel="noopener noreferrer"&gt;Query Analytics in PMM&lt;/a&gt; to visualize all the queries running in our database; we can inspect each of them and see which one sends the most queries per second and how long it takes to execute. &lt;strong&gt;PMM&lt;/strong&gt; will also show you suggestions to fix or improve queries.&lt;/p&gt;
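For queries to appear in Query Analytics, the database first has to be registered with the PMM server. Here is a minimal, hypothetical sketch using the `pmm-admin` client; the server URL, credentials, and service name are placeholders, not values from this post:

```sh
# Placeholders throughout -- substitute your own server, user, and password.
pmm-admin config --server-insecure-tls \
  --server-url=https://admin:admin@pmm-server:443

# Register a MySQL instance; --query-source picks where query data comes
# from (slowlog or perfschema).
pmm-admin add mysql --username=pmm --password=secret \
  --query-source=slowlog mysql-prod 127.0.0.1:3306
```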
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/02-long-query-response_hu_62a852a5133eca47.jpg 480w, https://percona.community/blog/2023/03/02-long-query-response_hu_c591c54cbe30a2db.jpg 768w, https://percona.community/blog/2023/03/02-long-query-response_hu_3267cec7f34660ea.jpg 1400w"
src="https://percona.community/blog/2023/03/02-long-query-response.jpg" alt="Long query Response" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="high-cpu-utilization"&gt;High CPU Utilization&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; helps you monitor the number of &lt;a href="https://docs.percona.com/percona-monitoring-and-management/details/dashboards/dashboard-cpu-utilization-details.html" target="_blank" rel="noopener noreferrer"&gt;CPU resources&lt;/a&gt; the database uses and identify performance bottlenecks.&lt;/p&gt;
&lt;p&gt;In the CPU utilization section, you will see how much of your CPU is being used over a period of time. This is very useful when deciding whether you need to increase your resources.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/03-high-cpu-utilization_hu_864290ce529632af.jpg 480w, https://percona.community/blog/2023/03/03-high-cpu-utilization_hu_db70c270db2c3c4b.jpg 768w, https://percona.community/blog/2023/03/03-high-cpu-utilization_hu_a3a8d6ea22fbef2e.jpg 1400w"
src="https://percona.community/blog/2023/03/03-high-cpu-utilization.jpg" alt="High Cpu Utilization" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="high-memory-usage"&gt;High Memory usage&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; helps you &lt;a href="https://docs.percona.com/percona-monitoring-and-management/details/dashboards/dashboard-memory-details.html" target="_blank" rel="noopener noreferrer"&gt;monitor the amount of memory&lt;/a&gt; being used by the database and determine if you need to add more memory or optimize your database configuration.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/04-high-memory-usage_hu_26bf5764a8cc862e.jpg 480w, https://percona.community/blog/2023/03/04-high-memory-usage_hu_44c022f77a424a53.jpg 768w, https://percona.community/blog/2023/03/04-high-memory-usage_hu_906f31d272252b92.jpg 1400w"
src="https://percona.community/blog/2023/03/04-high-memory-usage.jpg" alt="High Memory Usage" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="disk-io"&gt;Disk I/O&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; helps you monitor the number of &lt;a href="https://docs.percona.com/percona-monitoring-and-management/details/dashboards/dashboard-disk-details.html" target="_blank" rel="noopener noreferrer"&gt;disk I/O operations&lt;/a&gt; performed by the database and identify any potential performance bottlenecks. Below you can see the Disk I/O Latency panel.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/05-disk-io_hu_4a27991163e403f1.jpg 480w, https://percona.community/blog/2023/03/05-disk-io_hu_7be04ca93c7a3e22.jpg 768w, https://percona.community/blog/2023/03/05-disk-io_hu_6ed179d2eea2cc2a.jpg 1400w"
src="https://percona.community/blog/2023/03/05-disk-io.jpg" alt="Disk Io" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="user-connections"&gt;User connections&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; helps you monitor the number of &lt;a href="https://docs.percona.com/percona-monitoring-and-management/details/dashboards/dashboard-mysql-user-details.html" target="_blank" rel="noopener noreferrer"&gt;active database connections&lt;/a&gt; and determine whether your connection limit is sized appropriately. If you limit the number of users that can connect to your database, this panel will show you when you are approaching that limit so that you can raise it.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/06-user-conexion_hu_fbde4c457f6e6d56.jpg 480w, https://percona.community/blog/2023/03/06-user-conexion_hu_7077b7be7ebefd15.jpg 768w, https://percona.community/blog/2023/03/06-user-conexion_hu_2f45dabbef25cf92.jpg 1400w"
src="https://percona.community/blog/2023/03/06-user-conexion.jpg" alt="User Conexion" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="data-growth"&gt;Data growth&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PMM&lt;/strong&gt; helps you monitor &lt;a href="https://docs.percona.com/percona-monitoring-and-management/details/dashboards/dashboard-mysql-table-details.html" target="_blank" rel="noopener noreferrer"&gt;your database growth&lt;/a&gt; over time and plan for capacity and performance needs. This dashboard shows the periods in which your database is growing, so you can spot capacity or performance issues as they occur.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/03/07-data-grown_hu_5c5fc49e74281736.jpg 480w, https://percona.community/blog/2023/03/07-data-grown_hu_6e0ef3cc7be596ed.jpg 768w, https://percona.community/blog/2023/03/07-data-grown_hu_e8ddf2f3e2ca5739.jpg 1400w"
src="https://percona.community/blog/2023/03/07-data-grown.jpg" alt="Data Grown" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="summary"&gt;Summary&lt;/h3&gt;
&lt;p&gt;We have seen the importance of monitoring databases and how to use PMM to track some essential metrics, detect issues, and prevent them in time.&lt;/p&gt;
&lt;p&gt;Want to try PMM? We have a &lt;a href="https://pmmdemo.percona.com/graph/" target="_blank" rel="noopener noreferrer"&gt;test environment to try PMM&lt;/a&gt; without having to install it first. Feel free to play with it and see how PMM works. If you like it, you can &lt;a href="https://www.percona.com/software/pmm/quickstart" target="_blank" rel="noopener noreferrer"&gt;install PMM quickly and start using it in your own environment&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>Monitor</category>
      <category>PMM</category>
      <category>Databases</category>
      <category>Open Source</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/00-moni-cover_hu_bac40403f6d5c7ee.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/00-moni-cover_hu_723ac9c5f153c5a1.jpg" medium="image"/>
    </item>
    <item>
      <title>How to test code blocks in documentation</title>
      <link>https://percona.community/blog/2023/02/28/doc-testing/</link>
      <guid>https://percona.community/blog/2023/02/28/doc-testing/</guid>
      <pubDate>Tue, 28 Feb 2023 00:00:00 UTC</pubDate>
      <description>Like any developer, I don’t like to write documentation. But if I am writing it, I would like to test that what I wrote works. I often found myself copy-pasting something from documentation (commands, files, etc.) and trying to run it in the terminal, and it didn’t work.</description>
      <content:encoded>&lt;p&gt;Like any developer, I don’t like to write documentation. But if I am writing it, I would like to test that what I wrote works.
I often found myself copy-pasting something from documentation (commands, files, etc.) and trying to run it in the terminal, and it didn’t work.&lt;/p&gt;
&lt;p&gt;There are usually environment differences, typos, or even wrong commands in the doc (that people copy-pasted from the wrong place).&lt;/p&gt;
&lt;p&gt;I know that issue, and after writing documentation, I usually try to clean up everything in my environment and test the doc: I read it and execute the commands as written. Sometimes I find something that needs to be fixed. So I needed a way to test it quickly.&lt;/p&gt;
&lt;p&gt;For example, the recent &lt;a href="https://github.com/percona/pmm-doc/blob/main/docs/setting-up/server/podman.md" target="_blank" rel="noopener noreferrer"&gt;Podman&lt;/a&gt; doc has both code and files:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; You can override the environment variables by defining them in the file `~/.config/pmm-server/env`. For example, to override the path to a custom registry `~/.config/pmm-server/env`:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ```sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mkdir -p ~/.config/pmm-server/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; cat &lt;&lt; "EOF" &gt; ~/.config/pmm-server/env
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PMM_TAG=2.35.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PMM_IMAGE=docker.io/percona/pmm-server
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PMM_PUBLIC_PORT=8443
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ```
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; !!! caution alert alert-warning "Important"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Ensure that you modify PMM_TAG in `~/.config/pmm-server/env` and update it regularly as Percona cannot update it. It needs to be done by you.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1. Enable and Start.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ```sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; systemctl --user enable --now pmm-server
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ```&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Documentation ages, and new images may no longer work with it.
Another issue is making changes to existing documentation: how do you know that new additions or fixes introduce no regressions?&lt;/p&gt;
&lt;p&gt;Usually, to mitigate those issues, developers and/or tech writers are [re]checking everything manually.&lt;/p&gt;
&lt;p&gt;There are many different automatic approaches to mitigate that issue. The ultimate solution for this is probably &lt;a href="https://orgmode.org/" target="_blank" rel="noopener noreferrer"&gt;GNU Emacs Org Mode&lt;/a&gt;. I dream that one day I will learn Emacs and Org Mode.&lt;/p&gt;
&lt;p&gt;But I need a solution now, and the team process is to keep documentation in Markdown format and track everything in GitHub: &lt;a href="https://github.com/percona/pmm-doc" target="_blank" rel="noopener noreferrer"&gt;PMM Documentation&lt;/a&gt;. &lt;a href="#github"&gt;GitHub&lt;/a&gt; supports quite good code formatting, and it is easy to develop and review Markdown there.&lt;/p&gt;
&lt;p&gt;There are probably &lt;a href="#frameworks"&gt;frameworks&lt;/a&gt; that could help with this, but I needed something quick and didn’t want to introduce yet another testing framework to the mix.&lt;/p&gt;
&lt;p&gt;So I was looking for something that would let me quickly cut code snippets from the documentation and run them in a GitHub Action. While searching, I found this blog post: &lt;a href="https://tomlankhorst.nl/testing-code-in-markdown-doc-md-github" target="_blank" rel="noopener noreferrer"&gt;Test codeblocks in markdown documents&lt;/a&gt;, which was exactly what I needed :)&lt;/p&gt;
&lt;p&gt;The only problem, which I found out quickly, is that I needed something more. Pandoc builds the AST tree just fine:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;json&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-json" data-lang="json"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"CodeBlock"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"sh"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;],&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;],&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"mkdir -p ~/.config/pmm-server/\ncat &lt;&lt; \"EOF\" &gt; ~/.config/pmm-server/env\nPMM_TAG=2.35.0\nPMM_IMAGE=docker.io/percona/pmm-server\nPMM_PUBLIC_PORT=8443\nEOF"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"CodeBlock"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"sh"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;],&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;],&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"systemctl --user enable --now pmm-server"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"CodeBlock"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nt"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"sh"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;],&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;],&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"#first pull can take time\nsleep 80\ntimeout 60 podman wait --condition=running pmm-server"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you can see, the 3rd &lt;code&gt;CodeBlock&lt;/code&gt; is not on the same level. So when using the &lt;code&gt;jq&lt;/code&gt; approach (from the blog post):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pandoc -i podman.md -t json | jq -r -c '.blocks[] | select(.t | contains("CodeBlock"))? | .c'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[["",[],[]],"```sh\npodman exec -it pmm-server \\\ncurl -ku admin:admin https://localhost/v1/version\n```"]p'&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It returns only the blocks at the level the &lt;code&gt;jq&lt;/code&gt; program specifies. My first reaction was to try to extend that filter, but code blocks can appear at so many nesting levels, and there was so much I needed to learn and do to create all the necessary filters, that I abandoned the idea.&lt;/p&gt;
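In hindsight, one possible workaround (my own aside, not from the original post) is `jq`'s recursive descent operator `..`, which visits every node regardless of nesting depth, so nested `CodeBlock`s are found too. In the pandoc AST a `CodeBlock`'s `c` field is `[attributes, text]`, so `.c[1]` is the code text:

```shell
# Self-contained demo with a hand-made AST fragment (a CodeBlock nested
# one array level deep, as inside a list item):
ast='{"blocks":[[{"t":"CodeBlock","c":[["",["sh"],[]],"echo hi"]}]]}'
echo "$ast" | jq -r '.. | objects | select(.t? == "CodeBlock") | .c[1]'
# prints: echo hi

# Against a real document the same filter would be:
#   pandoc -i podman.md -t json \
#     | jq -r '.. | objects | select(.t? == "CodeBlock") | .c[1]'
```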
&lt;p&gt;Still, &lt;a href="#pandoc"&gt;Pandoc&lt;/a&gt; is a very powerful tool, so I started digging to find out whether any built-in filters could help me extract only &lt;code&gt;CodeBlocks&lt;/code&gt;. And apparently, there are &lt;a href="https://pandoc.org/lua-filters.html" target="_blank" rel="noopener noreferrer"&gt;Pandoc Lua Filters&lt;/a&gt;. After some experiments, I came up with the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;lua&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-lua" data-lang="lua"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;traverse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'topdown'&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kr"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;CodeBlock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;block&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kr"&gt;if&lt;/span&gt; &lt;span class="n"&gt;block.classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"sh"&lt;/span&gt; &lt;span class="kr"&gt;then&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"#-----CodeBlock-----"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;io.stdout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;block.text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kr"&gt;end&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kr"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kr"&gt;end&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This gives me a sequence of blocks that are marked as &lt;code&gt;sh&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pandoc -i podman.md --lua-filter ../../../_resources/bin/CodeBlock.lua -t html -o /dev/null
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;#-----CodeBlock-----&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir -p ~/.config/pmm-server/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat &lt;&lt; &lt;span class="s2"&gt;"EOF"&lt;/span&gt; &gt; ~/.config/pmm-server/env
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PMM_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2.31.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PMM_IMAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker.io/percona/pmm-server
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;PMM_PUBLIC_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;8443&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;#-----CodeBlock-----&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;systemctl --user &lt;span class="nb"&gt;enable&lt;/span&gt; --now pmm-server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So that is easy to wrap up in a shell script and execute, locally or in a &lt;a href="https://github.com/percona/pmm-doc/blob/main/.github/workflows/podman-tests.yml#L35" target="_blank" rel="noopener noreferrer"&gt;GitHub Action&lt;/a&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: Copy test template
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; run: cp _resources/bin/doc_test_template.sh ./docs_test_podman.sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: Get CodeBlocks and push them to test template
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; run: pandoc -i docs/setting-up/server/podman.md --lua-filter _resources/bin/CodeBlock.lua -t html -o /dev/null &gt;&gt; docs_test_podman.sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - name: Run podman tests
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; run: ./docs_test_podman.sh&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
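The test template itself is not shown in the post, so as a hypothetical sketch it could be little more than a strict-mode shell header that the extracted `CodeBlocks` get appended to:

```shell
#!/bin/bash
# Hypothetical sketch; the real _resources/bin/doc_test_template.sh is
# not shown in this post.
# -e: abort on the first failing command; -u: treat undefined variables
# as errors; -x: echo each command as it runs; -o pipefail: a failure
# anywhere in a pipe fails the whole pipe.
set -euxo pipefail
# Extracted CodeBlocks are appended below this line by the workflow, so
# any failing documentation snippet fails the whole CI job.
```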
&lt;p&gt;Sometimes you will need to execute something (environment setup, infrastructure, cleanup) that should not be shown in the documentation. For example, waiting for the previous action to finish before running the next one:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;div hidden&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sleep &lt;span class="m"&gt;30&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;timeout &lt;span class="m"&gt;60&lt;/span&gt; podman &lt;span class="nb"&gt;wait&lt;/span&gt; --condition&lt;span class="o"&gt;=&lt;/span&gt;running pmm-server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;/div&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I use &lt;code&gt;html&lt;/code&gt; to hide &lt;code&gt;CodeBlocks&lt;/code&gt; from the rendered document.&lt;/p&gt;
&lt;p&gt;These simple conventions solve documentation testing for many, if not most, cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sh&lt;/code&gt; language identifier for the fenced code blocks for examples&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&lt;div hidden&gt;&lt;/code&gt; for code blocks that should not be in the rendered documentation&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linuxize.com/post/bash-heredoc/" target="_blank" rel="noopener noreferrer"&gt;Bash Heredoc&lt;/a&gt; for files&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This approach is easy to use locally to test documentation you have just written, and it integrates cleanly into a CI pipeline.&lt;/p&gt;
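&lt;p&gt;To make the pipeline concrete without pandoc, here is a minimal sketch of the same idea: extract only the &lt;code&gt;sh&lt;/code&gt; fenced blocks from a Markdown file into an executable test script. The file name &lt;code&gt;demo.md&lt;/code&gt; and the &lt;code&gt;awk&lt;/code&gt; one-liner are illustrative stand-ins for the Lua filter, not part of the original setup.&lt;/p&gt;

```shell
# Build a tiny markdown file with one sh block and one text block
# (each printf argument becomes one line of the file).
printf '%s\n' 'Install step:' '```sh' 'echo hello' '```' '```text' 'not extracted' '```' > demo.md
# Keep only the lines between a ```sh opener and its closing fence.
awk '/^```sh$/{f=1;next} /^```$/{f=0} f' demo.md > extracted.sh
sh extracted.sh
```

&lt;p&gt;Running it prints &lt;code&gt;hello&lt;/code&gt;: only the &lt;code&gt;sh&lt;/code&gt; block was executed, which is exactly the convention the Lua filter enforces.&lt;/p&gt;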
&lt;h2 id="links"&gt;Links&lt;/h2&gt;
&lt;p&gt;Here are some useful links. If you have more suggestions, please open an issue or PR at &lt;a href="https://github.com/percona/community/" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/community/&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="editors"&gt;Editors&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://orgmode.org/" target="_blank" rel="noopener noreferrer"&gt;https://orgmode.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://howardism.org/Technical/Emacs/literate-devops.html" target="_blank" rel="noopener noreferrer"&gt;http://howardism.org/Technical/Emacs/literate-devops.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="github"&gt;GitHub&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The library GitHub.com uses to detect blob languages: &lt;a href="https://github.com/github/linguist/blob/master/lib/linguist/languages.yml" target="_blank" rel="noopener noreferrer"&gt;https://github.com/github/linguist/blob/master/lib/linguist/languages.yml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks" target="_blank" rel="noopener noreferrer"&gt;https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="frameworks"&gt;Frameworks&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Widdershin/markdown-doctest" target="_blank" rel="noopener noreferrer"&gt;https://github.com/Widdershin/markdown-doctest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nschloe/pytest-codeblocks" target="_blank" rel="noopener noreferrer"&gt;https://github.com/nschloe/pytest-codeblocks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="pandoc"&gt;Pandoc&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tomlankhorst.nl/testing-code-in-markdown-doc-md-github" target="_blank" rel="noopener noreferrer"&gt;https://tomlankhorst.nl/testing-code-in-markdown-doc-md-github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pandoc.org/MANUAL.html" target="_blank" rel="noopener noreferrer"&gt;https://pandoc.org/MANUAL.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pandoc.org/filters.html" target="_blank" rel="noopener noreferrer"&gt;https://pandoc.org/filters.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pandoc.org/lua-filters.html" target="_blank" rel="noopener noreferrer"&gt;https://pandoc.org/lua-filters.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>Documentation</category>
      <category>Testing</category>
      <category>Pandoc</category>
      <media:thumbnail url="https://percona.community/blog/2023/02/doc-testing_hu_2b3938e6799f0c07.jpg"/>
      <media:content url="https://percona.community/blog/2023/02/doc-testing_hu_b34e00ba5fd77d55.jpg" medium="image"/>
    </item>
    <item>
      <title>Exploring Databases on Containers with Percona Server for MySQL</title>
      <link>https://percona.community/blog/2023/02/23/exploring-databases-on-containers-with-mysql/</link>
      <guid>https://percona.community/blog/2023/02/23/exploring-databases-on-containers-with-mysql/</guid>
      <pubDate>Thu, 23 Feb 2023 00:00:00 UTC</pubDate>
      <description>In this blog, we will explore databases on containers. We will use Docker as a container engine tool and Percona Server for MySQL as a database administration tool. Both are open source tools.</description>
      <content:encoded>&lt;p&gt;In this blog, we will explore databases on containers. We will use Docker as a container engine tool and &lt;a href="https://www.percona.com/software/mysql-database/percona-server" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt; as a database administration tool. Both are open source tools.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MySQL&lt;/strong&gt; is a relational database management system that stores data on disk. Percona Server for &lt;strong&gt;MySQL&lt;/strong&gt; is a fork of MySQL that provides additional advanced features. To run it correctly in containers, we need to understand volumes, because persisting data is the most important concern for a database.&lt;/p&gt;
&lt;h2 id="running-a-single-percona-server-for-mysql-container"&gt;Running a single Percona Server for MySQL container&lt;/h2&gt;
&lt;p&gt;First, let’s create a container without volumes:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/02/1-volume_hu_7c2971b9d8f8b858.png 480w, https://percona.community/blog/2023/02/1-volume_hu_c18f7cb63cd443b3.png 768w, https://percona.community/blog/2023/02/1-volume_hu_f342c14e99bb50bd.png 1400w"
src="https://percona.community/blog/2023/02/1-volume.png" alt="1-no-volume" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Figure 1: From Percona Server for MySQL image to a running container in Docker&lt;/p&gt;
&lt;p&gt;The following command will create a container called percona-server-1, where we can create databases and add some data.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run -d --name percona-server-1 -e &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Listing the image and the container:
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/02/2-ls_hu_de414575bf8202f4.png 480w, https://percona.community/blog/2023/02/2-ls_hu_75b0710387e86d0.png 768w, https://percona.community/blog/2023/02/2-ls_hu_bf400be65ca46ef9.png 1400w"
src="https://percona.community/blog/2023/02/2-ls.png" alt="2-ls" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After the container is created:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We have our base image, which is &lt;strong&gt;percona/percona-server:8.0&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The base image in Docker is read-only. We can’t modify it. It allows you to spin up multiple containers from the same image with the same immutable base.&lt;/li&gt;
&lt;li&gt;We can add data on top of the base image. This new layer is readable and writable.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s create a database and populate it.&lt;/p&gt;
&lt;p&gt;Accessing the detached container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it percona-server-1 /bin/bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Connecting to the database&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -uroot -proot&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create a Database “cinema” and use it&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE DATABASE cinema&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;USE cinema&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create table movies in Database “cinema”&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE movies &lt;span class="o"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;book_id BIGINT PRIMARY KEY AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;title VARCHAR&lt;span class="o"&gt;(&lt;/span&gt;100&lt;span class="o"&gt;)&lt;/span&gt; UNIQUE NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;publisher VARCHAR&lt;span class="o"&gt;(&lt;/span&gt;100&lt;span class="o"&gt;)&lt;/span&gt; NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;labels JSON NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;ENGINE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; InnoDB&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Insert data into Database “cinema”&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;INTO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;movies&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;Green&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;House&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;Joe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Monter&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;’{“&lt;/span&gt;&lt;span class="n"&gt;about&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{“&lt;/span&gt;&lt;span class="n"&gt;gender&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="n"&gt;cool&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="n"&gt;notes&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="n"&gt;labeled&lt;/span&gt;&lt;span class="err"&gt;”}}’&lt;/span&gt;&lt;span 
class="p"&gt;);&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Checking table&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;movies&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you delete this container, your databases and your data are deleted with it, because containers are ephemeral.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/02/3-image-no-volume.png" alt="3-image-no-volume" /&gt;&lt;/figure&gt;
Figure 2: View of the layers that are generated when we create the container. Source: Severalnines AB&lt;/p&gt;
&lt;h2 id="running-multiple-mysql-containers"&gt;Running Multiple MySQL Containers&lt;/h2&gt;
&lt;p&gt;Now let’s see how the layers of two different containers work together.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run -d --name percona-server-1 -e &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run -d --name percona-server-2 -e &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Multiple containers can share the same read-only base image. Each container gets its own readable and writable data state, built on top of the base image, but that state is lost unless we create persistent volumes that survive after the container shuts down.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/02/4-image-multiple-sql_hu_5b5689c245419b36.png 480w, https://percona.community/blog/2023/02/4-image-multiple-sql_hu_1c2329af4b746da4.png 768w, https://percona.community/blog/2023/02/4-image-multiple-sql_hu_254369876d9745e7.png 1400w"
src="https://percona.community/blog/2023/02/4-image-multiple-sql.png" alt="4-image-multiple-sql.png" /&gt;&lt;/figure&gt;
Figure 3: View the layers generated when we create two different containers. Source: Severalnines AB&lt;/p&gt;
&lt;p&gt;As we said before, “Volumes open the door for stateful applications to run efficiently in Docker.”&lt;/p&gt;
&lt;h2 id="running-containers-with-persistent-volumes"&gt;Running containers with Persistent Volumes&lt;/h2&gt;
&lt;p&gt;Now we will create a container with a persistent volume in Docker.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/02/5-no-volume_hu_ad7a1b2379229cf9.png 480w, https://percona.community/blog/2023/02/5-no-volume_hu_101c56ebf8822b3a.png 768w, https://percona.community/blog/2023/02/5-no-volume_hu_bb2f91ebe61513c1.png 1400w"
src="https://percona.community/blog/2023/02/5-no-volume.png" alt="5-image-volume" /&gt;&lt;/figure&gt;
Figure 4: From Percona Server for MySQL image to a running container in Docker with volumes&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;percona-server&lt;/strong&gt; is the base image. On top of it sit all the changes we make in the database. When we create the volume, we link a directory in the container to a directory on the local machine (or on whatever machine you want the data persisted).
After deleting the container, you can attach another container to this volume and see the same data from the new container.&lt;/p&gt;
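&lt;p&gt;As a sketch (the new container name is illustrative), reattaching the same named volume after deleting the original container looks like this:&lt;/p&gt;

```bash
# Remove the original container; the named volume "local-datadir" survives
docker rm -f percona-server
# Attach a new container to the same volume: it sees the same databases
docker run -d --name percona-server-new \
  -e MYSQL_ROOT_PASSWORD=root \
  -v local-datadir:/var/lib/mysql \
  percona/percona-server:8.0
```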
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run -d --name percona-server -e &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root -v local-datadir:/var/lib/mysql percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/02/6-image-volume_hu_3cdcfe2f40298fae.png 480w, https://percona.community/blog/2023/02/6-image-volume_hu_931ff76363437249.png 768w, https://percona.community/blog/2023/02/6-image-volume_hu_5505923f1bc291ac.png 1400w"
src="https://percona.community/blog/2023/02/6-image-volume.png" alt="6-image-volume" /&gt;&lt;/figure&gt;
Figure 5: View of the layers that are generated when we create the container with volume.&lt;/p&gt;
&lt;h2 id="backing-up-and-restroring-databases"&gt;Backing up and restroring databases&lt;/h2&gt;
&lt;p&gt;There are two kinds of database backups: logical and physical.
We can use mysqldump for logical backups and Percona XtraBackup for physical backups. Both tools can also restore what they back up, with Percona XtraBackup offering more advanced features.&lt;/p&gt;
&lt;h2 id="back-up"&gt;Back up&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it percona-server-backup mysqldump -uroot --password&lt;span class="o"&gt;=&lt;/span&gt;root --single-transaction &gt; /path/in/physical/host/dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="restore"&gt;Restore&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it percona-server-restore mysql -u root --password&lt;span class="o"&gt;=&lt;/span&gt;root &lt; /path/in/physical/host/dump.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now let’s share some tips to run databases on containers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Constantly monitor your database and host system&lt;/li&gt;
&lt;li&gt;Store data in a persistent volume outside the container&lt;/li&gt;
&lt;li&gt;Limit resource utilization, e.g., Memory, CPU&lt;/li&gt;
&lt;li&gt;Regularly back up the database and store the backup in a secure and separate location.&lt;/li&gt;
&lt;li&gt;Have a plan for database migrations and disaster recovery.&lt;/li&gt;
&lt;/ul&gt;
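&lt;p&gt;For the resource-limit tip, here is a sketch of what capping memory and CPU can look like with standard Docker flags (the values are illustrative, not recommendations):&lt;/p&gt;

```bash
# Cap the container at 2 GB of RAM (no extra swap) and 2 CPUs
docker run -d --name percona-server-capped \
  -e MYSQL_ROOT_PASSWORD=root \
  -v local-datadir:/var/lib/mysql \
  --memory=2g --memory-swap=2g --cpus=2 \
  percona/percona-server:8.0
```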
&lt;p&gt;We explored how databases work on containers. Volumes are essential for persisting your data.&lt;/p&gt;
&lt;p&gt;What is next? Watch this fantastic talk by Peter Zaitsev &lt;a href="https://www.youtube.com/watch?v=b_COgWA1lvk&amp;t=145s" target="_blank" rel="noopener noreferrer"&gt;Open Source Databases on Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Thanks for reading! You can install Percona Server for MySQL from our &lt;a href="https://hub.docker.com/r/percona/percona-server/tags?utm_source=percona-community&amp;utm_medium=blog&amp;utm_campaign=edith" target="_blank" rel="noopener noreferrer"&gt;Docker Repository&lt;/a&gt;, and if you have questions, write to us in the &lt;a href="https://forums.percona.com/?utm_source=percona-community&amp;utm_medium=blog&amp;utm_campaign=edith" target="_blank" rel="noopener noreferrer"&gt;Percona community forum&lt;/a&gt;.&lt;/p&gt;
      <author>Edith Puclla</author>
      <category>Docker</category>
      <category>MySQL</category>
      <category>Volume</category>
      <media:thumbnail url="https://percona.community/blog/2023/02/0-cover_hu_595fd28de0de994b.jpg"/>
      <media:content url="https://percona.community/blog/2023/02/0-cover_hu_ac452a760a367ffb.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.35 preview release</title>
      <link>https://percona.community/blog/2023/02/14/preview-release/</link>
      <guid>https://percona.community/blog/2023/02/14/preview-release/</guid>
      <pubDate>Tue, 14 Feb 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.35 preview release Hello folks! Percona Monitoring and Management (PMM) 2.35 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-235-preview-release"&gt;Percona Monitoring and Management 2.35 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.35 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;You can find the Release Notes &lt;a href="https://two-34-0-pr-977.onrender.com/release-notes/2.35.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker-installation"&gt;Percona Monitoring and Management server docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.35.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; In order to use the DBaaS functionality during the Percona Monitoring and Management preview release, you should add the following environment variable when starting the PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.35.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.35 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-4898.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
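&lt;p&gt;Put together, a sketch of the package installation (the package-manager command depends on your OS):&lt;/p&gt;

```bash
# Enable the testing repository, then install via your package manager
percona-release enable percona testing
yum install pmm2-client   # on RPM-based systems; use apt on Debian/Ubuntu
```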
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.35.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.35.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-09d19be2cfb10a60c&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us in &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;https://forums.percona.com/&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Node metrics available inside of a container</title>
      <link>https://percona.community/blog/2023/02/06/node-metrics-container/</link>
      <guid>https://percona.community/blog/2023/02/06/node-metrics-container/</guid>
      <pubDate>Mon, 06 Feb 2023 00:00:00 UTC</pubDate>
      <description>Several people asked me this question: Could we get Node metrics inside of a container?</description>
      <content:encoded>&lt;p&gt;Several people asked me this question: Could we get Node metrics inside of a container?&lt;/p&gt;
&lt;p&gt;Usually, this question comes from the fact that people run &lt;code&gt;node_exporter&lt;/code&gt; inside a container, either standalone or as part of PMM. PMM runs it as a sidecar along with many other exporters that monitor databases, and &lt;code&gt;node_exporter&lt;/code&gt; comes out of the box as a default one.
So people see data on dashboards, like Memory and CPU, that &lt;code&gt;node_exporter&lt;/code&gt; reads inside the container, and it looks accurate.&lt;/p&gt;
&lt;p&gt;My first reaction was that the data is inaccurate, and that if you need Node metrics, you need to run &lt;code&gt;node_exporter&lt;/code&gt; on the Node itself so it has proper access to the host system (VM or hardware).
By inaccurate, I mean that not all the data is there, and the data that is there is accurate only sometimes, in some environments.&lt;/p&gt;
&lt;p&gt;But when a fairly technical person asked me, I needed to respond with some technical details. They were seeing correct data about the host coming from a PMM client running in a Kubernetes container.&lt;/p&gt;
&lt;p&gt;One of the things I came up with that was true: you won’t see the host processes inside a container, and thus you won’t see who is consuming memory and CPU, or how.&lt;/p&gt;
&lt;p&gt;But I still needed to understand why the Memory, CPU, and other information was correct.&lt;/p&gt;
&lt;p&gt;So I performed a small investigation to refresh my memory and learn more about namespaces, cgroups, and containers.&lt;/p&gt;
&lt;h2 id="node_exporter"&gt;node_exporter&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/prometheus/node_exporter#docker" target="_blank" rel="noopener noreferrer"&gt;node_exporter documentation&lt;/a&gt; says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;node_exporter&lt;/code&gt; is designed to monitor the host system. It’s not recommended
to deploy it as a Docker container because it requires access to the host system.&lt;/p&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;For situations where Docker deployment is needed, some extra flags must be used to allow
the &lt;code&gt;node_exporter&lt;/code&gt; access to the host namespaces.&lt;/p&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Be aware that any non-root mount points you want to monitor will need to be bind-mounted
into the container.&lt;/p&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;If you start container for host monitoring, specify &lt;code&gt;path.rootfs&lt;/code&gt; argument.
This argument must match path in bind-mount of host root. The node_exporter will use
&lt;code&gt;path.rootfs&lt;/code&gt; as prefix to access host filesystem.&lt;/p&gt;&lt;/blockquote&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run -d &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --net&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --pid&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; -v &lt;span class="s2"&gt;"/:/host:ro,rslave"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; quay.io/prometheus/node-exporter:latest &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --path.rootfs&lt;span class="o"&gt;=&lt;/span&gt;/host&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;On some systems, the timex collector requires an additional Docker flag, –cap-add=SYS_TIME, in order to access the required syscalls.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;So right away, we can see that additional privileges are needed. More interestingly, not only is access to &lt;code&gt;/proc&lt;/code&gt; and &lt;code&gt;/sys&lt;/code&gt; required, but to the whole &lt;code&gt;/&lt;/code&gt;. Some additional capabilities are needed as well.&lt;/p&gt;
&lt;p&gt;If we look briefly at the &lt;code&gt;node_exporter&lt;/code&gt; code, we will indeed find different techniques it uses to gather data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;procfs&lt;/code&gt; data&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sysfs&lt;/code&gt; data&lt;/li&gt;
&lt;li&gt;D-Bus socket (systemd data)&lt;/li&gt;
&lt;li&gt;system calls (timex)&lt;/li&gt;
&lt;li&gt;and probably more (udev, device data, etc.)&lt;/li&gt;
&lt;/ul&gt;
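&lt;p&gt;The &lt;code&gt;procfs&lt;/code&gt; part of that list is easy to see for yourself. Here is a minimal sketch (plain shell, assuming a Linux system) of reading the same files that the &lt;code&gt;node_exporter&lt;/code&gt; memory and CPU collectors parse:&lt;/p&gt;

```shell
# Memory data comes from /proc/meminfo, one "Key: value kB" pair per line.
grep -E 'MemTotal|MemAvailable' /proc/meminfo

# CPU data comes from /proc/stat; the first line holds the aggregate
# per-state CPU tick counters since boot.
head -n 1 /proc/stat
```

&lt;p&gt;Run inside a container, these reads succeed just as they do on the host, which is exactly why the numbers can look plausible while describing the wrong scope.&lt;/p&gt;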
&lt;h2 id="container"&gt;Container&lt;/h2&gt;
&lt;p&gt;Containers and their ecosystem are quite a big topic that has been covered many times. Please check out “Demystifying Containers” by &lt;a href="https://www.suse.com/c/author/sgrunert/" target="_blank" rel="noopener noreferrer"&gt;Sascha Grunert&lt;/a&gt; and “Building containers by hand” by &lt;a href="https://www.redhat.com/sysadmin/users/steve-ovens" target="_blank" rel="noopener noreferrer"&gt;Steve Ovens&lt;/a&gt;. You can find them in the &lt;a href="#links"&gt;Links&lt;/a&gt; section.&lt;/p&gt;
&lt;p&gt;What is relevant to my investigation is isolation from the host, which is mostly provided by &lt;code&gt;namespaces&lt;/code&gt; and &lt;code&gt;cgroups&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Other systems that limit access to files and calls inside a container:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;capabilities&lt;/li&gt;
&lt;li&gt;seccomp&lt;/li&gt;
&lt;li&gt;selinux/apparmor&lt;/li&gt;
&lt;li&gt;additional security options&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="namespaces-and-cgroup"&gt;&lt;code&gt;namespaces&lt;/code&gt; and &lt;code&gt;cgroup&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;Let us focus only on &lt;code&gt;procfs&lt;/code&gt;, where a lot of the needed monitoring information comes from. I aim to understand why some data in &lt;code&gt;/proc&lt;/code&gt; corresponds to the host data and some does not.&lt;/p&gt;
&lt;p&gt;First, &lt;code&gt;/proc&lt;/code&gt; is a &lt;a href="https://www.kernel.org/doc/html/latest/filesystems/proc.html" target="_blank" rel="noopener noreferrer"&gt;special filesystem&lt;/a&gt; that acts as an interface to internal data structures in the kernel. It can be used to obtain information about the system and to change certain kernel parameters at runtime (sysctl).&lt;/p&gt;
&lt;p&gt;It is also quite an old interface that was created before &lt;code&gt;namespaces&lt;/code&gt; and &lt;code&gt;cgroup&lt;/code&gt; existed. Many different applications expect the data there, and thus it can’t easily be namespaced.&lt;/p&gt;
&lt;p&gt;Here is what was namespaced:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;net&lt;/li&gt;
&lt;li&gt;uts&lt;/li&gt;
&lt;li&gt;ipc&lt;/li&gt;
&lt;li&gt;pid&lt;/li&gt;
&lt;li&gt;user&lt;/li&gt;
&lt;li&gt;cgroups&lt;/li&gt;
&lt;li&gt;time&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Same in code:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/proc/namespaces.c#n15" target="_blank" rel="noopener noreferrer"&gt;https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/proc/namespaces.c#n15&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elixir.bootlin.com/linux/latest/source/include/linux/proc_ns.h#L27" target="_blank" rel="noopener noreferrer"&gt;https://elixir.bootlin.com/linux/latest/source/include/linux/proc_ns.h#L27&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
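&lt;p&gt;You can inspect the namespace membership of any process under &lt;code&gt;/proc/PID/ns&lt;/code&gt;; two processes share a namespace exactly when the corresponding symlinks resolve to the same inode. A quick check (assuming a Linux shell):&lt;/p&gt;

```shell
# Each symlink identifies one namespace as type:[inode].
ls -l /proc/self/ns

# Compare these values between a host shell and a container shell:
# different inode numbers mean different namespaces.
readlink /proc/self/ns/pid /proc/self/ns/net /proc/self/ns/uts
```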
&lt;p&gt;So a lot of the data under, for example, &lt;code&gt;/proc/net&lt;/code&gt; is container specific. The same is true for the other namespaced subsystems.&lt;/p&gt;
&lt;p&gt;But the biggest difference for monitoring when &lt;code&gt;namespaces&lt;/code&gt; and &lt;code&gt;cgroup&lt;/code&gt; are used is that a container has access only to its own &lt;code&gt;PID&lt;/code&gt; namespace. That means that even if we see all the available memory or CPU, we can’t tell which processes from the host system are consuming it. We can only see the processes in our own namespace.&lt;/p&gt;
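&lt;p&gt;This &lt;code&gt;PID&lt;/code&gt; isolation is easy to demonstrate: the numeric directories under &lt;code&gt;/proc&lt;/code&gt; are the visible processes, and inside a container there are typically only a handful of them (a sketch, assuming a Linux shell):&lt;/p&gt;

```shell
# Count the processes visible in the current PID namespace.
# On a host this is usually hundreds; in a container, often just a few.
ls -d /proc/[0-9]* | wc -l
```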
&lt;p&gt;It is tough to tell what exactly is namespaced under &lt;code&gt;/proc&lt;/code&gt;. It looks like all the files (not directories) directly under &lt;code&gt;/proc&lt;/code&gt; come straight from the host kernel.&lt;/p&gt;
&lt;p&gt;Thus we can see many files (they aren’t real files) from the host/root namespace and many that are specific to the container’s namespaces.&lt;/p&gt;
&lt;p&gt;For example, here you can see the uts (hostname) and network namespace differences between the root namespace and a container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;#container&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@0d514d31c0a3 opt&lt;span class="o"&gt;]&lt;/span&gt;$ cat /proc/net/dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Inter-&lt;span class="p"&gt;|&lt;/span&gt; Receive &lt;span class="p"&gt;|&lt;/span&gt; Transmit
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; face &lt;span class="p"&gt;|&lt;/span&gt;bytes packets errs drop fifo frame compressed multicast&lt;span class="p"&gt;|&lt;/span&gt;bytes packets errs drop fifo colls carrier compressed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; lo: &lt;span class="m"&gt;250045720&lt;/span&gt; &lt;span class="m"&gt;467744&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;250045720&lt;/span&gt; &lt;span class="m"&gt;467744&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tap0: &lt;span class="m"&gt;37944098&lt;/span&gt; &lt;span class="m"&gt;3043&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;173264&lt;/span&gt; &lt;span class="m"&gt;2426&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;#host&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;dkondratenko@denlen ~&lt;span class="o"&gt;]&lt;/span&gt;$ cat /proc/net/dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Inter-&lt;span class="p"&gt;|&lt;/span&gt; Receive &lt;span class="p"&gt;|&lt;/span&gt; Transmit
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; face &lt;span class="p"&gt;|&lt;/span&gt;bytes packets errs drop fifo frame compressed multicast&lt;span class="p"&gt;|&lt;/span&gt;bytes packets errs drop fifo colls carrier compressed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; lo: &lt;span class="m"&gt;1674394721&lt;/span&gt; &lt;span class="m"&gt;1671478&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;1674394721&lt;/span&gt; &lt;span class="m"&gt;1671478&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;enp2s0f0: &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; wwan0: &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wlp3s0: &lt;span class="m"&gt;3869584887&lt;/span&gt; &lt;span class="m"&gt;6162652&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;88037&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;1639861087&lt;/span&gt; &lt;span class="m"&gt;3732759&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cni-podman0: &lt;span class="m"&gt;3688&lt;/span&gt; &lt;span class="m"&gt;53&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;53&lt;/span&gt; &lt;span class="m"&gt;22094&lt;/span&gt; &lt;span class="m"&gt;163&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you can see, the network interfaces differ (&lt;code&gt;net&lt;/code&gt; namespace), as do the hostnames (&lt;code&gt;uts&lt;/code&gt; namespace: &lt;code&gt;0d514d31c0a3&lt;/code&gt; in the container and &lt;code&gt;denlen&lt;/code&gt; on the host).&lt;/p&gt;
&lt;h2 id="linux-capabilities-and-seccomp"&gt;Linux Capabilities and seccomp&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://man7.org/linux/man-pages/man7/capabilities.7.html" target="_blank" rel="noopener noreferrer"&gt;Linux Capabilities&lt;/a&gt; allow access for the unprivileged processes to perform some actions/call in the system.
In &lt;a href="#node_exporter"&gt;node_exporter&lt;/a&gt; section, we have seen an example of the &lt;code&gt;CAP_SYS_TIME&lt;/code&gt; that is needed to gather some of the data.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Seccomp" target="_blank" rel="noopener noreferrer"&gt;seccomp&lt;/a&gt; is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a “secure” state where it cannot make any system calls except exit(), sigreturn(), read(), and write() to already-open file descriptors.&lt;/p&gt;
&lt;p&gt;So both systems further restrict access to the data that might be needed to gather monitoring information.&lt;/p&gt;
&lt;p&gt;Docker and Podman have default seccomp filters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/moby/moby/blob/master/profiles/seccomp/default.json" target="_blank" rel="noopener noreferrer"&gt;https://github.com/moby/moby/blob/master/profiles/seccomp/default.json&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/containers/common/blob/main/pkg/seccomp/seccomp.json" target="_blank" rel="noopener noreferrer"&gt;https://github.com/containers/common/blob/main/pkg/seccomp/seccomp.json&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
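&lt;p&gt;The capabilities a process actually holds are also visible through &lt;code&gt;/proc&lt;/code&gt;, as bitmasks in its status file; comparing them between a host root shell and a container shell shows how much the runtime has dropped (a sketch, assuming a Linux shell):&lt;/p&gt;

```shell
# CapEff is the effective capability set; a full root shell shows
# a much wider bitmask than a default container process.
grep Cap /proc/self/status
```

&lt;p&gt;If the &lt;code&gt;capsh&lt;/code&gt; utility from libcap is installed, &lt;code&gt;capsh --decode&lt;/code&gt; turns such a bitmask into capability names like &lt;code&gt;cap_sys_time&lt;/code&gt;.&lt;/p&gt;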
&lt;h2 id="linux-security-modules"&gt;Linux Security Modules&lt;/h2&gt;
&lt;p&gt;Security-Enhanced Linux, &lt;a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux" target="_blank" rel="noopener noreferrer"&gt;SELinux&lt;/a&gt; is a Linux kernel security module that provides a mechanism for supporting access control security policies, including mandatory access controls (MAC).&lt;/p&gt;
&lt;p&gt;&lt;a href="AppArmor"&gt;AppArmor&lt;/a&gt; (“Application Armor”) is a Linux kernel security module that allows the system administrator to restrict programs’ capabilities with per-program profiles.&lt;/p&gt;
&lt;p&gt;Both of these can further restrict access inside the container. For example, here is part of an &lt;a href="https://docs.docker.com/engine/security/apparmor/#nginx-example-profile" target="_blank" rel="noopener noreferrer"&gt;AppArmor&lt;/a&gt; profile:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/* w, # deny write for all files directly in /proc (not in a subdir)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # deny write to files not in /proc/&lt;number&gt;/** or /proc/sys/**
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/sys/[^k]** w, # deny /proc/sys except /proc/sys/k* (effectively /proc/sys/kernel)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/sys/kernel/{?,??,[^s][^h][^m]**} w, # deny everything except shm* in /proc/sys/kernel/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/sysrq-trigger rwklx,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/mem rwklx,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/kmem rwklx,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; deny @{PROC}/kcore rwklx,&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So it is possible to restrict access even to those root-namespace &lt;code&gt;/proc&lt;/code&gt; files that provide memory and CPU information.&lt;/p&gt;
&lt;h2 id="additional-security-options"&gt;Additional security options&lt;/h2&gt;
&lt;p&gt;Container runtimes and tools could further harden security.&lt;/p&gt;
&lt;p&gt;One example is masking mount points:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.podman.io/en/latest/markdown/podman-run.1.html#security-opt-option" target="_blank" rel="noopener noreferrer"&gt;https://docs.podman.io/en/latest/markdown/podman-run.1.html#security-opt-option&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/containers/podman/blob/ab7f6095a17bd50477c30fc8c127a8604b5693a6/pkg/specgen/generate/config_linux.go#L91" target="_blank" rel="noopener noreferrer"&gt;https://github.com/containers/podman/blob/ab7f6095a17bd50477c30fc8c127a8604b5693a6/pkg/specgen/generate/config_linux.go#L91&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@0d514d31c0a3 opt&lt;span class="o"&gt;]&lt;/span&gt;$ mount &lt;span class="p"&gt;|&lt;/span&gt; grep proc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;rw,nosuid,nodev,noexec,relatime&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tmpfs on /proc/acpi &lt;span class="nb"&gt;type&lt;/span&gt; tmpfs &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime,context&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"system_u:object_r:container_file_t:s0:c11,c680"&lt;/span&gt;,size&lt;span class="o"&gt;=&lt;/span&gt;0k,uid&lt;span class="o"&gt;=&lt;/span&gt;1000,gid&lt;span class="o"&gt;=&lt;/span&gt;1000,inode64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;devtmpfs on /proc/kcore &lt;span class="nb"&gt;type&lt;/span&gt; devtmpfs &lt;span class="o"&gt;(&lt;/span&gt;rw,nosuid,seclabel,size&lt;span class="o"&gt;=&lt;/span&gt;4096k,nr_inodes&lt;span class="o"&gt;=&lt;/span&gt;1048576,mode&lt;span class="o"&gt;=&lt;/span&gt;755,inode64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;devtmpfs on /proc/keys &lt;span class="nb"&gt;type&lt;/span&gt; devtmpfs &lt;span class="o"&gt;(&lt;/span&gt;rw,nosuid,seclabel,size&lt;span class="o"&gt;=&lt;/span&gt;4096k,nr_inodes&lt;span class="o"&gt;=&lt;/span&gt;1048576,mode&lt;span class="o"&gt;=&lt;/span&gt;755,inode64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;devtmpfs on /proc/latency_stats &lt;span class="nb"&gt;type&lt;/span&gt; devtmpfs &lt;span class="o"&gt;(&lt;/span&gt;rw,nosuid,seclabel,size&lt;span class="o"&gt;=&lt;/span&gt;4096k,nr_inodes&lt;span class="o"&gt;=&lt;/span&gt;1048576,mode&lt;span class="o"&gt;=&lt;/span&gt;755,inode64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;devtmpfs on /proc/timer_list &lt;span class="nb"&gt;type&lt;/span&gt; devtmpfs &lt;span class="o"&gt;(&lt;/span&gt;rw,nosuid,seclabel,size&lt;span class="o"&gt;=&lt;/span&gt;4096k,nr_inodes&lt;span class="o"&gt;=&lt;/span&gt;1048576,mode&lt;span class="o"&gt;=&lt;/span&gt;755,inode64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tmpfs on /proc/scsi &lt;span class="nb"&gt;type&lt;/span&gt; tmpfs &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime,context&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"system_u:object_r:container_file_t:s0:c11,c680"&lt;/span&gt;,size&lt;span class="o"&gt;=&lt;/span&gt;0k,uid&lt;span class="o"&gt;=&lt;/span&gt;1000,gid&lt;span class="o"&gt;=&lt;/span&gt;1000,inode64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc/asound &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc/bus &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc/fs &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc/irq &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc/sys &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;proc on /proc/sysrq-trigger &lt;span class="nb"&gt;type&lt;/span&gt; proc &lt;span class="o"&gt;(&lt;/span&gt;ro,relatime&lt;span class="o"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;The default masked paths are /proc/acpi, /proc/kcore, /proc/keys, /proc/latency_stats, /proc/sched_debug, /proc/scsi, /proc/timer_list, /proc/timer_stats, /sys/firmware, and /sys/fs/selinux. The default paths that are read-only are /proc/asound, /proc/bus, /proc/fs, /proc/irq, /proc/sys, /proc/sysrq-trigger, /sys/fs/cgroup.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;And indeed, masking memory information is straightforward:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;podman run --detach --rm --replace&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; --name&lt;span class="o"&gt;=&lt;/span&gt;pmm-server -p 4443:443/tcp --security-opt&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;mask&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/proc/meminfo:/proc/vmstat docker.io/percona/pmm-server:2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;Kubernetes supports most of the above techniques as well, and different Kubernetes platforms apply different degrees of security hardening.&lt;/p&gt;
&lt;p&gt;My knowledge at the beginning of this road wasn’t as deep as it needed to be, but the conclusion stays the same: don’t assume that any data inside the container is related to the host.&lt;/p&gt;
&lt;p&gt;Looking at the techniques I didn’t know before, and at the overall trend of hardening container security, my conclusion is this: it is incorrect to assume that &lt;code&gt;node_exporter&lt;/code&gt; can read and provide meaningful data about the host from within a container.&lt;/p&gt;
&lt;p&gt;Container runtimes, tools, systems, and platforms have the full capability to shut down, fake, or abstract any data or access that &lt;code&gt;node_exporter&lt;/code&gt; needs. And since we can’t control those, assume you have incorrect data.&lt;/p&gt;
&lt;h2 id="links"&gt;Links&lt;/h2&gt;
&lt;h3 id="procfs"&gt;&lt;code&gt;procfs&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kernel.org/doc/html/latest/filesystems/proc.html" target="_blank" rel="noopener noreferrer"&gt;https://www.kernel.org/doc/html/latest/filesystems/proc.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fabiokung.com/2014/03/13/memory-inside-linux-containers/" target="_blank" rel="noopener noreferrer"&gt;https://fabiokung.com/2014/03/13/memory-inside-linux-containers/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="cgroup"&gt;&lt;code&gt;cgroup&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html" target="_blank" rel="noopener noreferrer"&gt;https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.man7.org/linux/man-pages/man7/cgroups.7.html" target="_blank" rel="noopener noreferrer"&gt;https://www.man7.org/linux/man-pages/man7/cgroups.7.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="namespaces"&gt;&lt;code&gt;namespaces&lt;/code&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://man7.org/linux/man-pages/man7/namespaces.7.html" target="_blank" rel="noopener noreferrer"&gt;https://man7.org/linux/man-pages/man7/namespaces.7.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html" target="_blank" rel="noopener noreferrer"&gt;https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="containers"&gt;Containers&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.suse.com/c/author/sgrunert/" target="_blank" rel="noopener noreferrer"&gt;https://www.suse.com/c/author/sgrunert/&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.suse.com/c/demystifying-containers-part-i-kernel-space/" target="_blank" rel="noopener noreferrer"&gt;https://www.suse.com/c/demystifying-containers-part-i-kernel-space/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.suse.com/c/demystifying-containers-part-iv-container-security/" target="_blank" rel="noopener noreferrer"&gt;https://www.suse.com/c/demystifying-containers-part-iv-container-security/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.redhat.com/sysadmin/users/steve-ovens" target="_blank" rel="noopener noreferrer"&gt;https://www.redhat.com/sysadmin/users/steve-ovens&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/sysadmin/7-linux-namespaces" target="_blank" rel="noopener noreferrer"&gt;https://www.redhat.com/sysadmin/7-linux-namespaces&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/sysadmin/building-container-namespaces" target="_blank" rel="noopener noreferrer"&gt;https://www.redhat.com/sysadmin/building-container-namespaces&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/sysadmin/mount-namespaces" target="_blank" rel="noopener noreferrer"&gt;https://www.redhat.com/sysadmin/mount-namespaces&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/sysadmin/pid-namespace" target="_blank" rel="noopener noreferrer"&gt;https://www.redhat.com/sysadmin/pid-namespace&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="linux-capabilities-and-seccomp-1"&gt;Linux Capabilities and Seccomp&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kernel.org/doc/html/latest/userspace-api/seccomp_filter.html" target="_blank" rel="noopener noreferrer"&gt;https://www.kernel.org/doc/html/latest/userspace-api/seccomp_filter.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://man7.org/linux/man-pages/man7/capabilities.7.html" target="_blank" rel="noopener noreferrer"&gt;https://man7.org/linux/man-pages/man7/capabilities.7.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/security/seccomp/" target="_blank" rel="noopener noreferrer"&gt;https://docs.docker.com/engine/security/seccomp/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/moby/moby/blob/master/profiles/seccomp/default.json" target="_blank" rel="noopener noreferrer"&gt;https://github.com/moby/moby/blob/master/profiles/seccomp/default.json&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/containers/common/blob/main/pkg/seccomp/seccomp.json" target="_blank" rel="noopener noreferrer"&gt;https://github.com/containers/common/blob/main/pkg/seccomp/seccomp.json&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="linux-security-modules-1"&gt;Linux Security Modules&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/security/apparmor/" target="_blank" rel="noopener noreferrer"&gt;https://docs.docker.com/engine/security/apparmor/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/security/apparmor/#nginx-example-profile" target="_blank" rel="noopener noreferrer"&gt;https://docs.docker.com/engine/security/apparmor/#nginx-example-profile&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="kubernetes-security"&gt;Kubernetes security&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/configure-pod-container/security-context/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/security/pod-security-standards/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tutorials/security/apparmor/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tutorials/security/apparmor/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tutorials/security/seccomp/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tutorials/security/seccomp/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" target="_blank" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>Kubernetes</category>
      <category>Monitoring</category>
      <category>PMM</category>
      <category>DBaaS</category>
      <category>Containers</category>
      <media:thumbnail url="https://percona.community/blog/2023/03/Container-Denys_hu_1ef8ffb063f5e8d.jpg"/>
      <media:content url="https://percona.community/blog/2023/03/Container-Denys_hu_b01ad9e12d922c4.jpg" medium="image"/>
    </item>
    <item>
      <title>Binding your application to the database in the Kubernetes cluster</title>
      <link>https://percona.community/blog/2023/01/24/k8s-app-db-binding/</link>
      <guid>https://percona.community/blog/2023/01/24/k8s-app-db-binding/</guid>
      <pubDate>Tue, 24 Jan 2023 00:00:00 UTC</pubDate>
      <description>dbaas-operator is Yet Another DBaaS Kubernetes Operator (need to suggest yadbko as a name) that tries to simplify and unify Database Cluster deployments by building a higher abstraction layer on top of Percona Kubernetes Operators.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://github.com/percona/dbaas-operator" target="_blank" rel="noopener noreferrer"&gt;dbaas-operator&lt;/a&gt; is Yet Another DBaaS Kubernetes Operator (need to suggest yadbko as a name) that tries to simplify and unify Database Cluster deployments by building a higher abstraction layer on top of &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;Percona Kubernetes Operators&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So it becomes much easier to deploy the DB cluster with &lt;code&gt;dbaas-operator&lt;/code&gt; and &lt;a href="https://docs.percona.com/percona-monitoring-and-management/get-started/dbaas.html" target="_blank" rel="noopener noreferrer"&gt;PMM DBaaS&lt;/a&gt; on top of it.&lt;/p&gt;
&lt;p&gt;But another part of the picture is the applications and workloads that need to connect to the deployed DB Clusters.&lt;/p&gt;
&lt;h2 id="services-and-applications"&gt;Services and Applications&lt;/h2&gt;
&lt;p&gt;On Kubernetes, applications can be deployed in many ways, either manually or as part of automated deployments.&lt;/p&gt;
&lt;p&gt;PMM DBaaS offers both: a UI to create DB Clusters and retrieve credentials, and an API to automate those actions.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;dbaas-operator&lt;/code&gt; adds a Kubernetes-native API to that mix as well.&lt;/p&gt;
&lt;p&gt;But both still require additional automation to join the application and the database in one deployment and provide a service to the end user.&lt;/p&gt;
&lt;p&gt;And that operation is a challenging task, as every application may expect credentials in a specific format: secrets with hardcoded structures, environment variables with custom names, secrets mounted at particular locations, etc.&lt;/p&gt;
&lt;p&gt;Database services add their own complexity to the picture by exposing their connections and secrets in whatever format is convenient or makes sense for them.&lt;/p&gt;
&lt;p&gt;Usually, some Continuous Delivery system or deployment package (Helm chart, etc.) ensures the correct deployment sequence and health of all components. So many custom pipelines and packages exist to connect a specific application to a database service.&lt;/p&gt;
&lt;p&gt;But for simplicity and scalability, it would be nice to have a standard for such connections, or software that automates them.&lt;/p&gt;
&lt;h2 id="service-binding"&gt;Service Binding&lt;/h2&gt;
&lt;p&gt;Connecting services is a well-known problem: Service Discovery (broker, registry, repository) in &lt;a href="https://en.wikipedia.org/wiki/Service-oriented_architecture" target="_blank" rel="noopener noreferrer"&gt;Service-Oriented Architecture&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://servicebinding.io/" target="_blank" rel="noopener noreferrer"&gt;servicebinding.io&lt;/a&gt; is another pattern to bind applications and workloads to the services (REST APIs, databases, event buses, etc.). This specification aims to create a Kubernetes-wide specification for communicating service secrets to workloads in a consistent way.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://redhat-developer.github.io/service-binding-operator/userguide/intro.html" target="_blank" rel="noopener noreferrer"&gt;Service Binding Operator&lt;/a&gt; glues services and Kubernetes workflows together. It does so for the services and applications that support ServiceBinding specifications and those that don’t.&lt;/p&gt;
&lt;p&gt;Out of the box, the Service Binding Operator supports &lt;a href="https://docs.percona.com/percona-operator-for-mysql/pxc/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MySQL based on Percona XtraDB Cluster&lt;/a&gt; (PXC), so we will deploy a Database Cluster with &lt;code&gt;dbaas-operator&lt;/code&gt; and connect it to a simple Java application. We will use the Spring PetClinic application, which supports &lt;a href="https://github.com/spring-cloud/spring-cloud-bindings" target="_blank" rel="noopener noreferrer"&gt;Spring Cloud Bindings&lt;/a&gt;.&lt;/p&gt;
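&lt;p&gt;Under the hood, the specification projects each binding into the workload as a directory of files: one file per entry, with the file name as the key and the file contents as the value. As a rough illustration (a minimal shell sketch of my own, not code from the spec or the operator; the temporary directory stands in for a real mounted binding such as &lt;code&gt;/bindings/spring-petclinic&lt;/code&gt;), a workload can enumerate such a binding like this:&lt;/p&gt;

```shell
# Mimic a projected binding with a throwaway directory:
# each regular file is one entry (file name = key, contents = value).
BINDING_DIR=$(mktemp -d)
printf 'mysql' > "$BINDING_DIR/type"
printf 'test-pxc-cluster-haproxy.default' > "$BINDING_DIR/host"

# Enumerate the binding the way a binding-aware client library would
for f in "$BINDING_DIR"/*; do
  [ -f "$f" ] || continue
  printf '%s=%s\n' "$(basename "$f")" "$(cat "$f")"
done
```

&lt;p&gt;Libraries such as Spring Cloud Bindings perform this enumeration for the application, keyed off the mandatory &lt;code&gt;type&lt;/code&gt; entry.&lt;/p&gt;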
&lt;h2 id="create-an-environment"&gt;Create an environment&lt;/h2&gt;
&lt;p&gt;We need a Kubernetes cluster, &lt;a href="https://olm.operatorframework.io/" target="_blank" rel="noopener noreferrer"&gt;Operator Lifecycle Manager&lt;/a&gt; (OLM) to install operators, and all the required operators installed. In this blog, I will use minikube and assume that &lt;code&gt;operator-sdk&lt;/code&gt; is installed on the system.&lt;/p&gt;
&lt;p&gt;Here is a &lt;a href="https://github.com/denisok/k8s-connect-app-to-db/blob/main/assets/bin/service_binding.sh" target="_blank" rel="noopener noreferrer"&gt;link to the script&lt;/a&gt; that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;sets up a multi-node Kubernetes cluster&lt;/li&gt;
&lt;li&gt;installs OLM&lt;/li&gt;
&lt;li&gt;installs the needed operators with the help of OLM&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a result, we get a cluster with all the needed operators:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get sub -A
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAMESPACE NAME PACKAGE SOURCE CHANNEL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;default dbaas-operator dbaas-operator dbaas-catalog stable-v0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;default percona-server-mongodb-operator percona-server-mongodb-operator dbaas-catalog stable-v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;default percona-xtradb-cluster-operator percona-xtradb-cluster-operator dbaas-catalog stable-v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;operators my-service-binding-operator service-binding-operator operatorhubio-catalog stable&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="create-database-cluster"&gt;Create Database Cluster&lt;/h2&gt;
&lt;p&gt;We will use &lt;code&gt;dbaas-operator&lt;/code&gt; to demonstrate how easy it is to create a DB Cluster with it:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat &lt;span class="s"&gt;&lt;&lt;EOF | kubectl apply -f -
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;apiVersion: dbaas.percona.com/v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;kind: DatabaseCluster
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; name: test-pxc-cluster
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; databaseType: pxc
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; databaseImage: percona/percona-xtradb-cluster:8.0.27-18.1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; databaseConfig: |
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; [mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; wsrep_provider_options="debug=1;gcache.size=1G"
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; wsrep_debug=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; wsrep_trx_fragment_unit='bytes'
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; wsrep_trx_fragment_size=3670016
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; secretsName: pxc-sample-secrets
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; clusterSize: 1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; loadBalancer:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; type: haproxy
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; exposeType: ClusterIP
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; size: 1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; dbInstance:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; cpu: "1"
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; memory: 1G
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; diskSize: 15G
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME SIZE READY STATUS ENDPOINT AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test-pxc-cluster &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; ready test-pxc-cluster-haproxy.default 5m&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="create-spring-petclinic-app-and-bind-it-to-the-database"&gt;Create Spring PetClinic app and bind it to the database&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl apply -f https://raw.githubusercontent.com/redhat-developer/service-binding-operator/master/samples/apps/spring-petclinic/petclinic-mysql-deployment.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;deployment.apps/spring-petclinic created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/spring-petclinic created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spring-petclinic-f7f587c5c-rvq2v 0/1 CrashLoopBackOff &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;17s ago&lt;span class="o"&gt;)&lt;/span&gt; 67s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Since we haven’t created a binding yet, the application can’t connect to the database and therefore fails.&lt;/p&gt;
&lt;p&gt;Let us bind the application to the database and verify that it works:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat &lt;span class="s"&gt;&lt;&lt;EOF | kubectl apply -f -
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;apiVersion: binding.operators.coreos.com/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;kind: ServiceBinding
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; name: spring-petclinic
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; namespace: default
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; services:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; - group: pxc.percona.com
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; version: v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; kind: PerconaXtraDBCluster
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; name: test-pxc-cluster
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; application:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; name: spring-petclinic
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; group: apps
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; version: v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt; resource: deployments
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get servicebindings
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY REASON AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spring-petclinic True ApplicationsBound 4m47s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get deployments
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY UP-TO-DATE AVAILABLE AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spring-petclinic 1/1 &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; 17m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube service spring-petclinic --url
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;http://192.168.39.215:31181&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What we have done above:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Created &lt;code&gt;kind: ServiceBinding&lt;/code&gt;, which takes PXC secrets and maps them to the application as mount points.&lt;/li&gt;
&lt;li&gt;As PetClinic supports the ServiceBinding spec through the Spring framework, it understands those mount points and connects to the database.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here is the mount point, laid out according to the ServiceBinding specification, that the Spring Cloud Bindings library parsed to connect to the database:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; deployment/spring-petclinic -- ls -la /bindings/spring-petclinic/..2023_01_20_21_33_47.4121788695
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;total &lt;span class="m"&gt;56&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x &lt;span class="m"&gt;2&lt;/span&gt; root root &lt;span class="m"&gt;320&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxrwxrwt &lt;span class="m"&gt;3&lt;/span&gt; root root &lt;span class="m"&gt;360&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;18&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 clustercheck
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;5&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 database
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;32&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 host
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;17&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 monitor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;17&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;18&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 password
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;4&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 port
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;7&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 provider
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;17&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 proxyadmin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;18&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 replication
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;18&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;5&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 &lt;span class="nb"&gt;type&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;4&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 username
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; root root &lt;span class="m"&gt;17&lt;/span&gt; Jan &lt;span class="m"&gt;20&lt;/span&gt; 21:33 xtrabackup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; deployment/spring-petclinic -- cat /bindings/spring-petclinic/..2023_01_20_21_33_47.4121788695/database
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; deployment/spring-petclinic -- cat /bindings/spring-petclinic/..2023_01_20_21_33_47.4121788695/host
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;test-pxc-cluster-haproxy.default&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
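&lt;p&gt;The entries above are exactly what a binding-aware client assembles into a connection string. As a sketch of that last step (the file names come from the listing above, but the values below are placeholders, and the temporary directory stands in for the mounted secret):&lt;/p&gt;

```shell
# Recreate the binding entries from the listing with placeholder values
B=$(mktemp -d)
printf 'root' > "$B/username"
printf 'secret' > "$B/password"
printf 'test-pxc-cluster-haproxy.default' > "$B/host"
printf '3306' > "$B/port"
printf 'mysql' > "$B/database"

# Assemble a MySQL DSN from the individual entry files
DSN="mysql://$(cat "$B/username"):$(cat "$B/password")@$(cat "$B/host"):$(cat "$B/port")/$(cat "$B/database")"
echo "$DSN"
# mysql://root:secret@test-pxc-cluster-haproxy.default:3306/mysql
```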
&lt;p&gt;Check the URL exposed by minikube:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/02/petclinic_hu_141342c1c5d61197.png 480w, https://percona.community/blog/2023/02/petclinic_hu_bba157abaa156f1e.png 768w, https://percona.community/blog/2023/02/petclinic_hu_e045fca286d3aa54.png 1400w"
src="https://percona.community/blog/2023/02/petclinic.png" alt="Pet Clinic" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;There are many ways to deploy applications and services and connect them.&lt;/p&gt;
&lt;p&gt;I am trying to collect some of them in my &lt;a href="https://github.com/denisok/k8s-connect-app-to-db" target="_blank" rel="noopener noreferrer"&gt;personal repo&lt;/a&gt; to understand the problem more deeply. Please suggest other approaches by commenting under this blog post or in the repo.&lt;/p&gt;
&lt;p&gt;The ServiceBinding specification is a standardized approach that scales easily and lets you connect Kubernetes workloads to database services.&lt;/p&gt;
&lt;p&gt;I will propose that &lt;code&gt;dbaas-operator&lt;/code&gt; implement that specification so it can expose different database engines (MySQL, MongoDB, PostgreSQL) in a standard way.&lt;/p&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>Labs</category>
      <category>Kubernetes</category>
      <category>Operators</category>
      <category>Databases</category>
      <category>PMM</category>
      <category>DBaaS</category>
      <category>Minikube</category>
      <media:thumbnail url="https://percona.community/blog/2023/02/petclinic_hu_1a8d4824c10dfb88.jpg"/>
      <media:content url="https://percona.community/blog/2023/02/petclinic_hu_a952755f21de9430.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.34 preview release</title>
      <link>https://percona.community/blog/2023/01/17/preview-release/</link>
      <guid>https://percona.community/blog/2023/01/17/preview-release/</guid>
      <pubDate>Tue, 17 Jan 2023 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.34 preview release Hello folks! Percona Monitoring and Management (PMM) 2.34 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-234-preview-release"&gt;Percona Monitoring and Management 2.34 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.34 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;You can find the Release Notes &lt;a href="https://two-34-0-pr-954.onrender.com/release-notes/2.34.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker-installation"&gt;Percona Monitoring and Management server docker installation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.34.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; To use the DBaaS functionality during the Percona Monitoring and Management preview release, add the following environment variable when starting PMM Server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.34.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.34 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-4747.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To install the pmm2-client package instead, enable the testing repository via percona-release:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via its package manager.&lt;/p&gt;
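&lt;p&gt;For example, on a Debian- or Ubuntu-based system (assuming the &lt;code&gt;apt&lt;/code&gt; package manager; adjust for your distribution), the installation might look like:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install pmm2-client&lt;/code&gt;&lt;/p&gt;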
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.34.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.34.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-08c09a75c3dd22956&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us in &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;https://forums.percona.com/&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <category>Releases</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Setting Up PMM For Monitoring Your Databases on Windows</title>
      <link>https://percona.community/blog/2023/01/16/setting-up-pmm-for-monitoring-your-databases-on-windows/</link>
      <guid>https://percona.community/blog/2023/01/16/setting-up-pmm-for-monitoring-your-databases-on-windows/</guid>
      <pubDate>Mon, 16 Jan 2023 00:00:00 UTC</pubDate>
      <description>Before deploying Percona Monitoring and Management (PMM) in production, you might want to test it or set up a development instance locally. Since many developers and DBAs have Windows desktops, I wanted to demonstrate how to set up PMM on Windows for an easy test environment. In this post, I’ll walk you through setting up PMM with Docker and WSL.</description>
      <content:encoded>&lt;p&gt;Before deploying Percona Monitoring and Management (PMM) in production, you might want to test it or set up a development instance locally. Since many developers and DBAs have Windows desktops, I wanted to demonstrate how to set up PMM on Windows for an easy test environment. In this post, I’ll walk you through setting up PMM with Docker and WSL.&lt;/p&gt;
&lt;p&gt;If you’re a Linux user, check the blog post I wrote on &lt;a href="https://percona.community/blog/2022/08/05/setting-up-pmm-for-monitoring-mysql-on-a-local-environment/" target="_blank" rel="noopener noreferrer"&gt;Setting up PMM for monitoring MySQL in a local environment&lt;/a&gt;. There you can find instructions for installing Percona Monitoring and Management (PMM) on Linux and how to set it up for monitoring a MySQL instance. Otherwise, continue reading to get PMM up and running on Windows.&lt;/p&gt;
&lt;h2 id="getting-started-with-pmm-on-windows"&gt;Getting Started With PMM on Windows&lt;/h2&gt;
&lt;p&gt;If you’re a Windows user and want to try PMM, the recommended way for installing both the server and client would be to use the official Docker images and follow these guides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/client/index.html#docker" target="_blank" rel="noopener noreferrer"&gt;Client&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before running the commands in those guides, you should install Docker Desktop and Windows Subsystem for Linux (WSL). These instructions should work for users who are on current versions of Windows 10 and Windows 11. For installing WSL, follow the &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/install" target="_blank" rel="noopener noreferrer"&gt;instructions&lt;/a&gt; on the Microsoft Learn website. Then, get the &lt;a href="https://docs.docker.com/get-docker/" target="_blank" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; installer. Now you’re ready to install and configure PMM.&lt;/p&gt;
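&lt;p&gt;As a quick sketch (assuming a current build of Windows 10 or 11), installing WSL from an elevated PowerShell prompt is typically a single command; see the Microsoft Learn instructions linked above for details:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;wsl --install&lt;/code&gt;&lt;/p&gt;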
&lt;h2 id="pmm-server"&gt;PMM Server&lt;/h2&gt;
&lt;p&gt;As stated in the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, you can store data from your PMM in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Docker volume (Preferred method)&lt;/li&gt;
&lt;li&gt;Data container&lt;/li&gt;
&lt;li&gt;Host directory&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The preferred method is also recommended for Windows. Open PowerShell and execute the instructions in the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html#run-docker-with-volume" target="_blank" rel="noopener noreferrer"&gt;Run Docker with volume&lt;/a&gt; section. I’ve reproduced the steps here to save you time:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Get the Docker image:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker pull percona/pmm-server:2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Create a volume:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker volume create pmm-data&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Run the image:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run --detach --restart always \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--publish 443:443 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-v pmm-data:/srv \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--name pmm-server \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona/pmm-server:2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="4"&gt;
&lt;li&gt;Change the password for the default &lt;code&gt;admin&lt;/code&gt; user:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker &lt;span class="nb"&gt;exec&lt;/span&gt; -t pmm-server change-admin-password &lt;new_password&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once PMM Server is installed, open the browser and visit https://localhost. You will see the PMM login screen.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/01/pmm-login-screen_hu_56ea1277e5f7c4b7.png 480w, https://percona.community/blog/2023/01/pmm-login-screen_hu_75ec84a085ea39ca.png 768w, https://percona.community/blog/2023/01/pmm-login-screen_hu_b889a7e009f52ded.png 1400w"
src="https://percona.community/blog/2023/01/pmm-login-screen.png" alt="PMM Login Screen" /&gt;&lt;figcaption&gt;PMM Login Screen&lt;/figcaption&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now that the server is up and running, you need its IP address before connecting the client to it. To get the IP address, you first need the container's name or ID, which you can find by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker ps&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;docker ps&lt;/code&gt; command will give you a list of the containers running on your system, as follows:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;fd988ad761aa percona/pmm-server:2 "/opt/entrypoint.sh" 2 months ago Up 11 minutes (healthy) 80/tcp, 0.0.0.0:443-&gt;443/tcp pmm-server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Take note of the container ID or name of the PMM Server container. Then, execute this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' your_container&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;your_container&lt;/code&gt; with the container ID or name you copied previously.&lt;/p&gt;
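&lt;p&gt;Since the server container was started with &lt;code&gt;--name pmm-server&lt;/code&gt;, you can also pass the name directly:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pmm-server&lt;/code&gt;&lt;/p&gt;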
&lt;h2 id="pmm-client"&gt;PMM Client&lt;/h2&gt;
&lt;p&gt;Go to the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/client/index.html#docker" target="_blank" rel="noopener noreferrer"&gt;Set Up PMM Client&lt;/a&gt; section in the documentation and follow the first two steps. All of these commands are executed from PowerShell.&lt;/p&gt;
&lt;p&gt;In Step 3, you need to specify the IP address of the PMM Server by setting up the &lt;code&gt;PMM_SERVER&lt;/code&gt; environment variable:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nv"&gt;$env&lt;/span&gt;:PMM_SERVER&lt;span class="o"&gt;=&lt;/span&gt;’X.X.X.X:443’&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;X.X.X.X&lt;/code&gt; with the server’s IP address. Then, initialize the container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--rm \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--name pmm-client \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e PMM_AGENT_SERVER_ADDRESS=$env:PMM_SERVER \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e PMM_AGENT_SERVER_USERNAME=admin \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e PMM_AGENT_SERVER_PASSWORD=admin \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e PMM_AGENT_SERVER_INSECURE_TLS=1 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e PMM_AGENT_SETUP=1 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-e PMM_AGENT_CONFIG_FILE=config/pmm-agent.yaml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--volumes-from pmm-client-data \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona/pmm-client:2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The default value of &lt;code&gt;PMM_AGENT_SERVER_PASSWORD&lt;/code&gt; is &lt;code&gt;admin&lt;/code&gt;. Replace it with the password you assigned when the server was configured.&lt;/p&gt;
&lt;h2 id="configure-your-database"&gt;Configure Your Database&lt;/h2&gt;
&lt;p&gt;Now that the client is connected to the server, you must configure PMM for monitoring your database. Follow the instructions below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MySQL
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mysql-installation"&gt;Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mysql-configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;PostgreSQL
&lt;ul&gt;
&lt;li&gt;&lt;a href="#postgresql-installation"&gt;Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#postgresql-configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;MongoDB
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mongodb-installation"&gt;Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mongodb-configuration"&gt;Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once PMM is configured, the Home Dashboard will show the databases that are being monitored. For more information and advanced configuration, check the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/index.html" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/01/pmm-home-dashboard_hu_39234b2ff6de287c.png 480w, https://percona.community/blog/2023/01/pmm-home-dashboard_hu_8f62f17f42acb0a9.png 768w, https://percona.community/blog/2023/01/pmm-home-dashboard_hu_7e88b1c2319e1589.png 1400w"
src="https://percona.community/blog/2023/01/pmm-home-dashboard.png" alt="PMM Home Dashboard" /&gt;&lt;figcaption&gt;PMM Home Dashboard&lt;/figcaption&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="mysql-installation"&gt;MySQL Installation&lt;/h3&gt;
&lt;p&gt;If you already have a MySQL instance running, skip the installation process and continue with the &lt;a href="#mysql-configuration"&gt;configuration&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On Windows, you can install Percona Server for MySQL on Ubuntu running under WSL, but it’s better to use the official &lt;a href="https://hub.docker.com/r/percona/percona-server/" target="_blank" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt;, so that the MySQL server runs on the same Docker network as PMM.&lt;/p&gt;
&lt;p&gt;For installing MySQL using Docker, follow the instructions in the &lt;a href="https://docs.percona.com/percona-server/8.0/installation/docker.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;You need to start the container with the latest version of Percona Server for MySQL 8.0:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run -d \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --name ps \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -e MYSQL_ROOT_PASSWORD=root \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; percona/percona-server:8.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;ps&lt;/code&gt; is the name of the container, and the default password for the &lt;code&gt;root&lt;/code&gt; user is &lt;code&gt;root&lt;/code&gt;. You can change these values according to your needs.&lt;/p&gt;
&lt;h3 id="mysql-configuration"&gt;MySQL Configuration&lt;/h3&gt;
&lt;p&gt;Once Percona Server for MySQL is running, you need to get its IP address by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker ps&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1fb4ddb35e48 percona/pmm-client:2 &lt;span class="s2"&gt;"/usr/local/percona/…"&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; minutes ago Up &lt;span class="m"&gt;2&lt;/span&gt; minutes pmm-client&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Copy the container ID or name of the Percona Server for MySQL container. Then, execute this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker inspect -f &lt;span class="s1"&gt;'{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'&lt;/span&gt; your_container&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;your_container&lt;/code&gt; with the value you copied previously.&lt;/p&gt;
&lt;p&gt;The IP address of the PMM Client container is also needed.&lt;/p&gt;
&lt;p&gt;To configure PMM for monitoring MySQL, you need to create a PMM user. First, log into MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run -it --rm percona mysql -h MYSQL_SERVER -uroot -p&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;MYSQL_SERVER&lt;/code&gt; is the IP address of the Percona Server for MySQL container.&lt;/p&gt;
&lt;p&gt;Then, execute the following SQL statements:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'pmm'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;IDENTIFIED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;BY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'pass'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;WITH&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MAX_USER_CONNECTIONS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SUPER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;REPLICATION&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CLIENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RELOAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKUP_ADMIN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'pmm'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;pass&lt;/code&gt; with your desired password, and &lt;code&gt;localhost&lt;/code&gt; with the IP address of the PMM Client container.&lt;/p&gt;
&lt;p&gt;And finally, register the MySQL server for monitoring:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker &lt;span class="nb"&gt;exec&lt;/span&gt; pmm-client pmm-admin add mysql --username&lt;span class="o"&gt;=&lt;/span&gt;pmm --password&lt;span class="o"&gt;=&lt;/span&gt;pass --host MYSQL_SERVER --query-source&lt;span class="o"&gt;=&lt;/span&gt;perfschema&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;MYSQL_SERVER&lt;/code&gt; with the IP address of the Percona Server for MySQL container, and &lt;code&gt;pass&lt;/code&gt; with the password of your &lt;code&gt;pmm&lt;/code&gt; user.&lt;/p&gt;
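&lt;p&gt;As an optional check (not part of the original steps), you can list the services registered with the agent to confirm the MySQL instance was added:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;docker exec pmm-client pmm-admin list&lt;/code&gt;&lt;/p&gt;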
&lt;h3 id="postgresql-installation"&gt;PostgreSQL Installation&lt;/h3&gt;
&lt;p&gt;If you already have a PostgreSQL instance running, skip the installation process and continue with the &lt;a href="#postgresql-configuration"&gt;configuration&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On Windows, you can install PostgreSQL using the Windows installer or install it on Ubuntu running under WSL, but it’s better to install it using the official image provided by the PostgreSQL project.&lt;/p&gt;
&lt;p&gt;You need to start the container with the latest version of PostgreSQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run --name postgres -e &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;password -d postgres&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;password&lt;/code&gt; is the password for the default &lt;code&gt;postgres&lt;/code&gt; user. Replace this value according to your needs.&lt;/p&gt;
&lt;h3 id="postgresql-configuration"&gt;PostgreSQL Configuration&lt;/h3&gt;
&lt;p&gt;Once PostgreSQL is running, you need to get its IP address by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker ps&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;0460c671db12 postgres &lt;span class="s2"&gt;"docker-entrypoint.s…"&lt;/span&gt; &lt;span class="m"&gt;6&lt;/span&gt; days ago Up &lt;span class="m"&gt;48&lt;/span&gt; seconds 5432/tcp postgres&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Copy the container ID or name of the PostgreSQL container. Then, execute this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker inspect -f &lt;span class="s1"&gt;'{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'&lt;/span&gt; your_container&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;your_container&lt;/code&gt; with the value you copied previously.&lt;/p&gt;
&lt;p&gt;The IP address of the PMM Client container is also needed.&lt;/p&gt;
&lt;p&gt;To configure PMM to monitor PostgreSQL, we need to create a PMM user. First, log into PostgreSQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it postgres psql --user postgres&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then, execute the following SQL statement:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;WITH&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SUPERUSER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ENCRYPTED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PASSWORD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'&lt;password&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;&amp;lt;password&amp;gt;&lt;/code&gt; with your desired password.&lt;/p&gt;
&lt;p&gt;And finally, register the PostgreSQL server for monitoring:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker &lt;span class="nb"&gt;exec&lt;/span&gt; pmm-client pmm-admin add postgresql --username&lt;span class="o"&gt;=&lt;/span&gt;pmm --password&lt;span class="o"&gt;=&lt;/span&gt;pass --host POSTGRESQL_SERVER --query-source&lt;span class="o"&gt;=&lt;/span&gt;perfschema&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;POSTGRESQL_SERVER&lt;/code&gt; is a placeholder: replace it with the IP address of the PostgreSQL container, and replace &lt;code&gt;pass&lt;/code&gt; with the password of your &lt;code&gt;pmm&lt;/code&gt; user.&lt;/p&gt;
&lt;h3 id="mongodb-installation"&gt;MongoDB Installation&lt;/h3&gt;
&lt;p&gt;If you already have a MongoDB instance running, skip the installation process and continue with the &lt;a href="#mongodb-configuration"&gt;configuration&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On Windows, you can install Percona Server for MongoDB on Ubuntu running under WSL, but it’s better to use the official &lt;a href="https://hub.docker.com/r/percona/percona-server-mongodb" target="_blank" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt;, as the MongoDB server will then run on the same network as PMM.&lt;/p&gt;
&lt;p&gt;For installing MongoDB using Docker, follow the instructions in the &lt;a href="https://docs.percona.com/percona-server-for-mongodb/6.0/install/docker.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;You need to start the container with the latest version of Percona Server for MongoDB 6.0:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run --name psmdb -d percona/percona-server-mongodb:6.0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="mongodb-configuration"&gt;MongoDB Configuration&lt;/h3&gt;
&lt;p&gt;Once Percona Server for MongoDB is running, you need to get its IP address by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker ps&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-26" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-26"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2c3d291535b3 percona/percona-server-mongodb:6.0 &lt;span class="s2"&gt;"/entrypoint.sh mong…"&lt;/span&gt; &lt;span class="m"&gt;6&lt;/span&gt; days ago Up &lt;span class="m"&gt;27&lt;/span&gt; minutes 27017/tcp psmdb&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Copy the container ID or name of the Percona Server for MongoDB container. Then, execute this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-27" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-27"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker inspect -f &lt;span class="s1"&gt;'{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'&lt;/span&gt; your_container&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;your_container&lt;/code&gt; with the value you copied previously.&lt;/p&gt;
&lt;p&gt;The IP address of the PMM Client container is also needed.&lt;/p&gt;
&lt;p&gt;To configure PMM to monitor MongoDB, we need to create a PMM user. First, connect to the &lt;code&gt;admin&lt;/code&gt; database in MongoDB using the MongoDB Shell (mongosh):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-28" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-28"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker run -it --link psmdb --rm percona/percona-server-mongodb:6.0 mongosh mongodb://MONGODB_SERVER:27017/admin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;MONGODB_SERVER&lt;/code&gt; is the IP address of your MongoDB server.&lt;/p&gt;
&lt;p&gt;Then, create the user for PMM, executing the following instructions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-29" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-29"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;db.createRole({
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "role":"explainRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "privileges":[
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "resource":{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "db":"",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "collection":""
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "actions":[
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "collStats",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "dbHash",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "dbStats",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "find",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "listIndexes",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "listCollections"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "roles":[]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;})&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-30" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-30"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;db.getSiblingDB("admin").createUser({
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "user":"pmm",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "pwd":"&lt;password&gt;",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "roles":[
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "role":"explainRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "db":"admin"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "role":"clusterMonitor",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "db":"admin"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "role":"read",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "db":"local"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;})&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;&amp;lt;password&amp;gt;&lt;/code&gt; with the password you want to assign to the &lt;code&gt;pmm&lt;/code&gt; user.&lt;/p&gt;
&lt;p&gt;And finally, register the MongoDB server for monitoring:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-31" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-31"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker &lt;span class="nb"&gt;exec&lt;/span&gt; pmm-client pmm-admin add mongodb --username&lt;span class="o"&gt;=&lt;/span&gt;pmm --password&lt;span class="o"&gt;=&lt;/span&gt;pass --host MONGODB_SERVER&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;MONGODB_SERVER&lt;/code&gt; is a placeholder: replace it with the IP address of the MongoDB container, and replace &lt;code&gt;pass&lt;/code&gt; with the password of your &lt;code&gt;pmm&lt;/code&gt; user.&lt;/p&gt;
      <author>Mario García</author>
      <category>PMM</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <category>MongoDB</category>
      <category>Monitoring</category>
      <media:thumbnail url="https://percona.community/blog/2023/01/pmm-login-screen_hu_782bdd089cea0085.jpg"/>
      <media:content url="https://percona.community/blog/2023/01/pmm-login-screen_hu_ca2504a4a3694f9d.jpg" medium="image"/>
    </item>
    <item>
      <title>How To Generate Test Data for Your Database Project With Python</title>
      <link>https://percona.community/blog/2023/01/09/how-to-generate-test-data-for-your-database-project-with-python/</link>
      <guid>https://percona.community/blog/2023/01/09/how-to-generate-test-data-for-your-database-project-with-python/</guid>
      <pubDate>Mon, 09 Jan 2023 00:00:00 UTC</pubDate>
      <description>If you need test data for the database of your project, you can get a dataset from Kaggle or use a data generator. In the first case, if you need to process the data before inserting it into the database, you can use Pandas, a widely used Python library for data analysis. This library supports different formats, including CSV and JSON, and it also provides a method for inserting data into a SQL database.</description>
      <content:encoded>&lt;p&gt;If you need test data for the database of your project, you can get a dataset from &lt;a href="https://kaggle.com" target="_blank" rel="noopener noreferrer"&gt;Kaggle&lt;/a&gt; or use a data generator. In the first case, if you need to process the data before inserting it into the database, you can use &lt;a href="https://pandas.pydata.org/" target="_blank" rel="noopener noreferrer"&gt;Pandas&lt;/a&gt;, a widely used Python library for data analysis. This library supports different formats, including CSV and JSON, and it also provides a method for inserting data into a SQL database.&lt;/p&gt;
&lt;p&gt;If you choose a data generator instead, you can find one for &lt;a href="https://github.com/Percona-Lab/mysql_random_data_load" target="_blank" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt; in one of the repositories on our &lt;a href="https://github.com/Percona-Lab" target="_blank" rel="noopener noreferrer"&gt;Percona Lab&lt;/a&gt; GitHub account. Are you using other database technologies? You can follow the guides I already published where I explain how to create your own data generator for &lt;a href="https://www.percona.com/blog/how-to-generate-test-data-for-mysql-with-python/" target="_blank" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt; (it could work for PostgreSQL) and &lt;a href="https://www.percona.com/blog/how-to-generate-test-data-for-mongodb-with-python/" target="_blank" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you create your own data generator, this is the process to follow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Generate fake data using Faker&lt;/li&gt;
&lt;li&gt;Store generated data in a Pandas DataFrame&lt;/li&gt;
&lt;li&gt;Establish a connection to your database&lt;/li&gt;
&lt;li&gt;Insert the content of the DataFrame into the database&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;h3 id="dependencies"&gt;Dependencies&lt;/h3&gt;
&lt;p&gt;Make sure all the dependencies are installed before creating the Python script that will generate the data for your project.&lt;/p&gt;
&lt;p&gt;You can create a &lt;code&gt;requirements.txt&lt;/code&gt; file with the following content:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pandas
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tqdm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;faker&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or if you’re using Anaconda, create an &lt;code&gt;environment.yml&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;dependencies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;python=3.10&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;pandas&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;tqdm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;faker&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can change the Python version as this script has been proven to work with these versions of Python: 3.7, 3.8, 3.9, 3.10, and 3.11.&lt;/p&gt;
&lt;p&gt;Depending on the database technology you’re using, you must add the corresponding package to your &lt;code&gt;requirements.txt&lt;/code&gt; or &lt;code&gt;environment.yml&lt;/code&gt; file:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MySQL → &lt;code&gt;PyMySQL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;PostgreSQL → &lt;code&gt;psycopg2&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;MongoDB → &lt;code&gt;pymongo&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
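&lt;p&gt;For example, combining the base dependencies with the MySQL driver, the &lt;code&gt;requirements.txt&lt;/code&gt; file would read:&lt;/p&gt;

```text
pandas
tqdm
faker
PyMySQL
```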
&lt;p&gt;Run the following command if you’re using &lt;code&gt;pip&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pip install -r requirements.txt&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or run the following statement to configure the project environment when using Anaconda:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;conda env create -f environment.yml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="database"&gt;Database&lt;/h3&gt;
&lt;p&gt;Now that you have the dependencies installed, you must create a database named &lt;code&gt;company&lt;/code&gt; if you’re using MySQL or PostgreSQL.&lt;/p&gt;
&lt;p&gt;Log into MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysql -u root -p&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;root&lt;/code&gt; with your username, if necessary, and add the &lt;code&gt;-h&lt;/code&gt; option with the IP address or URL of your MySQL server instance if it isn’t running locally.&lt;/p&gt;
&lt;p&gt;Or log into PostgreSQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo su postgres
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ psql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;and create the &lt;code&gt;company&lt;/code&gt; database:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;company&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You don’t need to create the MongoDB database beforehand; MongoDB creates it automatically when data is first inserted.&lt;/p&gt;
&lt;h2 id="creating-a-pandas-dataframe"&gt;Creating a Pandas DataFrame&lt;/h2&gt;
&lt;p&gt;Before creating the script, it’s important to know that we will implement multiprocessing to optimize the script’s execution time.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.python.org/3/library/multiprocessing.html" target="_blank" rel="noopener noreferrer"&gt;Multiprocessing&lt;/a&gt; is a way to take advantage of all the CPU cores available on the computer where the script is running. In Python, single-CPU use is caused by the &lt;a href="https://realpython.com/python-gil/" target="_blank" rel="noopener noreferrer"&gt;global interpreter lock&lt;/a&gt;, which allows only one thread to control the Python interpreter at any given time. With multiprocessing, the workload is divided across all available CPU cores. For more information, see &lt;a href="https://urban-institute.medium.com/using-multiprocessing-to-make-python-code-faster-23ea5ef996ba" target="_blank" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now, let’s start creating our own data generator. First, a &lt;code&gt;modules&lt;/code&gt; directory needs to be created, and inside the directory, we will create a module named &lt;code&gt;dataframe.py&lt;/code&gt;. This module will be imported later into our main script, and this is where we define the method that will generate the data.&lt;/p&gt;
&lt;p&gt;You need to import the required libraries and methods:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;multiprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cpu_count&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;pd&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;tqdm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tqdm&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;faker&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Faker&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pandas&lt;/code&gt;. Data generated with Faker will be stored in a Pandas DataFrame before being imported into the database.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tqdm()&lt;/code&gt;. This function wraps the loop with a progress bar that shows the progress of the DataFrame creation.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Faker()&lt;/code&gt;. It’s the generator from the faker library.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cpu_count()&lt;/code&gt;. This is a method from the multiprocessing module that will return the number of cores available.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Then, a faker generator is created and initialized by calling the &lt;code&gt;Faker()&lt;/code&gt; method. This is required to generate data by accessing the properties of the Faker library.&lt;/p&gt;
&lt;p&gt;We also determine the number of CPU cores available by calling the &lt;code&gt;cpu_count()&lt;/code&gt; method and assigning the result to the &lt;code&gt;num_cores&lt;/code&gt; variable.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;fake&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Faker&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;num_cores&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cpu_count&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;num_cores&lt;/code&gt; stores the value returned by &lt;code&gt;cpu_count()&lt;/code&gt; minus one: we leave one core free to avoid freezing the computer.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60000&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;num_cores&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;tqdm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;desc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'Creating DataFrame'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'first_name'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'last_name'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'job'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'company'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;company&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'address'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'city'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'country'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loc&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'email'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then, we define the &lt;code&gt;create_dataframe()&lt;/code&gt; function, where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;x&lt;/code&gt; determines the number of iterations of the &lt;code&gt;for&lt;/code&gt; loop where the DataFrame is created: the total record count divided by the number of cores.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;data&lt;/code&gt; is an empty DataFrame that will later be filled with data generated with Faker.&lt;/li&gt;
&lt;li&gt;Pandas &lt;a href="https://www.geeksforgeeks.org/python-pandas-dataframe-loc/" target="_blank" rel="noopener noreferrer"&gt;DataFrame.loc&lt;/a&gt; attribute provides access to a group of rows and columns by their label(s). In each iteration, a row of data is added to the DataFrame and this attribute allows assigning values to each column.&lt;/li&gt;
&lt;/ul&gt;
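&lt;p&gt;Row-by-row assignment with &lt;code&gt;.loc&lt;/code&gt; is simple to read but gets slow as the frame grows. As a sketch of a faster alternative (not the approach the module above uses), you can collect the rows as a list of dictionaries and construct the DataFrame once; the &lt;code&gt;make_row()&lt;/code&gt; helper below is a hypothetical stand-in for the Faker calls:&lt;/p&gt;

```python
import pandas as pd

def make_row(i):
    # Hypothetical stand-in for fake.first_name(), fake.last_name(), etc.
    return {"first_name": f"name_{i}", "last_name": f"surname_{i}"}

# Build all rows first, then construct the DataFrame in one call
rows = [make_row(i) for i in range(1_000)]
data = pd.DataFrame(rows)  # one construction instead of 1,000 .loc writes
print(data.shape)  # (1000, 2)
```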
&lt;p&gt;The DataFrame that is created after calling this function will have the following columns:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# Column Non-Null Count Dtype&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--- ------ -------------- -----
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;0&lt;/span&gt; first_name &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;1&lt;/span&gt; last_name &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;2&lt;/span&gt; job &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;3&lt;/span&gt; company &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;4&lt;/span&gt; address &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;5&lt;/span&gt; country &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;6&lt;/span&gt; city &lt;span class="m"&gt;60000&lt;/span&gt; non-null object
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="m"&gt;7&lt;/span&gt; email &lt;span class="m"&gt;60000&lt;/span&gt; non-null object&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The script generates 60,000 records in total, but you can adapt it to your project by changing the value used to compute the &lt;code&gt;x&lt;/code&gt; variable.&lt;/p&gt;
&lt;h2 id="connection-to-the-database"&gt;Connection to the Database&lt;/h2&gt;
&lt;h3 id="mysql-and-postgresql"&gt;MySQL and PostgreSQL&lt;/h3&gt;
&lt;p&gt;Before inserting the data generated with Faker, we need to establish a connection to the database; for this, the SQLAlchemy library will be used.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.sqlalchemy.org/" target="_blank" rel="noopener noreferrer"&gt;SQLAlchemy&lt;/a&gt; is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sqlalchemy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sqlalchemy.orm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sessionmaker&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"mysql+pymysql://user:password@localhost/company"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;Session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sessionmaker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From SQLAlchemy, we import the &lt;code&gt;create_engine()&lt;/code&gt; and &lt;code&gt;sessionmaker()&lt;/code&gt; methods. The first one connects to the database, and the second one creates a session bound to the engine object.&lt;/p&gt;
&lt;p&gt;Don’t forget to replace &lt;code&gt;user&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt;, and &lt;code&gt;localhost&lt;/code&gt; with your authentication details. Save this code in the &lt;code&gt;modules&lt;/code&gt; directory and name it as &lt;code&gt;base.py&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;For PostgreSQL, replace:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"mysql+pymysql://user:password@localhost/company"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;With:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"postgresql+psycopg2://user:password@localhost:5432/company"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
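&lt;p&gt;Before generating 60,000 rows, it can be worth checking that your connection URL works by opening a throwaway connection and running a trivial query. The sketch below uses an in-memory SQLite URL so it runs without any server; swap in your &lt;code&gt;mysql+pymysql://&lt;/code&gt; or &lt;code&gt;postgresql+psycopg2://&lt;/code&gt; URL from &lt;code&gt;base.py&lt;/code&gt;:&lt;/p&gt;

```python
from sqlalchemy import create_engine, text

# Replace with your MySQL or PostgreSQL URL from base.py
engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    # A trivial round trip proves the driver and credentials work
    value = conn.execute(text("SELECT 1")).scalar()
print(value)  # 1
```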
&lt;h3 id="database-schema-definition"&gt;Database Schema Definition&lt;/h3&gt;
&lt;p&gt;For MySQL and PostgreSQL, the schema of the database can be defined through the &lt;a href="https://docs.sqlalchemy.org/en/14/core/schema.html" target="_blank" rel="noopener noreferrer"&gt;Schema Definition Language&lt;/a&gt; provided by SQLAlchemy, but since we’re only creating one table and importing the DataFrame by calling the Pandas &lt;code&gt;to_sql()&lt;/code&gt; method, this is not necessary.&lt;/p&gt;
&lt;p&gt;When calling Pandas &lt;code&gt;to_sql()&lt;/code&gt; method, we define the schema as follows:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sqlalchemy.types&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"first_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"last_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"job"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"country"&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then we pass the &lt;code&gt;schema&lt;/code&gt; variable as a parameter to this method.&lt;/p&gt;
&lt;p&gt;Save this code in the &lt;code&gt;modules&lt;/code&gt; directory with the name &lt;code&gt;schema.py&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="mongodb"&gt;MongoDB&lt;/h3&gt;
&lt;p&gt;Before inserting the data previously generated with Faker, we need to establish a connection to the database, and for doing this the &lt;a href="https://pypi.org/project/pymongo/" target="_blank" rel="noopener noreferrer"&gt;PyMongo&lt;/a&gt; library will be used.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;pymongo&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;uri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongodb://user:password@localhost:27017/"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From PyMongo, we import the &lt;code&gt;MongoClient&lt;/code&gt; class.&lt;/p&gt;
&lt;p&gt;Don’t forget to replace &lt;code&gt;user&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt;, &lt;code&gt;localhost&lt;/code&gt;, and the port (27017) with your connection details. Save this code in the &lt;code&gt;modules&lt;/code&gt; directory and name it &lt;code&gt;base.py&lt;/code&gt;.&lt;/p&gt;
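&lt;p&gt;Unlike the SQL path, MongoDB receives documents rather than table rows, so at insert time the DataFrame is first converted into a list of dictionaries. A sketch of that conversion (the &lt;code&gt;company&lt;/code&gt;/&lt;code&gt;employees&lt;/code&gt; names are assumptions, and the &lt;code&gt;insert_many()&lt;/code&gt; call is shown commented out because it needs a live server):&lt;/p&gt;

```python
import pandas as pd

# Tiny stand-in for the Faker-generated DataFrame
data = pd.DataFrame({"first_name": ["Ada"], "last_name": ["Lovelace"]})

# Each DataFrame row becomes one MongoDB document
records = data.to_dict(orient="records")
print(records)  # [{'first_name': 'Ada', 'last_name': 'Lovelace'}]

# With a live server and the client from base.py, you would then run:
# client["company"]["employees"].insert_many(records)
```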
&lt;h2 id="generating-your-data"&gt;Generating Your Data&lt;/h2&gt;
&lt;h3 id="mysql-and-postgresql-1"&gt;MySQL and PostgreSQL&lt;/h3&gt;
&lt;p&gt;All the required modules are now ready to be imported into the main script, so it’s time to create the &lt;code&gt;sql.py&lt;/code&gt; script. First, import the required libraries:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;multiprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pool&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;multiprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cpu_count&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;pd&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From multiprocessing, &lt;code&gt;Pool()&lt;/code&gt; and &lt;code&gt;cpu_count()&lt;/code&gt; are required. The &lt;a href="https://superfastpython.com/multiprocessing-pool-python/#:~:text=The%20Python%20Multiprocessing%20Pool%20class,Processes%20and%20Threads%20in%20Python." target="_blank" rel="noopener noreferrer"&gt;Python Multiprocessing Pool&lt;/a&gt; class allows you to create and manage process pools in Python.&lt;/p&gt;
&lt;p&gt;Then, import the modules previously created:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;modules.dataframe&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_dataframe&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;modules.schema&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;schema&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;modules.base&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now we create the multiprocessing pool, configured to use all available CPU cores minus one. Each worker calls the &lt;code&gt;create_dataframe()&lt;/code&gt; function and builds a DataFrame with its share of the 60,000 records (the &lt;code&gt;x&lt;/code&gt; value computed inside the function). Once every call has finished, all the DataFrames are concatenated into a single one.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="vm"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"__main__"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;num_cores&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cpu_count&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;create_dataframe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_cores&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;to_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'employees'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;if_exists&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'append'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And finally, we insert the DataFrame into the MySQL database by calling the &lt;code&gt;to_sql()&lt;/code&gt; method. All the data is stored in a table named &lt;code&gt;employees&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The table &lt;code&gt;employees&lt;/code&gt; is created without a primary key, so we execute the following SQL statement to add an &lt;code&gt;id&lt;/code&gt; column that is set to be the primary key of the table.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"ALTER TABLE employees ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For PostgreSQL, replace this line:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"ALTER TABLE employees ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;With:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"ALTER TABLE employees ADD COLUMN id SERIAL PRIMARY KEY;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="mongodb-1"&gt;MongoDB&lt;/h3&gt;
&lt;p&gt;All the required modules are now ready to be imported into the main script, so it’s time to create the &lt;code&gt;mongodb.py&lt;/code&gt; script. First, import the required libraries:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;multiprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pool&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;multiprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cpu_count&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;pd&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From multiprocessing, &lt;code&gt;Pool()&lt;/code&gt; and &lt;code&gt;cpu_count()&lt;/code&gt; are required. The &lt;a href="https://superfastpython.com/multiprocessing-pool-python/#:~:text=The%20Python%20Multiprocessing%20Pool%20class,Processes%20and%20Threads%20in%20Python." target="_blank" rel="noopener noreferrer"&gt;Python Multiprocessing Pool&lt;/a&gt; class allows you to create and manage process pools in Python.&lt;/p&gt;
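&lt;p&gt;The pattern is easy to see on a toy function: &lt;code&gt;pool.map()&lt;/code&gt; runs the function on each input in a separate worker process and returns the results in order.&lt;/p&gt;

```python
from multiprocessing import Pool, cpu_count

def square(n):
    return n * n

if __name__ == "__main__":
    # Leave one core free for the rest of the system.
    num_workers = max(cpu_count() - 1, 1)
    with Pool(num_workers) as pool:
        print(pool.map(square, range(4)))  # [0, 1, 4, 9]
```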
&lt;p&gt;Then, import the modules previously created:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;modules.dataframe&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_dataframe&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;modules.base&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now we create the multiprocessing pool and submit one task per available CPU core, minus one. Each task calls the &lt;code&gt;create_dataframe()&lt;/code&gt; function to build a DataFrame with 4,000 records; once every call has finished, the resulting DataFrames are concatenated into a single one.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;python&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="vm"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"__main__"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;num_cores&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cpu_count&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;create_dataframe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_cores&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;data_dict&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;to_dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'records'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"employees"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;insert_many&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data_dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After logging into the MongoDB server, we specify the database and the collection where the data will be stored.&lt;/p&gt;
&lt;p&gt;And finally, we will insert the DataFrame into MongoDB by calling the &lt;code&gt;insert_many()&lt;/code&gt; method. All the data will be stored in a collection named &lt;code&gt;employees&lt;/code&gt;.&lt;/p&gt;
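&lt;p&gt;The &lt;code&gt;to_dict('records')&lt;/code&gt; step converts the DataFrame into the list of per-row dictionaries that &lt;code&gt;insert_many()&lt;/code&gt; expects, as a toy example shows:&lt;/p&gt;

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ana", "Luis"], "age": [31, 28]})

# 'records' orientation: one dict per row, ready for insert_many().
docs = df.to_dict("records")
print(docs)  # [{'name': 'Ana', 'age': 31}, {'name': 'Luis', 'age': 28}]
```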
&lt;h2 id="running-the-script"&gt;Running the script&lt;/h2&gt;
&lt;p&gt;Run one of the following commands to populate the database:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ python sql.py&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-26" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-26"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ python mongodb.py&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/01/multiprocessing.png" alt="Multiprocessing" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Execution time depends on the number of CPU cores available on your machine. I’m running this script on an Intel i7-1260P, which exposes 16 logical cores; the script uses 15 of them.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2023/01/cpu-utilization_hu_f0cc9253bb228438.png 480w, https://percona.community/blog/2023/01/cpu-utilization_hu_ee7bfd78f9fb559f.png 768w, https://percona.community/blog/2023/01/cpu-utilization_hu_f74620294842ef4f.png 1400w"
src="https://percona.community/blog/2023/01/cpu-utilization.png" alt="CPU Utilization" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="query-your-data"&gt;Query Your Data&lt;/h2&gt;
&lt;p&gt;Once the script finishes, you can check the data in the database.&lt;/p&gt;
&lt;h3 id="mysql-and-postgresql-2"&gt;MySQL and PostgreSQL&lt;/h3&gt;
&lt;p&gt;Connect to the &lt;code&gt;company&lt;/code&gt; database.&lt;/p&gt;
&lt;p&gt;MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-27" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-27"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;company&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;PostgreSQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-28" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-28"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;company&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then, get the number of records.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-29" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-29"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;count()&lt;/code&gt; function returns the number of records in the &lt;code&gt;employees&lt;/code&gt; table. On this machine (15 workers × 4,000 rows each), we expect 60,000 records.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-30" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-30"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;|&lt;/span&gt; count&lt;span class="o"&gt;(&lt;/span&gt;*&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="m"&gt;60000&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;1&lt;/span&gt; row in &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;0.22 sec&lt;span class="o"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="mongodb-2"&gt;MongoDB&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-31" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-31"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;use company;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;db.employees.count()&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;count()&lt;/code&gt; function returns the number of documents in the &lt;code&gt;employees&lt;/code&gt; collection. (Recent versions of the MongoDB shell deprecate &lt;code&gt;count()&lt;/code&gt; in favor of &lt;code&gt;countDocuments()&lt;/code&gt;.)&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-32" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-32"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;60000&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or you can display the documents in the &lt;code&gt;employees&lt;/code&gt; collection:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-33" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-33"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;db.employees.find().pretty()&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The code shown in this blog post can be found on my GitHub account in the &lt;a href="https://github.com/mattdark/data-generator" target="_blank" rel="noopener noreferrer"&gt;data-generator&lt;/a&gt; repository.&lt;/p&gt;</content:encoded>
      <author>Mario García</author>
      <category>Python</category>
      <category>Dev</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <category>MongoDB</category>
      <media:thumbnail url="https://percona.community/blog/2023/01/testing_data_hu_ac911562155ce85c.jpg"/>
      <media:content url="https://percona.community/blog/2023/01/testing_data_hu_9b8bc012b9fbed9f.jpg" medium="image"/>
    </item>
    <item>
      <title>Automating Percona's XtraBackup</title>
      <link>https://percona.community/blog/2023/01/04/automating-perconas-xtrabackup/</link>
      <guid>https://percona.community/blog/2023/01/04/automating-perconas-xtrabackup/</guid>
      <pubDate>Wed, 04 Jan 2023 00:00:00 UTC</pubDate>
      <description>Percona’s XtraBackup is a beautiful tool that allows for the backup and restoration of MySQL databases.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://www.percona.com/software/mysql-database/percona-xtrabackup" target="_blank" rel="noopener noreferrer"&gt;Percona’s XtraBackup&lt;/a&gt; is a beautiful tool that allows for the backup and restoration of MySQL databases.&lt;/p&gt;
&lt;p&gt;From the documentation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The Percona XtraBackup tools provide a method of performing a hot backup of your MySQL data while the system is running. Percona XtraBackup is a free, online, open source, complete database backups solution for all versions of Percona Server for MySQL and MySQL®. Percona XtraBackup performs online non-blocking, tightly compressed, highly secure full backups on transactional systems so that applications remain fully available during planned maintenance windows.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It is great, but it quickly becomes difficult to wield when used multiple times per day across multiple environments. &lt;a href="https://github.com/phildoesdev/xtrabackupautomator" target="_blank" rel="noopener noreferrer"&gt;XtraBackup Automator&lt;/a&gt; attempts to make this easier by providing the ability to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Schedule when we should create backups
&lt;ul&gt;
&lt;li&gt;Times of day, when to make a base backup vs incremental&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Archive old backups
&lt;ul&gt;
&lt;li&gt;Decide what to do with the base backup and its increments when we are ready to create a new base&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Maintain x days of archives
&lt;ul&gt;
&lt;li&gt;Define how many archived backup groups should we keep before removing them from the file system&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://github.com/phildoesdev/xtrabackupautomator" target="_blank" rel="noopener noreferrer"&gt;XtraBackup Automator&lt;/a&gt; automates away the management of MySQL backups.&lt;/p&gt;
&lt;h2 id="considerations-before-installing"&gt;Considerations Before Installing&lt;/h2&gt;
&lt;p&gt;I strongly recommend testing this in some sort of preproduction environment first. The thing I’ve seen most likely to cause trouble is the archival process. By default, this tool uses the gztar (tarball) format as its compression method, which can be resource-intensive when working on a large database backup. For instance, on one of our servers (a Google Cloud Platform virtual machine with 8 vCPUs, 32 GB RAM, and a 1000 GB SSD persistent disk, running Debian 10), archiving a ~140 GB base backup raises CPU usage by ~13% for 4 hours, with a handful of 5%–15% jumps in RAM usage. Another downside of this compression method is that unzipping can take 10–20 minutes, depending on settings. The benefit of the tarball is that it shrinks these large backups from 140 GB to under 10 GB, which is worth the trouble for us because we want two weeks of daily backups. If these downsides are not acceptable, I recommend experimenting with the archive type as described in the config. I have not personally tested any other methods.&lt;/p&gt;
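&lt;p&gt;If you want to get a feel for the trade-off before pointing the tool at a real backup, the standard library’s &lt;code&gt;shutil.make_archive()&lt;/code&gt; supports the same format names. This is an illustrative sketch, not XtraBackup Automator’s actual code:&lt;/p&gt;

```python
import pathlib
import shutil
import tempfile

# Stand-in for a backup directory: one highly compressible 1 MB file.
src = pathlib.Path(tempfile.mkdtemp())
(src / "ibdata1").write_bytes(b"\x00" * 1_000_000)

out = pathlib.Path(tempfile.mkdtemp())
# 'gztar' -> .tar.gz (slow, small); 'tar' -> uncompressed (fast, large).
archive = pathlib.Path(shutil.make_archive(str(out / "base_backup"), "gztar", src))
print(archive.name)  # base_backup.tar.gz
```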
&lt;p&gt;I am assuming that you have administrative access to the server this will run on, as installing systemd services and timers requires root access. I see no reason why cron jobs could not be used to run this program, but I have never tested that, and all documentation references systemd and its tools.&lt;/p&gt;
&lt;h2 id="info--requirements"&gt;Info &amp; Requirements&lt;/h2&gt;
&lt;h4 id="developed-on"&gt;Developed On&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OS:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Debian GNU/Linux 10 (buster)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Python Version:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Python 3.10.4&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Python Packages&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Name:&lt;/strong&gt; &lt;a href="https://pexpect.readthedocs.io/en/stable/" target="_blank" rel="noopener noreferrer"&gt;pexpect&lt;/a&gt;, &lt;strong&gt;Version:&lt;/strong&gt; 4.8.0&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Percona XtraBackup Version:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/software/mysql-database/percona-xtrabackup" target="_blank" rel="noopener noreferrer"&gt;XtraBackup&lt;/a&gt; version 8.0.28-21 based on MySQL server 8.0.28 Linux (x86_64)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MySQL&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;MySql Ver 8.0.28 for Linux on x86_64 (MySQL Community Server - GPL)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="required-python-libraries"&gt;Required Python Libraries&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://pexpect.readthedocs.io/en/stable/" target="_blank" rel="noopener noreferrer"&gt;pexpect&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="required-files"&gt;Required Files&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/phildoesdev/xtrabackupautomator/blob/main/src/xtrabackupautomator.py" target="_blank" rel="noopener noreferrer"&gt;xtrabackupautomator.py&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/phildoesdev/xtrabackupautomator/blob/main/xtrabackupautomator.service" target="_blank" rel="noopener noreferrer"&gt;xtrabackupautomator.service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/phildoesdev/xtrabackupautomator/blob/main/xtrabackupautomator.timer" target="_blank" rel="noopener noreferrer"&gt;xtrabackupautomator.timer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="installing"&gt;Installing&lt;/h2&gt;
&lt;p&gt;Below is a general explanation of how to install and start running this program. I would suggest running the program manually via command line a couple times, in a preproduction environment, to verify things are working as you expect.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Download The Files&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Download the &lt;a href="#required-files"&gt;Required Files&lt;/a&gt; from &lt;a href="https://github.com/phildoesdev/xtrabackupautomator" target="_blank" rel="noopener noreferrer"&gt;https://github.com/phildoesdev/xtrabackupautomator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Review Your Config Settings&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Review the &lt;a href="#configuration"&gt;Configuration&lt;/a&gt; section of this readme and alter these settings to your liking.&lt;br&gt;
Any altered folder paths may affect the create-folder instructions below. At a minimum, you must include database login information; alter the rest as necessary. I suggest reading through all the config options to see what might be worth tweaking.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Edit your systemd service and timer&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you change the location that the script runs from, you must alter the file path in the xtrabackupautomator.service file. I will not explain much else here, as a lot can go into these settings. I have provided some default settings that hopefully make sense.&lt;/p&gt;
&lt;p&gt;I have also included several links that describe what is possible in the &lt;a href="#sources--links"&gt;Sources &amp; Links&lt;/a&gt; section. If there are specific questions in the future I will address them here.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/01/automate_xtrabackup_service_timer_example.jpg" alt="ServiceTimerExample" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Install the required dependencies&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ python3 -m pip install pexpect&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Create the directory for our code to live in&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mkdir /lib/xtrabackupautomator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chmod &lt;span class="m"&gt;700&lt;/span&gt; /lib/xtrabackupautomator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Create the directories for our backups to save to&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mkdir -p /data/backups/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mkdir -p /data/backups/archive
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mkdir -p /data/backups/archive/archive_restore
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chmod &lt;span class="m"&gt;760&lt;/span&gt; /data/backups/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chmod &lt;span class="m"&gt;700&lt;/span&gt; /data/backups/archive
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chmod &lt;span class="m"&gt;700&lt;/span&gt; /data/backups/archive/archive_restore
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chown -R root:root /data/backups/&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Move your downloaded files&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mv xtrabackupautomator.py /lib/xtrabackupautomator/.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mv xtrabackupautomator.service /etc/systemd/system/.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mv xtrabackupautomator.timer /etc/systemd/system/.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Enable your service and timer&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl daemon-reload
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; xtrabackupautomator.timer
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl start xtrabackupautomator.timer
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl status xtrabackupautomator.timer&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Congrats, you are now installed!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;You have now installed XtraBackup Automator, and it will begin running automatically according to your xtrabackupautomator.timer file. If you wish to run it manually, you can run the Python file directly or, my preferred method, use &lt;code&gt;systemctl start&lt;/code&gt; to start it and &lt;code&gt;journalctl&lt;/code&gt; to view its output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ systemctl start xtrabackupautomator.service
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ journalctl -f -n &lt;span class="m"&gt;100&lt;/span&gt; -u xtrabackupautomator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Unzipping and Restoring your Backup&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have included a link in the &lt;a href="#sources--links"&gt;Sources &amp; Links&lt;/a&gt; section on &lt;a href="https://linuxize.com/post/how-to-extract-unzip-tar-gz-file/" target="_blank" rel="noopener noreferrer"&gt;extracting tar.gz files&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I strongly suggest reading the &lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/backup_scenarios/incremental_backup.html" target="_blank" rel="noopener noreferrer"&gt;official Percona documentation&lt;/a&gt; on restoring backups.&lt;/p&gt;
&lt;p&gt;For a point of reference, I will describe my generic unzip-and-restore process. I am using the directory &lt;code&gt;/data/backups/archive/archive_restore/&lt;/code&gt; as the place to unzip and restore from.&lt;/p&gt;
&lt;p&gt;Executing any of the commands below can obviously be very dangerous, as we must stop MySQL, wipe the current data, and restore from our prepared backup. &lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/backup_scenarios/incremental_backup.html" target="_blank" rel="noopener noreferrer"&gt;Read the documentation&lt;/a&gt; and come up with your own plan! The code below is only meant as a reference and may change greatly with time and environments. Always test your plans in a preproduction environment!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2023/01/automate_xtrabackup_archive_pic.png" alt="Archives" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Always verify our version of Percona's XtraBackup and MySQL match before performing a backup... these differences can make restores fail or behave oddly.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mysql --version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo xtrabackup --version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Clean our restore folder, just to be safe&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ rm -r /data/backups/archive/archive_restore/*
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Unzip our archived backup to an empty folder. Always verify we have enough disk space to unzip before unzipping&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo tar -xvf database_backup_12_23_2022__06_25_10.tar.gz -C /data/backups/archive/archive_restore/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Sanity check, verify we are looking at the backup we think we are (This command checks the base folder, check the latest incremental folder we may have)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;cd&lt;/span&gt; /data/backups/archive/archive_restore/data/backups/mysql/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo cat base/xtrabackup_info &lt;span class="p"&gt;|&lt;/span&gt; grep &lt;span class="s1"&gt;'tool_version\|server_version\|start_time\|end_time\|partial\|incremental'&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Prepare the base&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo xtrabackup --prepare --apply-log-only --no-server-version-check --target-dir&lt;span class="o"&gt;=&lt;/span&gt;/data/backups/archive/archive_restore/data/backups/mysql/base
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Prepare each incremental folder. This must be done for each incremental folder we wish to back up to.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo xtrabackup --prepare --apply-log-only --no-server-version-check --target-dir&lt;span class="o"&gt;=&lt;/span&gt;/data/backups/archive/archive_restore/data/backups/mysql/base --incremental-dir&lt;span class="o"&gt;=&lt;/span&gt;/data/backups/archive/archive_restore/data/backups/mysql/inc_0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Repeat as necssary....&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Stop SQL and our backup script as we do not want it running mid restore&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl stop mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl stop xtrabackupautomator.timer
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Wipe bad/corrupted sql data from current instance&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo rm -rv /var/lib/mysql/*
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Verify our mysql data is wiped&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo ls /var/lib/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# I use this method to restore my base backup, there are other options but they did not work correctly in my environment&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo xtrabackup --copy-back --target-dir&lt;span class="o"&gt;=&lt;/span&gt;/data/backups/archive/archive_restore/data/backups/mysql/base
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Verify the contents are the size we expect as a sanity check and apply the correct ownership to the files&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ du -hs /var/lib/mysql/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chown -R mysql:mysql /var/lib/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Restart mysql and xtrabackupautomator. Verify MySQL's status&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl start mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl xtrabackupautomator.timer
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl status mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="configuration"&gt;Configuration&lt;/h2&gt;
&lt;p&gt;In an attempt to make this a one-file, easy-to-install piece of software, I included the configuration struct in the xtrabackupautomator.py file, in the &lt;code&gt;__init__&lt;/code&gt; method of the XtraBackupAutomator class, around line 60 (as of this writing). I describe that struct, its default values, and other relevant information below. Most of this information can also be found in comments throughout the file, or in the getter methods for each variable.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;== db ==
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -un
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: ""]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; XtraBackup user you set up during your initial configuration of Percona's XtraBackup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -pw
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: ""]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This user's password
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -host
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "localhost"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The IP of your database
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -port
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: 3306]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The port to access database
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;== folder_names ==
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -base_dir
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "/data/backups/"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The root directory for all backup related things. Holds current backup and any archived backups.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This is the default location and is reflected in the setup as we request you create this folder.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If you change this directory in the config this change must be reflected in the setup.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -datadir
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "mysql/"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Folder that current backups will be saved to. This would be the folder that holds the base backup and any
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; incremental backups before they are archived
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If you change this directory in the config this change must be reflected in the setup.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; *** XtraBackupAutomator WILL ARCHIVE AND DELETE ANYTHING IN HERE. THIS SHOULD BE AN EMPTY FOLDER, NOT UTILIZED BY ANYTHING ELSE.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -archivedir
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "archive/"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Folder that a group of backups will be archived to.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If you change this directory in the config this change must be reflected in the setup.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; *** XtraBackupAutomator COULD POTENTIALLY DELETE ANY NON-DIRECTORY IN HERE.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;== file_names ==
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -basefolder_name
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "base"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Foldername for the base backup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -incrementalfolder_perfix
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "inc_"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Folder name prefix for incremental backups.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Suffixed with the current number of incremental backups minus one
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; e.g., 'inc_0'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -archive_name_prefix
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "database_backup_"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Prefix for the archive files.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Suffixed by the datetime of the archive
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; e.g., 'database_backup_11_28_2022__06_25_03.tar.gz'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;== archive_settings ==
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -allow_archive
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: True]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; An override to enable/disable all archive settings.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Currently, disabling this will cause the program to do a base backup and then incremental backups forever.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -archive_zip_format
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "gztar"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The default archive file type. I like tarballs because they zip our large database into a manageable file.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; However, tarballs can take a long time to create and require a fair amount of resources if your DB is large.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This setting will depend on your system and the size of your DB. I recommend playing around with this.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Other zip options: [Shutil Man Page](https://docs.python.org/3/library/shutil.html#shutil.make_archive)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -archived_bu_count
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: 7]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Keep x archived backups, once this threshold is reached the oldest archive will be deleted.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Archiving daily, this is a week of archives.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -enforce_max_num_bu_before_archive
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: True]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; One of two ways to 'force archive' of backups.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This counts the # of incremental backup folders and initiates the archives once that number is reached.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; A sample use case is that in your systemd timer file is scheduled to do 5 backups throughout the day, so setting this to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; true and max_num_bu_before_archive_count set to 4 (because we do not count the base) would give you a 'daily archive'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -max_num_bu_before_archive_count
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: 4]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The max number of incremental backups to do before we archive (does not count the base).
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Set to 0 to archive after each base
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -enforce_archive_at_time
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: False]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; One of two ways to 'force archive' of backups.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This will archive what ever base or incremental folders exist if a backup is happening within the
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; archive_at_utc_24_hour hour. This is intended to make it easier to schedule when your archive and base backup occur.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; These can be resource intensive and so it is nice to do at off hours.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; *If this program is scheduled to run more than once during the 'archive_at_utc_24_hour' hour each run will cause an archive.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -archive_at_utc_24_hour
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: 6]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If a backup happens within this hour we will archive w/e was previously there and create a new base.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Matching this with a time setup in your xtrabackupautomator.timer allows you to choose when your backups will
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; occur.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; No explicit consideration for daylight savings time.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Defaults to the hour of 1:00am EST, 6:00am UTC.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;== general_settings ==
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -backup_command_timeout_seconds
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: 30]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Give us 'backup_command_timeout_seconds' seconds for the command to respond.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This is not the same as saying 'a backup can only take this long'.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -max_time_between_backups_seconds
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: 60*60*24]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Max number of seconds between this backup and the last.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If the last backup is older than this we will archive and create a new base.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This is in an attempt to prevent an incremental backup that might span days or weeks due to this service being
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; turned off or some such.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Defaults (arbitrarily) to 24 hours
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -additional_bu_command_params
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: ["no-server-version-check"]]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Any additional parameters that you wish to pass along to your backup commands.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; We loop this list, put a '--' before each element and append it to the end of our backup commands.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This gets applied to the base and incremental backup commands.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; These are params that I have found useful.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;== log_settings ==
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -is_enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: True]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Enables/Disables all logging type settings. This was useful in testing, so I kept it around.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -log_child_process_to_screen
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: True]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If this is set to true the child process's output will be dumped to screen but not actually logged anywhere
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -is_log_to_file
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: True]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; If set to True we will try to log to the 'default_log_file' in the 'default_log_path' directory
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -default_log_path
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "/var/log/"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The path that we will try to place our log file ('default_log_file')
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -default_log_file
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [DEFAULT_VALUE: "xtrabackupautomator.log"]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; The file name we will try to log to&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
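As a hedged illustration of how the 'additional_bu_command_params' option is described to behave (the function and command names below are made up for this example, not taken from the XtraBackup Automator source):

```python
# Sketch of the documented behavior: loop over the extra params, put '--'
# before each element, and append them to the end of the backup command.
def build_backup_command(base_command, extra_params):
    command = list(base_command)
    for param in extra_params:
        command.append("--" + param)
    return command

# With the default value ["no-server-version-check"]:
cmd = build_backup_command(["xtrabackup", "--backup"], ["no-server-version-check"])
print(cmd)  # ['xtrabackup', '--backup', '--no-server-version-check']
```

The same list is applied to both the base and the incremental backup commands.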
&lt;h2 id="sources--links"&gt;Sources &amp; Links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Official Percona XtraBackup Documentation
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/index.html" target="_blank" rel="noopener noreferrer"&gt;https://docs.percona.com/percona-xtrabackup/8.0/index.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Systemctl Overview
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://fedoramagazine.org/what-is-an-init-system/" target="_blank" rel="noopener noreferrer"&gt;https://fedoramagazine.org/what-is-an-init-system/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units" target="_blank" rel="noopener noreferrer"&gt;https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/codex/setup-a-python-script-as-a-service-through-systemctl-systemd-f0cc55a42267" target="_blank" rel="noopener noreferrer"&gt;https://medium.com/codex/setup-a-python-script-as-a-service-through-systemctl-systemd-f0cc55a42267&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Systemctl Timers Overview
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://linuxconfig.org/how-to-schedule-tasks-with-systemd-timers-in-linux" target="_blank" rel="noopener noreferrer"&gt;https://linuxconfig.org/how-to-schedule-tasks-with-systemd-timers-in-linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://opensource.com/article/20/7/systemd-timers" target="_blank" rel="noopener noreferrer"&gt;https://opensource.com/article/20/7/systemd-timers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Systemctl Services Details
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.service.html" target="_blank" rel="noopener noreferrer"&gt;https://www.freedesktop.org/software/systemd/man/systemd.service.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Systemctl Timers Details
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.timer.html" target="_blank" rel="noopener noreferrer"&gt;https://www.freedesktop.org/software/systemd/man/systemd.timer.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;OnCalendar Expected Formats
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.time.html#" target="_blank" rel="noopener noreferrer"&gt;https://www.freedesktop.org/software/systemd/man/systemd.time.html#&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Archive Zip Options
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.python.org/3/library/shutil.html#shutil.make_archive" target="_blank" rel="noopener noreferrer"&gt;https://docs.python.org/3/library/shutil.html#shutil.make_archive&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;How to Extract (Unzip) Tar Gz File
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://linuxize.com/post/how-to-extract-unzip-tar-gz-file/" target="_blank" rel="noopener noreferrer"&gt;https://linuxize.com/post/how-to-extract-unzip-tar-gz-file/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Restoring Xtrabackup Incremental Backups
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-xtrabackup/8.0/backup_scenarios/incremental_backup.html" target="_blank" rel="noopener noreferrer"&gt;https://docs.percona.com/percona-xtrabackup/8.0/backup_scenarios/incremental_backup.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Phil Plachta</author>
      <category>XtraBackup</category>
      <category>DevOps</category>
      <media:thumbnail url="https://percona.community/blog/2023/01/automate_xtrabackup_hu_7a589919ed67f9a0.jpg"/>
      <media:content url="https://percona.community/blog/2023/01/automate_xtrabackup_hu_4800e348e5210562.jpg" medium="image"/>
    </item>
    <item>
      <title>Dashboard Story: How We Created PMM Dashboard for Highload</title>
      <link>https://percona.community/blog/2022/12/22/dashboard-story-how-we-created-pmm-dashboard-for-highload/</link>
      <guid>https://percona.community/blog/2022/12/22/dashboard-story-how-we-created-pmm-dashboard-for-highload/</guid>
      <pubDate>Thu, 22 Dec 2022 00:00:00 UTC</pubDate>
      <description>Let’s say you have highload instances. How do you monitor them? There are a lot of servers with 100, 200… 500+ nodes. How can we collect, check, and analyze metrics from all these servers? How can we understand what and where something happened? Scroll, scroll, scroll… down? That was the task that we faced at Percona and successfully resolved.</description>
      <content:encoded>&lt;p&gt;Let’s say you have highload instances. How do you monitor them? There are a lot of servers with 100, 200… 500+ nodes. How can we collect, check, and analyze metrics from all these servers? How can we understand what and where something happened? Scroll, scroll, scroll… down? That was the task that we faced at Percona and successfully resolved.&lt;/p&gt;
&lt;h2 id="issue-with-home-dashboard"&gt;Issue With Home Dashboard&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management dashboards are based on Grafana, so when you open PMM, you see Grafana dashboards. The Home Dashboard on the main page of PMM aggregates metrics from all environments, databases, and other resources. It shows panels with current resource utilization: CPU, memory, disk space, I/O operations, and network. Certainly, these are very important metrics, which can help us quickly catch issues… But over the years, we noticed that instances have more and more nodes, and we ran into an issue with our Home Dashboard: on big instances (more than 100-200 nodes), there were performance issues. It was tough to understand what had happened, and a user needed a lot of time to check. That is where our journey began.&lt;/p&gt;
&lt;h2 id="searching-the-root-cause-of-our-issues"&gt;Searching the Root Cause of Our Issues&lt;/h2&gt;
&lt;p&gt;Before starting an investigation and searching for “bottleneck”, we defined some questions to answer first:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What’s happened?&lt;/li&gt;
&lt;li&gt;Where to catch performance issues?&lt;/li&gt;
&lt;li&gt;How can we fix it?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s answer these questions! To find out what’s happened, we need to check the response time for our Home Dashboard. So we created a PMM instance with 200 nodes for testing.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/12/dashboard1.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;What can we see here? Loading time is more than two minutes! Let’s dive deeper and check the longest requests.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard2_hu_4f883bee6519618f.png 480w, https://percona.community/blog/2022/12/dashboard2_hu_19ef1c11b891a52c.png 768w, https://percona.community/blog/2022/12/dashboard2_hu_f3f9f955d990dc70.png 1400w"
src="https://percona.community/blog/2022/12/dashboard2.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The longest time is spent on requests to VictoriaMetrics storage. If we scroll down, we trigger “lazy loading” of the page, and the browser works slower and slower. Why? Because we request tons and tons of metrics.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard3_hu_a900b31c4ce4c67e.png 480w, https://percona.community/blog/2022/12/dashboard3_hu_6612840b4ceaa86.png 768w, https://percona.community/blog/2022/12/dashboard3_hu_a8ffacad0e4011dd.png 1400w"
src="https://percona.community/blog/2022/12/dashboard3.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;And it seems we have found our main problem: too much data, too many requests, and too many responses. What could we do? We decided to create a new Home Dashboard!&lt;/p&gt;
&lt;h2 id="strategies-for-creating-a-dashboard"&gt;Strategies for Creating a Dashboard&lt;/h2&gt;
&lt;p&gt;There are a lot of strategies for creating dashboards, but we needed a short, informative, user-friendly one. Our final goal is to provide a simple answer to the question: “Is everything good? Then I can drink my morning coffee” or “Is something bad? Then we need to fix it ASAP!”&lt;/p&gt;
&lt;p&gt;Let’s investigate what we can do here.&lt;/p&gt;
&lt;p&gt;In the O’Reilly &lt;a href="https://www.amazon.com/Site-Reliability-Engineering-Production-Systems/dp/149192912X" target="_blank" rel="noopener noreferrer"&gt;Site Reliability Engineering&lt;/a&gt; book, we can read about four golden signals strategy: Latency, Traffic, Errors and Saturation. Let’s meet each of these signals.&lt;/p&gt;
&lt;h3 id="latency"&gt;Latency&lt;/h3&gt;
&lt;p&gt;Latency is the time it takes to service a request. One important point is the difference between the latency of successful and unsuccessful requests.&lt;/p&gt;
&lt;p&gt;For example, an HTTP 500 error caused by a lost connection may be served very quickly; since an HTTP 500 indicates a failed request, factoring 500s into your overall latency can result in misleading numbers. On the other hand, a slow error is even worse than a fast error! Therefore, it’s important to track error latency separately, as opposed to just filtering out errors.&lt;/p&gt;
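The point about tracking error latency separately can be sketched in a few lines of Python (the request records here are invented for the example):

```python
# Fast 500s drag the overall average down, hiding the real picture.
requests = [
    {"status": 200, "latency_ms": 120},
    {"status": 200, "latency_ms": 95},
    {"status": 500, "latency_ms": 3},  # connection lost, served very quickly
    {"status": 500, "latency_ms": 4},
]

def mean_latency(records):
    return sum(r["latency_ms"] for r in records) / len(records)

successes = [r for r in requests if r["status"] == 200]
errors = [r for r in requests if r["status"] >= 500]

print(mean_latency(requests))   # 55.5  - misleading overall number
print(mean_latency(successes))  # 107.5 - what successful requests experience
print(mean_latency(errors))     # 3.5   - error latency, tracked on its own
```

The overall mean looks healthy only because the failures finished fast; splitting the two populations makes the picture honest.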
&lt;h3 id="traffic"&gt;Traffic&lt;/h3&gt;
&lt;p&gt;Traffic is a measure of how much demand is being placed on your system. For a web service, this is usually HTTP requests per second; for an audio streaming system, it may be the network I/O rate; for a key-value storage system, transactions and retrievals per second.&lt;/p&gt;
&lt;h3 id="errors"&gt;Errors&lt;/h3&gt;
&lt;p&gt;Errors are the rate of requests that fail, either explicitly (e.g., HTTP 500s), or implicitly (for example, an HTTP 200 success response, but coupled with the wrong content).&lt;/p&gt;
&lt;p&gt;Monitoring these cases can be drastically different: catching HTTP 500s at your load balancer can do a decent job of catching all completely failed requests, while only end-to-end system tests can detect that you’re serving the wrong content.&lt;/p&gt;
&lt;h3 id="saturations"&gt;Saturations&lt;/h3&gt;
&lt;p&gt;It is a measure of your system fraction, emphasizing the resources that are most constrained (e.g., in a memory-constrained system, show memory; in an I/O-constrained system, show I/O). Note that many systems degrade in performance before they achieve 100% utilization, so having a utilization target is essential.&lt;/p&gt;
&lt;p&gt;If you measure all four golden signals and call for a human when one signal is problematic (or, in the case of saturation, nearly problematic), your service will be at least decently covered by monitoring.&lt;/p&gt;
&lt;p&gt;There are also &lt;strong&gt;USE&lt;/strong&gt; and &lt;strong&gt;RED&lt;/strong&gt; strategies that we took into consideration.&lt;/p&gt;
&lt;p&gt;R — Rate, requests per second
E — Errors, how many requests return an error
D — Duration, the latency: the time it takes to service a request&lt;/p&gt;
&lt;p&gt;U — Utilization, how fully a resource is working
S — Saturation, how long the queue for the resource is
E — Errors, how many errors we have&lt;/p&gt;
&lt;h2 id="poc-of-home-dashboard"&gt;POC of Home Dashboard&lt;/h2&gt;
&lt;p&gt;All these strategies are interesting and helpful, but we wanted to combine the best of them in our dashboard. Our Tech Lead Dani Guzmán Burgos created a Proof of Concept (POC) of our new Home Dashboard. Our main idea is a simple answer to the question: is everything good, or is something bad?&lt;/p&gt;
&lt;p&gt;When you open this dashboard, you can see simple color panels — green or red. How do we measure this?&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard4_hu_3fc7568d367ed88.png 480w, https://percona.community/blog/2022/12/dashboard4_hu_d6e1de04a25b6a4d.png 768w, https://percona.community/blog/2022/12/dashboard4_hu_3119cf56c7f1ae67.png 1400w"
src="https://percona.community/blog/2022/12/dashboard4.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Here we can see common information about our environment: how many nodes we have, disk operations, DB and node uptime, and advisors’ checks. There is also a very interesting panel with the name “Environment Health,” which is our secret feature.&lt;/p&gt;
&lt;p&gt;For anomaly detection, we use CPU and disk metrics. Here we also answer the questions of how fully our resources are working and what latency we have. On the right-hand panels, we see data over a 15-minute relative time range (to smooth out peaks and performance issues). On the left side, we can compare current metrics with metrics from a week ago to spot trends.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard5_hu_705a86506d5d6849.png 480w, https://percona.community/blog/2022/12/dashboard5_hu_1609f30195d2bae4.png 768w, https://percona.community/blog/2022/12/dashboard5_hu_35a05bfdb57e519f.png 1400w"
src="https://percona.community/blog/2022/12/dashboard5.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In the Command center, we can find more details about what’s wrong. There are three kinds of panels: current usage, anomalies, and metrics for one week ago.&lt;/p&gt;
&lt;p&gt;As the main metrics, we use CPU, disk queue, write latency, read latency, and used memory. These metrics help us understand very quickly what has happened in our system.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard6_hu_2271ceb1aaf41250.png 480w, https://percona.community/blog/2022/12/dashboard6_hu_5f34c6ce1572be43.png 768w, https://percona.community/blog/2022/12/dashboard6_hu_d481a0262129c3fa.png 1400w"
src="https://percona.community/blog/2022/12/dashboard6.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;And finally, the Service Summary panel shows detailed information about each service (node, server) in our system: the number of connections to the DB, the QPS of each, and uptime.&lt;/p&gt;
&lt;h2 id="polishing-the-dashboard---feedback-matters"&gt;Polishing the Dashboard - Feedback Matters&lt;/h2&gt;
&lt;p&gt;When we discussed the POC with other teams, we got the question, “what does it mean — No anomalies?” Then we added a detailed description “No alerts because CPU less than xx percent.” Sounds better, doesn’t it?&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/12/dashboard7.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Our previous dashboard looked good, but we wanted more! What could we improve? We already had CPU and disk anomalies; maybe we could add more metrics here? And we did! High memory? Perfect! Also, to avoid paying a lot for unused hardware, we implemented a “Low CPU Servers” panel that alerts us when a server is using less than 30 CPUs.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard8_hu_fcac0fff3e0a699b.png 480w, https://percona.community/blog/2022/12/dashboard8_hu_862e531127fe1679.png 768w, https://percona.community/blog/2022/12/dashboard8_hu_67b47babd5dd53d2.png 1400w"
src="https://percona.community/blog/2022/12/dashboard8.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;When we have red statuses for nodes in the Anomaly Detection section, we can explore them and drill down. We can jump to a more detailed level and check what happened with the CPU, disk, and memory metrics.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard9_hu_978df235540313a.png 480w, https://percona.community/blog/2022/12/dashboard9_hu_a82ee5ee11ee470e.png 768w, https://percona.community/blog/2022/12/dashboard9_hu_856ff2af0d86f3e8.png 1400w"
src="https://percona.community/blog/2022/12/dashboard9.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The first version of Overview was changed too. We added more details about different databases. Some panels were removed after feedback. And the main feature is filtering.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/12/dashboard10_hu_eadfa7b7c6863d44.png 480w, https://percona.community/blog/2022/12/dashboard10_hu_5a45558c07e8cafc.png 768w, https://percona.community/blog/2022/12/dashboard10_hu_a8843f0dd9ef501d.png 1400w"
src="https://percona.community/blog/2022/12/dashboard10.png" alt="Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Here we tried to create a view where a user can choose the environment and see only its nodes.&lt;/p&gt;
&lt;p&gt;That’s how we achieved our final goal - you can open the dashboard, check it, and then drink your morning cup of coffee with a calm mind!&lt;/p&gt;
&lt;p&gt;Try this out if you’re already using Percona PMM. Not using it yet? You can set up and try out PMM in just a few minutes; start with the &lt;a href="https://www.percona.com/software/pmm/quickstart" target="_blank" rel="noopener noreferrer"&gt;Quickstart&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Anton Bystrov</author>
      <author>Aleksandra Abramova</author>
      <category>PMM</category>
      <category>monitoring</category>
      <category>dashboard</category>
      <category>VictoriaMetrics</category>
      <media:thumbnail url="https://percona.community/blog/2022/12/Dashboards-PMM_hu_8ff5fb4af433d313.jpg"/>
      <media:content url="https://percona.community/blog/2022/12/Dashboards-PMM_hu_c783eefe696393dd.jpg" medium="image"/>
    </item>
    <item>
      <title>Testing Kubernetes with KUTTL</title>
      <link>https://percona.community/blog/2022/12/16/testing-kubernetes-with-kuttl/</link>
      <guid>https://percona.community/blog/2022/12/16/testing-kubernetes-with-kuttl/</guid>
      <pubDate>Fri, 16 Dec 2022 00:00:00 UTC</pubDate>
      <description>Automated testing is the only way to be sure that your code works. Enabling automated testing can be hard, and we have seen a lot of tools for writing automated tests in the industry since the beginning. Some veterans in the industry may remember Selenium and Cucumber, frameworks that help automate testing in the browser. However, testing in Kubernetes can be hard.</description>
      <content:encoded>&lt;p&gt;Automated testing is the only way to be sure that your code works. Enabling automated testing can be hard, and we have seen a lot of tools for writing automated tests in the industry since the beginning. Some veterans in the industry may remember Selenium and Cucumber, frameworks that help automate testing in the browser. However, testing in Kubernetes can be hard.&lt;/p&gt;
&lt;p&gt;At Percona, we deal with Kubernetes and have different operators to automate the management of databases. This requires testing. A lot of testing. We have different frameworks to help us with it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Codecept.js to write UI tests for PMM. We also use Playwright for some cases.&lt;/li&gt;
&lt;li&gt;We have tools to help us with API testing as well as automating some routines by running bash commands during the test step.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;However, those frameworks are not suitable for testing Kubernetes workloads or Kubernetes operators. I’ve been working in the PMM integrations team for six months and have seen different approaches to automating testing for PMM/DBaaS. We have a Go test library with wrappers around kubectl, and Codecept.js for end-to-end tests of the User Interface.&lt;/p&gt;
&lt;h2 id="what-challenges-do-we-have"&gt;What challenges do we have?&lt;/h2&gt;
&lt;p&gt;Well, to be sure that database cluster creation works, we need to automate the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Installation of operators to Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Test the integration with the version service to respect the compatibility matrix.&lt;/li&gt;
&lt;li&gt;Create a database cluster and wait until it becomes available&lt;/li&gt;
&lt;li&gt;Do some assertions against Kubernetes as well as UI.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The main pain point here is that we need to wait up to 10-15 minutes for each step, which makes it hard to have enough test cases to cover as many scenarios as we would like. We can gain some performance by parallelizing workloads, but that still requires learning JavaScript and the testing framework to work with it. We recently made some architectural changes and moved from our custom gRPC API for creating and managing database clusters to an operator that runs on top of other operators and converts a generic Kubernetes Custom Resource into the operator-specific format. We had a couple of options for this new project, and after research we chose kuttl as our framework for integration/e2e testing.&lt;/p&gt;
&lt;h2 id="what-is-kuttl-anyway-and-why-should-i-care"&gt;What is KUTTL anyway and why should I care?&lt;/h2&gt;
&lt;p&gt;KUTTL is the KUbernetes Test TooL. It’s written in Go and provides a declarative way to test Kubernetes operators using Kubernetes primitives. It’s easy to start kuttling. Let’s take a deeper look. I’ll use &lt;a href="https://github.com/percona/dbaas-operator" target="_blank" rel="noopener noreferrer"&gt;dbaas-operator&lt;/a&gt; as an example. dbaas-operator is an operator with a simple, generic Custom Resource Definition for creating Percona Server for MongoDB or Percona XtraDB Cluster instances in Kubernetes. It uses the underlying operators as dependencies. We have the following structure:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;e2e-tests
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── kind.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── kuttl-eks.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── kuttl.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;└── tests
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; └── pxc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 00-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 00-deploy-operators.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 01-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 01-deploy-pxc.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 02-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 02-upgrade-pxc.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 03-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 03-restart-pxc.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 04-delete-cluster.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 05-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 05-create-cluster.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 06-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 06-scale-up-pxc.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 07-assert.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ├── 07-scale-down-pxc.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; └── 08-delete-cluster.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2 directories, 19 files&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s discuss these YAML files in more detail:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;kind.yml contains the settings to run &lt;a href="https://kind.sigs.k8s.io/" target="_blank" rel="noopener noreferrer"&gt;Kind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;kuttl.yml has all the required settings for the kuttl framework, and kuttl-eks.yml has some EKS-specific configuration&lt;/li&gt;
&lt;li&gt;The tests folder contains the test steps and assertions&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="kind-and-kuttl-settings"&gt;Kind and KUTTL settings&lt;/h2&gt;
&lt;p&gt;Let’s go over the Kind and kuttl settings, starting with kuttl:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;kuttl.dev/v1beta1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;TestSuite&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kindConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;e2e-tests/kind.yml &lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Path to Kind config that will be used to create Kind clusters&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;crdDir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;config/crd/bases &lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Path to a directory that contains CRD files. Kuttl will apply them before running tests&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;artifactsDir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;/tmp/ &lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Path to a directory to store artifacts such as logs and other information&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;testDirs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;- &lt;span class="l"&gt;e2e-tests/tests &lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Path to directories that have test steps&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The Kind config is quite simple:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Cluster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;- &lt;span class="nt"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;control-plane&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;- &lt;span class="nt"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;worker&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;- &lt;span class="nt"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;worker&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;- &lt;span class="nt"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;worker&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;containerdConfigPatches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;- &lt;span class="p"&gt;|-&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; endpoint = ["http://kind-registry:5000"]&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The config above uses the local registry and creates three Kubernetes worker nodes managed by a single control plane.&lt;/p&gt;
&lt;h2 id="writing-tests"&gt;Writing tests&lt;/h2&gt;
&lt;p&gt;At first glance, kuttling looks easy because it uses Kubernetes primitives as test steps and assertions, but I ran into a couple of problems while testing my operator. Let’s look at a couple of examples. Since dbaas-operator depends on the PXC operator, we need to prepare our environment for testing. Let’s write a first test that installs the PXC operator and verifies that the installation succeeded.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat e2e-tests/tests/pxc/00-deploy-pxc-operator.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: kuttl.dev/v1beta1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: TestStep
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;timeout: 10 # Timeout for the test step
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;commands:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - command: kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v${PXC_OPERATOR_VERSION}/deploy/bundle.yaml -n "${NAMESPACE}"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;KUTTL test steps are easily extensible with &lt;a href="https://kuttl.dev/docs/testing/reference.html#commands" target="_blank" rel="noopener noreferrer"&gt;commands&lt;/a&gt;. You can even run scripts as prerequisites for a test case. The PXC operator installs CRDs and creates a deployment; here’s an example of an assertion.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat e2e-tests/tests/pxc/00-assert.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: kuttl.dev/v1beta1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: TestAssert
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;timeout: 120 # Timeout waiting for the state
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;---
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: apiextensions.k8s.io/v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: CustomResourceDefinition
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: perconaxtradbclusters.pxc.percona.com
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; group: pxc.percona.com
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; names:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; kind: PerconaXtraDBCluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; listKind: PerconaXtraDBClusterList
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; plural: perconaxtradbclusters
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; shortNames:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - pxc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - pxcs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; singular: perconaxtradbcluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; scope: Namespaced
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;---
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: apiextensions.k8s.io/v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: CustomResourceDefinition
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: databaseclusters.dbaas.percona.com
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; group: dbaas.percona.com
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; names:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; kind: DatabaseCluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; listKind: DatabaseClusterList
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; plural: databaseclusters
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; shortNames:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; singular: databasecluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; scope: Namespaced
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;---
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: apps/v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Deployment
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: percona-xtradb-cluster-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;status:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; availableReplicas: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; observedGeneration: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; readyReplicas: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; updatedReplicas: 1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Our first test is ready; run &lt;code&gt;kubectl kuttl test --config ./e2e-tests/kuttl.yml&lt;/code&gt; to execute it.&lt;/p&gt;
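&lt;p&gt;For reference, a minimal &lt;code&gt;kuttl.yml&lt;/code&gt; test suite could look like the sketch below; the directory path and timeout are assumptions for illustration, not the exact contents of the project’s file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kuttl.dev/v1beta1
kind: TestSuite
# Directory containing the test cases (assumed layout)
testDirs:
  - ./e2e-tests/tests
# Default timeout, in seconds, for steps that do not set their own
timeout: 120
&lt;/code&gt;&lt;/pre&gt;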
&lt;h2 id="more-advanced-tests"&gt;More advanced tests&lt;/h2&gt;
&lt;p&gt;We need to run our operator first to be able to work with its resources and test it. KUTTL recommends configuring this via &lt;code&gt;TestSuite&lt;/code&gt;, as in the following example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;kuttl.dev/v1beta1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;TestSuite&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;./bin/manager&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;However, since dbaas-operator depends on underlying operators, it needs to work correctly even when they are not present in a Kubernetes cluster. The controller contains the following logic:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;// SetupWithManager sets up the controller with the Manager.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;func (r *DatabaseReconciler) SetupWithManager(mgr ctrl.Manager) error {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fmt.Println(os.Getenv("WATCH_NAMESPACE"))
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; unstructuredResource := &amp;unstructured.Unstructured{}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; unstructuredResource.SetGroupVersionKind(schema.GroupVersionKind{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Group: "apiextensions.k8s.io",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Kind: "CustomResourceDefinition",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Version: "v1",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; })
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; controller := ctrl.NewControllerManagedBy(mgr).
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; For(&amp;dbaasv1.DatabaseCluster{})
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; err := r.Get(context.Background(), types.NamespacedName{Name: pxcCRDName}, unstructuredResource)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if err == nil {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if err := r.addPXCToScheme(r.Scheme); err == nil {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; controller.Owns(&amp;pxcv1.PerconaXtraDBCluster{})
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; err = r.Get(context.Background(), types.NamespacedName{Name: psmdbCRDName}, unstructuredResource)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if err == nil {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if err := r.addPSMDBToScheme(r.Scheme); err == nil {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; controller.Owns(&amp;psmdbv1.PerconaServerMongoDB{})
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; return controller.Complete(r)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;controller.Owns&lt;/code&gt; call sets up the controller to watch the specified resources; whenever they change, it runs a reconciliation loop to sync the changes. The controller also checks that each underlying operator is present in the cluster by verifying that its deployment and CRDs are available. This means that to make the operator work correctly in tests, we need to choose one of the following options:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Restart the operator once the upstream operator is installed, by sending a &lt;code&gt;HUP&lt;/code&gt; signal&lt;/li&gt;
&lt;li&gt;Run the operator only after the underlying operator is present in the cluster&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Hence, I chose the second option and moved the command into the test step that creates the cluster. You can see it below:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat e2e-tests/tests/pxc/01-deploy-pxc.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: kuttl.dev/v1beta1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: TestStep
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;timeout: 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;commands:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - script: WATCH_NAMESPACE=$NAMESPACE ../../../bin/manager
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; background: true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;---
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: dbaas.percona.com/v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: DatabaseCluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: test-cluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; databaseType: pxc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; databaseImage: percona/percona-xtradb-cluster:8.0.23-14.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; databaseConfig: |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; wsrep_provider_options="debug=1;gcache.size=1G"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; secretsName: pxc-sample-secrets
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; clusterSize: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; loadBalancer:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type: haproxy
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; exposeType: ClusterIP
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; size: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; dbInstance:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; cpu: "1"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; memory: 1G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; diskSize: 15G&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note: &lt;code&gt;command&lt;/code&gt; supports only simple commands and does not fully support environment variables; only &lt;code&gt;$NAMESPACE&lt;/code&gt;, &lt;code&gt;$PATH&lt;/code&gt;, and &lt;code&gt;$HOME&lt;/code&gt; are expanded. &lt;code&gt;script&lt;/code&gt;, however, solves the problem of setting the &lt;code&gt;WATCH_NAMESPACE&lt;/code&gt; environment variable.&lt;/p&gt;
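&lt;p&gt;To illustrate the difference, here is a sketch of a test step contrasting the two forms (the binary path is an assumption for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # command: executed directly, so inline env assignments are not interpreted
  - command: ./bin/manager
  # script: executed through a shell, so env vars can be set inline
  - script: WATCH_NAMESPACE=$NAMESPACE ./bin/manager
    background: true
&lt;/code&gt;&lt;/pre&gt;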
&lt;p&gt;In a nutshell, the test step above does two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Runs the operator&lt;/li&gt;
&lt;li&gt;Creates a database cluster&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The assertion checks that the Kubernetes cluster has a &lt;code&gt;DatabaseCluster&lt;/code&gt; object with &lt;code&gt;ready&lt;/code&gt; status, as well as a PXC cluster with the same status.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;kuttl.dev/v1beta1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;TestAssert&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;600&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;dbaas.percona.com/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;DatabaseCluster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;test-cluster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;databaseType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pxc&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;databaseImage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-xtradb-cluster:8.0.23-14.1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;databaseConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; [mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; wsrep_provider_options="debug=1;gcache.size=1G"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;secretsName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pxc-sample-secrets&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;clusterSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;loadBalancer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;haproxy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;exposeType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ClusterIP&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-xtradb-cluster-operator:1.11.0-haproxy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;dbInstance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;cpu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;1G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;diskSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;15G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pxc.percona.com/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;PerconaXtraDBCluster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;test-cluster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;allowUnsafeConfigurations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;crVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1.11.0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;haproxy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-xtradb-cluster-operator:1.11.0-haproxy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;serviceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ClusterIP&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pxc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; [mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; wsrep_provider_options="debug=1;gcache.size=1G"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;expose&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{}&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona/percona-xtradb-cluster:8.0.23-14.1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;livenessProbes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{}&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;readinessProbes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{}&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;cpu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;1G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;serviceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ClusterIP&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;sidecarResources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{}&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumeSpec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;15G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;secretsName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pxc-sample-secrets&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;updateStrategy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;SmartUpdate&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;upgradeOptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;8.0&lt;/span&gt;-&lt;span class="l"&gt;recommended&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;schedule&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;*&lt;span class="w"&gt; &lt;/span&gt;*&lt;span class="w"&gt; &lt;/span&gt;*&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ready&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ready&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="caveats-and-notes"&gt;Caveats and notes&lt;/h2&gt;
&lt;p&gt;I had problems running tests in Kind: they were flaky because the PXC operator couldn’t expose metrics and had problems with its liveness probe. I haven’t figured out how to fix that yet, so as a workaround I use minikube to run the tests:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; minikube start --nodes=4 --cpus=2 --memory=4g --apiserver-names host.docker.internal --kubernetes-version=v1.23.6
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; minikube kubectl -- config view --flatten --minify &gt; ~/.kube/test-minikube
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; KUBECONFIG=~/.kube/test-minikube kubectl kuttl test --config ./e2e-tests/kuttl.yml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
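For the record, kuttl test steps are not limited to applying manifests; a step can also shell out and run a query against the running cluster. A hypothetical sketch of such a step (the file path, pod name, container, and credentials are assumptions that depend on the cluster spec and secrets above):

```yaml
# e2e-tests/tests/smoke/01-query.yaml (hypothetical path)
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # $NAMESPACE is injected by kuttl for each test run;
  # ROOT_PASSWORD is assumed to be exported from the cluster secret
  - script: |
      kubectl -n $NAMESPACE exec pxc-sample-pxc-0 -c pxc -- \
        mysql -uroot -p"$ROOT_PASSWORD" -e "SELECT 1"
```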
&lt;h2 id="further-steps"&gt;Further steps&lt;/h2&gt;
&lt;p&gt;There’s always room for improvement, and I have these next steps in mind:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use Docker images and OLM bundles to run the operator for tests. This is the best way to simulate a production-like environment.&lt;/li&gt;
&lt;li&gt;Add more advanced tests for database clusters, such as running queries, loading data, and capacity testing. This is easily achievable with kuttl.&lt;/li&gt;
&lt;/ol&gt;</content:encoded>
      <author>Andrew Minkin</author>
      <category>PMM</category>
      <category>DBaaS</category>
      <category>KUTTL</category>
      <category>testing</category>
      <media:thumbnail url="https://percona.community/blog/2022/12/K8S-KUTTL_hu_89a8b6eb1df21ae3.jpg"/>
      <media:content url="https://percona.community/blog/2022/12/K8S-KUTTL_hu_848f0ae66b9be622.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.33 preview release</title>
      <link>https://percona.community/blog/2022/12/08/preview-release/</link>
      <guid>https://percona.community/blog/2022/12/08/preview-release/</guid>
      <pubDate>Thu, 08 Dec 2022 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.33 preview release Hello folks! Percona Monitoring and Management (PMM) 2.33 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-233-preview-release"&gt;Percona Monitoring and Management 2.33 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.33 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release notes can be found &lt;a href="https://pmm-2-33-0.onrender.com/release-notes/2.33.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker"&gt;Percona Monitoring and Management server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.33.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; To use the DBaaS functionality during the Percona Monitoring and Management preview release, you should add the following environment variable when starting the PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.33.0-rc&lt;/code&gt;&lt;/p&gt;
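Putting the tag and the environment variable together, starting the server might look like the following (the port mapping and container name are illustrative; see the linked Docker instructions for the full, supported set of options):

```shell
docker pull perconalab/pmm-server:2.33.0-rc
docker run -d -p 443:443 --name pmm-server \
  -e PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.33.0-rc \
  perconalab/pmm-server:2.33.0-rc
```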
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.33 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-4615.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
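For example, on a Debian-based system with percona-release already installed, the whole sequence might look like this (package-manager commands differ on other distributions):

```shell
percona-release enable percona testing
apt update
apt install -y pmm2-client
```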
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.33.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.33.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-005acacf35adcfa57&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.32 preview release</title>
      <link>https://percona.community/blog/2022/11/04/preview-release/</link>
      <guid>https://percona.community/blog/2022/11/04/preview-release/</guid>
      <pubDate>Fri, 04 Nov 2022 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.32 preview release Hello folks! Percona Monitoring and Management (PMM) 2.32 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-232-preview-release"&gt;Percona Monitoring and Management 2.32 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.32 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release notes can be found &lt;a href="https://pmm-doc-2-32-pr-904.onrender.com/release-notes/2.32.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker"&gt;Percona Monitoring and Management server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.32.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; To use the DBaaS functionality during the Percona Monitoring and Management preview release, you should add the following environment variable when starting the PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.32.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.32 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-4500.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, enable the testing repository via percona-release:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;percona-release enable percona testing&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://percona-vm.s3.amazonaws.com/PMM2-Server-2.32.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.32.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-02cfe7580e77fb5fa&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>AWS Summit Mexico City: Back to In-Person Events</title>
      <link>https://percona.community/blog/2022/10/19/aws-summit-mexico-city-back-to-in-person-events/</link>
      <guid>https://percona.community/blog/2022/10/19/aws-summit-mexico-city-back-to-in-person-events/</guid>
      <pubDate>Wed, 19 Oct 2022 00:00:00 UTC</pubDate>
<description>In March 2020, I gave my last talk at an in-person event, organized by a local university. Then, when events moved online during the pandemic, I was a speaker at about 41 events, where I presented 38 talks and 7 workshops (through September of this year), in both Spanish and English.</description>
<content:encoded>&lt;p&gt;In March 2020, I gave my last talk at an in-person event, organized by a local university. Then, when events moved online during the pandemic, I was a speaker at about 41 events, where I presented 38 talks and 7 workshops (through September of this year), in both Spanish and English.&lt;/p&gt;
&lt;p&gt;Having the opportunity to join so many events and collaborate with communities around the world was one of the advantages of events being held in virtual spaces. I met awesome people doing amazing things, and I learned a lot from them. But I really missed attending in-person events, especially those held in other cities.&lt;/p&gt;
&lt;p&gt;On July 12, I joined Percona as a Technical Evangelist, and two months later I was at the airport in my city, waiting for my flight to my first in-person event in two and a half years, and my first as a Perconian. &lt;a href="http://aws.amazon.com/es/events/summits/mexico-city/" target="_blank" rel="noopener noreferrer"&gt;AWS Summit Mexico City&lt;/a&gt; was held on September 21 and 22 at &lt;a href="https://www.exposantafe.com.mx/esfm/" target="_blank" rel="noopener noreferrer"&gt;Expo Santa Fe Mexico&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The last time I visited the Expo I was attending Campus Party Mexico 2013, with the Firefox OS launch team, for a pre-launch event, and my last trip to Mexico City was in October 2019, before leaving for London to speak at GitLab Commit London.&lt;/p&gt;
&lt;p&gt;I was at AWS Summit Mexico City just as an attendee, looking forward to learning more about AWS, doing some networking, and meeting some friends I hadn’t seen in a long time or never seen in person.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city_hu_8c52896cbd760956.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city_hu_852762b2aa8ce665.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city_hu_4e512c4613df7dee.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city.jpg" alt="AWS Summit Mexico City" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="the-adventure-had-begun"&gt;The Adventure Had Begun&lt;/h2&gt;
&lt;p&gt;I booked my flight for September 20 at 9 PM, the only non-stop flight available. I arrived early at the airport, just in time for boarding. It was raining, so take-off was delayed until 10 PM. The plane landed in Mexico City at 11:30 PM.&lt;/p&gt;
&lt;p&gt;My hotel was in &lt;a href="https://en.wikipedia.org/wiki/Santa_Fe,_Mexico_City" target="_blank" rel="noopener noreferrer"&gt;Santa Fe&lt;/a&gt;, near the location of the event. Once I arrived I took an Uber, and after 35 minutes I was at the hotel. But to my surprise, my reservation had been cancelled and there were no rooms available: there had been a problem with their payment system, and my debit card was declined.&lt;/p&gt;
&lt;p&gt;I had to call other hotels nearby and finally found one with rooms available, 8 minutes from the Expo. I even got a better deal: a lower price and a bigger room. I slept just four hours, but I was ready for the event.&lt;/p&gt;
&lt;h2 id="visiting-booth"&gt;Visiting Booth&lt;/h2&gt;
&lt;p&gt;On the first day, I arrived at the Expo early to register and get my badge. The expo zone would not open until 8 AM, so I had to wait. There were booths from sponsors and AWS.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city-expo_hu_19bee353c9c4faa6.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city-expo_hu_d0515664cbaa81db.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city-expo_hu_c67c5cf344a8b6d2.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city-expo.jpg" alt="AWS Summit Mexico City Expo" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;During the first day these were the booths I visited:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://vmware.com/" target="_blank" rel="noopener noreferrer"&gt;VMWare&lt;/a&gt;: When I was in college I tested VMWare for learning about virtualization, and how to install Linux on other operating systems. I haven’t used it since then, but it was interesting to know that VMWare now has some other tools and cloud services like &lt;a href="https://cloudhealth.vmware.com/" target="_blank" rel="noopener noreferrer"&gt;Cloud Health&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://datadog.com/" target="_blank" rel="noopener noreferrer"&gt;Datadog&lt;/a&gt; / &lt;a href="https://dynatrace.com/" target="_blank" rel="noopener noreferrer"&gt;Dynatrace&lt;/a&gt;: While passing by Datadog and Dynatrace booths I had the opportunity to watch a demo of their platforms, specifically those monitoring and observability features they provide for databases, and a wide range of different technologies.&lt;/p&gt;
&lt;p&gt;On Datadog integration with AWS you can get:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.datadoghq.com/infrastructure/" target="_blank" rel="noopener noreferrer"&gt;Infrastructure Monitoring&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.datadoghq.com/tracing/" target="_blank" rel="noopener noreferrer"&gt;App Performance Monitoring (APM)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.datadoghq.com/logs/" target="_blank" rel="noopener noreferrer"&gt;Log Management&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://docs.datadoghq.com/security_platform/" target="_blank" rel="noopener noreferrer"&gt;Security Monitoring&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More information available in the &lt;a href="https://docs.datadoghq.com/" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Dynatrace provides monitoring features for the following areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.dynatrace.com/platform/applications-microservices-monitoring/" target="_blank" rel="noopener noreferrer"&gt;Applications and Microservices&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.dynatrace.com/platform/infrastructure-monitoring/" target="_blank" rel="noopener noreferrer"&gt;Infrastructure Monitoring&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.dynatrace.com/platform/digital-experience/" target="_blank" rel="noopener noreferrer"&gt;Digital Experience&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.dynatrace.com/support/help/how-to-use-dynatrace/databases" target="_blank" rel="noopener noreferrer"&gt;Database Monitoring&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More information available in the &lt;a href="https://www.dynatrace.com/platform/" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://hashicorp.com/" target="_blank" rel="noopener noreferrer"&gt;HashiCorp&lt;/a&gt;: I’ve been a &lt;a href="https://www.hashicorp.com/ambassadors" target="_blank" rel="noopener noreferrer"&gt;HashiCorp Ambassador&lt;/a&gt; since 2021 and as part of the program I’ve been creating content related to Vagrant and Packer, including blog posts and a few talks I’ve presented at virtual events. These days I’m learning about Terraform. That’s why I passed by the booth to say hi.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://mongodb.com/" target="_blank" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt;: MongoDB is on my list of technologies I would like to learn more about. The resources they shared with me were so helpful for starting my learning journey.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://n3xgen.io/" target="_blank" rel="noopener noreferrer"&gt;Nextgen.io&lt;/a&gt;: Nexgen.io is a platform that provides support in the following areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Transforming traditional monolithic applications into microservices&lt;/li&gt;
&lt;li&gt;Out-of-the-box DevOps features for any development project&lt;/li&gt;
&lt;li&gt;A solid and highly scalable container-based platform&lt;/li&gt;
&lt;li&gt;Built-in application integration with an efficient data-mapping tool&lt;/li&gt;
&lt;li&gt;Out-of-the-box B2B capability&lt;/li&gt;
&lt;li&gt;Managed services with free upgrades&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I was interested in knowing more about the DevOps solutions they provide and what I learned is that through the platform you can get help on setting up the CI/CD pipelines of your project.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://aws.amazon.com/developer/community/community-builders/" target="_blank" rel="noopener noreferrer"&gt;AWS Community Builders&lt;/a&gt;: Being a member of the GitLab Heroes, GitKraken Ambassadors and HashiCorp Ambassadors programs let me learn more about DevOps and know some tools I use regularly, as well as improving my writing and public speaking skills. Knowing that there’s a AWS Community Builders program where you can be recognized for your contributions to the community, and know more about AWS, the tools and services you have access to when registering on the platform, is an opportunity for anyone looking to be an expert on AWS and expand their network. Applications are not open right now.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Visiting the booths was not only informative, but it also allowed me to introduce myself as a Technical Evangelist and get a sense of what people know about Percona.&lt;/p&gt;
&lt;h2 id="attending-talks"&gt;Attending Talks&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city-kevin-miller_hu_ce5871579cd1399.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city-kevin-miller_hu_a65f59f0f02c112b.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city-kevin-miller_hu_263e3ac42f9814a1.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city-kevin-miller.jpg" alt="AWS Summit Mexico City - Kevin Miller" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;These are the talks I attended during AWS Summit Mexico City:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Keynote - Kevin Miller / AWS VP - Simple Storage Service&lt;/li&gt;
&lt;li&gt;Should I use Serverless? Myths and realities for developers (Spanish) - David Victoria / Emite - Director of Operations&lt;/li&gt;
&lt;li&gt;Learn about AWS Global Infrastructure extended to the border - Leonardo Solano / AWS Senior Hybrid Cloud Solutions Architect&lt;/li&gt;
&lt;li&gt;How to accelerate containers creation process on AWS with AWS App2Container (A2C) - Oscar Ramírez Vital / AWS Solutions Architect&lt;/li&gt;
&lt;li&gt;Accelerating IT modernization in government agencies - Rosendo Martinez, José Luis Vallín&lt;/li&gt;
&lt;li&gt;Introduction to security in the cloud with IAM - Uriel Enrique Arellano / Cloud Engineer - Bootcamp Institute&lt;/li&gt;
&lt;li&gt;Modernization of education at various levels - Alex Luna / AWS Sr. Solutions Architect, Juan Manuel Zenil / Escuela Bancaria y Comercial - CIO&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;During the past weeks I’ve been using AWS: I had to remember how to create and launch an EC2 instance, I’m learning how to create Kubernetes clusters (with no previous k8s knowledge) on Amazon Elastic Kubernetes Service, I’ve had to read about AWS Identity and Access Management (IAM), and I’m also learning about eksctl and Terraform.&lt;/p&gt;
&lt;p&gt;Attending these talks gave me a better understanding of the services and tools available on AWS, taught me about the platform’s global infrastructure, and provided an introduction to serverless and an overview of IAM.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city-global-infrastructure_hu_edc203073e06ba81.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city-global-infrastructure_hu_850460fa23421095.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city-global-infrastructure_hu_451d7050a5f6d0e7.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city-global-infrastructure.jpg" alt="AWS Summit Mexico City - Global Infrastructure" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Containerizing applications is one of the topics I’ve been learning about, but I had never heard of AWS &lt;a href="https://aws.amazon.com/app2container/" target="_blank" rel="noopener noreferrer"&gt;App2Container&lt;/a&gt;, which is a command-line tool for containerizing Java and .NET applications. While I don’t use either of those technologies, it was interesting to learn about this tool.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city-nu_hu_718e5fc1f9708292.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city-nu_hu_dd6251991e61467a.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city-nu_hu_d0d0c5d50aeaf1a7.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city-nu.jpg" alt="AWS Summit Mexico City - Nu" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It was also good to hear testimonials of companies and government agencies on how they use AWS, including &lt;a href="https://nubank.com.br/en/" target="_blank" rel="noopener noreferrer"&gt;Nu&lt;/a&gt;, &lt;a href="https://www.contpaqi.com/" target="_blank" rel="noopener noreferrer"&gt;CONTPAQi&lt;/a&gt; and the government of &lt;a href="https://municipiodequeretaro.gob.mx/" target="_blank" rel="noopener noreferrer"&gt;Querétaro&lt;/a&gt; who shared their experience.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city-contpaqi_hu_e5c50c9cca1657bf.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city-contpaqi_hu_80838235092f2ad1.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city-contpaqi_hu_64737893ab9209c8.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city-contpaqi.jpg" alt="AWS Summit Mexico City - CONTPAQi" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Back in March, it was &lt;a href="https://aws.amazon.com/blogs/publicsector/aws-announces-local-zones-latin-america/" target="_blank" rel="noopener noreferrer"&gt;announced&lt;/a&gt; that new AWS Local Zones would be launched across Latin America. The new locations included Bogotá, Colombia; Buenos Aires, Argentina; Lima, Peru; Queretaro, Mexico; Rio de Janeiro, Brazil; and Santiago, Chile. This was one of the announcements made in the talks presented by AWS.&lt;/p&gt;
&lt;p&gt;While the recordings are not available on the &lt;a href="https://www.youtube.com/c/amazonwebservices" target="_blank" rel="noopener noreferrer"&gt;Amazon Web Services YouTube channel&lt;/a&gt;, here’s a list of videos I recommend watching:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=bmAhMewz_pE" target="_blank" rel="noopener noreferrer"&gt;History of success - Municipality of Querétaro (Spanish)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=UuRX2gK0IYw" target="_blank" rel="noopener noreferrer"&gt;AWS Global Infrastructure Explainer Video&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=YMj33ToS8cI" target="_blank" rel="noopener noreferrer"&gt;AWS re:Inforce 2022 - AWS Identity and Access Management (IAM) deep dive (IAM301)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="networking"&gt;Networking&lt;/h2&gt;
&lt;p&gt;Attending AWS Summit Mexico City was also an opportunity to meet people I hadn’t seen in a long time, like a friend who works at Accenture, whom I had last seen back in 2013 at the Firefox OS launch event in Mexico City, and another friend, a fellow GitLab Hero who now works at Dynatrace, whom I had never met in person before.&lt;/p&gt;
&lt;p&gt;Having conversations with sponsors and some attendees gave me an overview of what people know about Percona. Most of the people I talked to had never heard of Percona before, and some were interested in learning more about what we do.&lt;/p&gt;
&lt;p&gt;I was also able to meet and spend time with &lt;a href="https://twitter.com/EdithPuclla" target="_blank" rel="noopener noreferrer"&gt;Edith&lt;/a&gt; and &lt;a href="https://twitter.com/dberkholz" target="_blank" rel="noopener noreferrer"&gt;Donnie&lt;/a&gt;, Perconians who were also attending the event.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/aws-summit-mexico-city-perconians_hu_16d4737001162a59.jpg 480w, https://percona.community/blog/2022/10/aws-summit-mexico-city-perconians_hu_9b31ba2138ff100c.jpg 768w, https://percona.community/blog/2022/10/aws-summit-mexico-city-perconians_hu_f950f99e605106e5.jpg 1400w"
src="https://percona.community/blog/2022/10/aws-summit-mexico-city-perconians.jpg" alt="AWS Summit Mexico City - Perconians" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="how-was-the-whole-experience"&gt;How Was the Whole Experience?&lt;/h2&gt;
&lt;p&gt;The good:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Getting an overview of AWS: Services and tools, and global infrastructure&lt;/li&gt;
&lt;li&gt;Understanding serverless&lt;/li&gt;
&lt;li&gt;Learning about AWS Identity and Access Management (IAM)&lt;/li&gt;
&lt;li&gt;Hearing testimonials of how AWS is being used&lt;/li&gt;
&lt;li&gt;Simultaneous sessions&lt;/li&gt;
&lt;li&gt;Live translation for talks in English&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Expectations for future AWS events:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;More technical sessions, especially workshops&lt;/li&gt;
&lt;li&gt;Different networking and after-event activities (by organizers, sponsors, and local AWS communities)&lt;/li&gt;
&lt;li&gt;A venue with an Internet connection&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In general, I would recommend attending AWS Summit Mexico City, or any AWS event, to anyone who is starting to use AWS or already has some experience. It is a place to learn from experts and practitioners, get a better understanding of important concepts, get an overview of the platform and its services and tools, expand your network of contacts, and meet local community members and AWS users.&lt;/p&gt;
&lt;h2 id="after-the-event"&gt;After the Event&lt;/h2&gt;
&lt;p&gt;After two days at AWS Summit Mexico City, Edith and I traveled to Querétaro, a city two and a half hours from Mexico City, to meet other Perconians. Once there, we gathered for breakfast and later worked at a local coworking space. We met Mauricio, Eduardo, and David from the Managed Services team.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/perconians-queretaro_hu_b75394a390abd93c.jpg 480w, https://percona.community/blog/2022/10/perconians-queretaro_hu_a102b847985c37cc.jpg 768w, https://percona.community/blog/2022/10/perconians-queretaro_hu_1ecf907bd03cf25b.jpg 1400w"
src="https://percona.community/blog/2022/10/perconians-queretaro.jpg" alt="Perconians at Querétaro" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="resources"&gt;Resources&lt;/h2&gt;
&lt;p&gt;Some good resources that were shared during the event:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://explore.skillbuilder.aws/learn" target="_blank" rel="noopener noreferrer"&gt;AWS Skill Builder&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-overview" target="_blank" rel="noopener noreferrer"&gt;Overview of Amazon Web Services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/architecture/well-architected" target="_blank" rel="noopener noreferrer"&gt;AWS Well Architected&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverlessland.com/" target="_blank" rel="noopener noreferrer"&gt;Serverless Land&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://university.mongodb.com/" target="_blank" rel="noopener noreferrer"&gt;MongoDB University&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Mario García</author>
      <category>AWS</category>
      <category>Conference</category>
      <media:thumbnail url="https://percona.community/blog/2022/10/aws-summit-mexico-city_hu_d86c1eb6d16bd83d.jpg"/>
      <media:content url="https://percona.community/blog/2022/10/aws-summit-mexico-city_hu_aba29feb32849439.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL: Tracing a single query with PERFORMANCE_SCHEMA</title>
      <link>https://percona.community/blog/2022/10/18/mysql-tracing-a-single-query-with-performance_schema/</link>
      <guid>https://percona.community/blog/2022/10/18/mysql-tracing-a-single-query-with-performance_schema/</guid>
      <pubDate>Tue, 18 Oct 2022 00:00:00 UTC</pubDate>
      <description>My task is to collect performance data about a single query, using PERFORMANCE_SCHEMA (P_S for short) in MySQL, to ship it elsewhere for integration with other data.</description>
      <content:encoded>&lt;p&gt;My task is to collect performance data about a single query, using &lt;code&gt;PERFORMANCE_SCHEMA&lt;/code&gt; (P_S for short) in MySQL, to ship it elsewhere for integration with other data.&lt;/p&gt;
&lt;p&gt;In the grander scheme of things, I will need to define what performance data from a query I am actually interested in.
I will also need to find a way to attribute the query (as seen on the server) to a point in the client codebase, which is not always easy when an ORM or another SQL generator is being used.
And finally, I will need to find a way to view the query execution in the context of the client code execution, because data access is only one part of system performance.&lt;/p&gt;
&lt;p&gt;But this article is about query execution in the server and the instrumentation available to me in MySQL 8, at least to get things started.
So we take a tour of Performance Schema, then run one example query (a simple join) and see what we can find out about it.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;First published on &lt;a href="https://blog.koehntopp.info/2021/09/15/mysql-tracing-a-single-query-with-performanceschema.html" target="_blank" rel="noopener noreferrer"&gt;https://blog.koehntopp.info/&lt;/a&gt; and syndicated here with permission of the &lt;a href="https://percona.community/contributors/koehntopp/"&gt;author&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;h1 id="performance-schema-the-10000-m-view"&gt;Performance Schema, the 10.000 m view&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/performance-schema.html" target="_blank" rel="noopener noreferrer"&gt;The Manual&lt;/a&gt; has a major chapter that covers P_S in details.
The original idea of P_S is to have a bunch of preallocated memory areas without locks, presented to the database itself as tables.&lt;/p&gt;
&lt;p&gt;P_S is unusual in that its “tables” are never locked while you work with them.
That means the values in a “table” can change while you read them.
That is important: if you, for example, calculate percentages, they may not add up to 100%.
If you &lt;code&gt;ORDER BY&lt;/code&gt;, the sort may or may not be stable.&lt;/p&gt;
&lt;p&gt;These are good properties: P_S will not freeze the server, and you won’t kill the server by working with P_S tables.&lt;/p&gt;
&lt;p&gt;It is a good idea to make a copy of P_S tables while you work with them, by turning off subquery merging with &lt;code&gt;select /*+ NO_MERGE(t) */ &lt;/code&gt;, and then materializing P_S tables in subqueries.&lt;/p&gt;
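&lt;p&gt;A minimal sketch of that pattern (using a summary table and columns from the MySQL 8 defaults; any P_S table works the same way). The derived table is materialized once, so the sort runs over a single snapshot instead of over rows that keep changing underneath it:&lt;/p&gt;

```sql
-- Sketch: take one stable snapshot of a P_S table, then sort it.
-- NO_MERGE stops the optimizer from merging the derived table t back
-- into the outer query, which forces materialization.
SELECT /*+ NO_MERGE(t) */ t.event_name, t.count_star
  FROM (SELECT event_name, count_star
          FROM performance_schema.events_waits_summary_global_by_event_name) AS t
 ORDER BY t.count_star DESC
 LIMIT 10;
```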
&lt;p&gt;Originally, P_S also had no secondary indexes, so joining P_S tables against other P_S tables did not work efficiently.
That was probably a good idea, because joining against a table that is changing while you execute the join probably generates random results anyway.
But because it is so common, and because MySQL itself does this now internally in &lt;code&gt;sys.*&lt;/code&gt;, secondary indexes to join efficiently now exist.
That does not make the joins more correct, but at least you get the result faster.&lt;/p&gt;
&lt;p&gt;I wrote about all this &lt;a href="https://blog.koehntopp.info/2020/12/01/not-joining-on-performance-schema.html" target="_blank" rel="noopener noreferrer"&gt;in an earlier article&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="instruments-objects-actors-threads-and-consumers"&gt;Instruments, Objects, Actors, Threads and Consumers&lt;/h2&gt;
&lt;p&gt;The data P_S collects is centered around three major things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Time consumed. In database servers, that is mostly wait time - waiting on I/O or locks.&lt;/li&gt;
&lt;li&gt;Data transferred. In database servers, that is mostly pages read or written. In a way, this is related to I/O wait.&lt;/li&gt;
&lt;li&gt;Memory used. In database servers, that is buffers allocated - how large, and how often, and peak usage.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;P_S collects this in the form of “events” (and takes care to note that P_S events are not binlog events or any other kind of events).
The collection points are in the database server code, which is instrumented, so the collectors are &lt;em&gt;instruments&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The thing that the server code works on may be of a certain kind, for example a table or another &lt;em&gt;object&lt;/em&gt; in the server, but have a variable identity (that is, different tables with different names).
Instruments can be filtered by object name.&lt;/p&gt;
&lt;p&gt;The activity done in the server is done on behalf of a database user, in the form of &lt;em&gt;user@host&lt;/em&gt; or, new in MySQL 8, using roles.
The entity on whose behalf the server is working is called the &lt;em&gt;actor&lt;/em&gt;.&lt;/p&gt;
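&lt;p&gt;Filtering by actor is done with plain SQL against &lt;code&gt;setup_actors&lt;/code&gt;. A hedged sketch, following the pattern from the MySQL manual (the user name &lt;code&gt;app_user&lt;/code&gt; is a made-up example):&lt;/p&gt;

```sql
-- Stop collecting for everybody: the default row matches all actors ...
UPDATE performance_schema.setup_actors
   SET enabled = 'NO', history = 'NO'
 WHERE host = '%' AND user = '%';

-- ... then collect only for the one application user of interest.
INSERT INTO performance_schema.setup_actors
       (host, user, role, enabled, history)
VALUES ('%', 'app_user', '%', 'YES', 'YES');
```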
&lt;p&gt;The activity done in the server is also done in the context of a &lt;em&gt;thread&lt;/em&gt;, some of which are background threads, while the majority in a busy server are usually connection threads.&lt;/p&gt;
&lt;p&gt;And finally, the data collected is put into the in-memory tables of P_S.
These come in various groups, and are called &lt;em&gt;consumers&lt;/em&gt;.&lt;/p&gt;
&lt;img src="https://blog.koehntopp.info/uploads/2021/09/performance_schema_filtering.png" /&gt;
&lt;p&gt;&lt;em&gt;Data is collected from objects using instruments. Instruments can be turned on and off. Their collected data is then filtered by Objects, Actors and Threads, and finally dropped into consumers. Many consumers are aggregates, some collect information specific to one query execution.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For each of these things there is a &lt;code&gt;setup_...&lt;/code&gt; table that controls how event data is collected by the instrumentation, filtered, and then consumed into result tables.
In parallel, object identities are recorded in &lt;code&gt;..._instances&lt;/code&gt; tables, which are used to resolve those identities.&lt;/p&gt;
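&lt;p&gt;The on/off switches are themselves just rows in these setup tables, changed with ordinary &lt;code&gt;UPDATE&lt;/code&gt; statements. A sketch, using instrument and consumer names from the MySQL 8 defaults:&lt;/p&gt;

```sql
-- Turn on the stage instruments and the consumers that store their events.
UPDATE performance_schema.setup_instruments
   SET enabled = 'YES', timed = 'YES'
 WHERE name LIKE 'stage/%';

UPDATE performance_schema.setup_consumers
   SET enabled = 'YES'
 WHERE name LIKE 'events_stages_%';
```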
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tables&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;like&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"setup_%"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Tables_in_performance_schema&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;setup_&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;setup_actors&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;setup_consumers&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;setup_instruments&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;setup_objects&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;setup_threads&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tables&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;like&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"%_instances"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;--------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Tables_in_performance_schema&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;_instances&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;--------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cond_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;file_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mutex_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;prepared_statements_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;rwlock_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;socket_instances&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;--------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;socket_instances&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;----------------------------------------+-----------------------+-----------+-----------+-----------+-------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;EVENT_NAME&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OBJECT_INSTANCE_BEGIN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;THREAD_ID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SOCKET_ID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;IP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PORT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;STATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;----------------------------------------+-----------------------+-----------+-----------+-----------+-------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mysqlx&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tcpip_socket&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106328376&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;18025&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ACTIVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mysqlx&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;unix_socket&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106328688&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ACTIVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;server_tcpip_socket&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106329000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;127&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ACTIVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;server_unix_socket&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106329312&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ACTIVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;client_connection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;106330560&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;41&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ACTIVE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;----------------------------------------+-----------------------+-----------+-----------+-----------+-------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="current-history-and-history-long-tables-vs-summaries"&gt;Current, History and History Long Tables vs. Summaries&lt;/h2&gt;
&lt;p&gt;P_S collects data in a lot of summary tables, which are not our focus here.
Our task is to look at the performance data of a single, individual query to better understand what happened when it ran.&lt;/p&gt;
&lt;p&gt;These unaggregated tables are &lt;code&gt;events_transactions&lt;/code&gt;, &lt;code&gt;events_statements&lt;/code&gt;, &lt;code&gt;events_stages&lt;/code&gt;, and &lt;code&gt;events_waits&lt;/code&gt;.
For each of them, we have &lt;code&gt;_current&lt;/code&gt;, &lt;code&gt;_history&lt;/code&gt;, and &lt;code&gt;_history_long&lt;/code&gt; tables.&lt;/p&gt;
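&lt;p&gt;To see which of these tables exist on a given server, we can simply list them (a quick check; the exact set varies by MySQL version, and the pattern also matches the statement summary tables):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- list the per-event tables for statements; the same pattern
-- works for transactions, stages and waits
SHOW TABLES IN performance_schema LIKE 'events_statements%';
&lt;/code&gt;&lt;/pre&gt;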
&lt;p&gt;The &lt;code&gt;_current&lt;/code&gt; tables contain one entry per thread, describing that thread's current (or most recent) event.
The &lt;code&gt;_history&lt;/code&gt; tables contain a configurable number of entries for each thread, for example 10 per thread.
And the &lt;code&gt;_history_long&lt;/code&gt; tables contain a configurable number of entries shared across all threads, for example 10,000 in total.
As the server continues to execute statements and produce events, old entries are automatically discarded and new entries are added.
Additionally, each query execution is aggregated along several dimensions in summary tables.
Summary table names state these dimensions using &lt;code&gt;by_&lt;dimension&gt;&lt;/code&gt; suffixes, for example &lt;code&gt;_by_user_by_event_name&lt;/code&gt; or similar.&lt;/p&gt;
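&lt;p&gt;The per-thread and shared history sizes are controlled by system variables, which we can inspect like this (the values shown are the usual defaults; a value of -1 means the size is autosized at startup):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;SHOW GLOBAL VARIABLES
LIKE 'performance_schema_events_statements_history%size';
-- performance_schema_events_statements_history_size      | 10
-- performance_schema_events_statements_history_long_size | 10000
&lt;/code&gt;&lt;/pre&gt;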
&lt;p&gt;In current MySQL, P_S is enabled by default.
But not all instruments and consumers are enabled, because some instrumentation slows down query execution, and some consumers can use a lot of memory.
To be fast and safe, P_S allocates all of its memory statically when the configuration changes; its memory footprint is therefore constant, and no allocations are made during query execution.&lt;/p&gt;
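&lt;p&gt;Since this memory budget matters when enabling more consumers, it can be useful to check how much memory P_S itself has allocated. One way (a sketch; P_S instruments its own memory use under the &lt;code&gt;memory/performance_schema/%&lt;/code&gt; event names) is:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- total bytes currently allocated by the Performance Schema itself
SELECT SUM(CURRENT_NUMBER_OF_BYTES_USED) AS ps_bytes
FROM performance_schema.memory_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'memory/performance_schema/%';
&lt;/code&gt;&lt;/pre&gt;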
&lt;p&gt;We can enable all instrumentation completely with this SQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;setup_instruments&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ENABLED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'YES'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TIMED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'YES'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;Query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;494&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;affected&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;Rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;matched&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1216&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Changed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;494&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Warnings&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;setup_consumers&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ENABLED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'YES'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;Query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;affected&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;Rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;matched&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Changed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Warnings&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;When we look at one of these tables, for example &lt;code&gt;events_statements_current&lt;/code&gt;, we see a structure like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;table&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_statements_current&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_statements_current&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;Create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TABLE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;events_statements_current&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;THREAD_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NOT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;EVENT_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NOT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;END_EVENT_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;EVENT_NAME&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NOT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="k"&gt;SOURCE&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;TIMER_START&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;TIMER_END&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;TIMER_WAIT&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;NESTING_EVENT_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;NESTING_EVENT_TYPE&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;enum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'TRANSACTION'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'STATEMENT'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'STAGE'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'WAIT'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;NESTING_EVENT_LEVEL&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;STATEMENT_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;unsigned&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;PRIMARY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;THREAD_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;EVENT_ID&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ENGINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PERFORMANCE_SCHEMA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CHARSET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;utf8mb4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;COLLATE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;utf8mb4_0900_ai_ci&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;That is, events are tagged with a &lt;code&gt;THREAD_ID&lt;/code&gt; (which is not the same as the &lt;code&gt;CONNECTION_ID()&lt;/code&gt; seen in the processlist), an &lt;code&gt;EVENT_ID&lt;/code&gt;/&lt;code&gt;END_EVENT_ID&lt;/code&gt; bracket, various source and timer values, and, for further dissection, a &lt;code&gt;NESTING_EVENT_ID&lt;/code&gt; and &lt;code&gt;NESTING_EVENT_TYPE&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;We can translate processlist ids into thread ids using the &lt;code&gt;P_S.THREADS&lt;/code&gt; table, and then use the result to restrict our view of the &lt;code&gt;events_statements_current&lt;/code&gt; table:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;processlist_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;connection_id&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timer_wait&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_text&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_statements_current&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;88463&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;statement&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;init_net_server_extension&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;94&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timer_wait&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;341&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timer_wait&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_text&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_statements_current&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So running this query took 341 microseconds, or 0.341 ms.
Sources are named after their location in the server source code, &lt;a href="https://github.com/mysql/mysql-server/blob/8.0/sql/conn_handler/init_net_server_extension.cc#L94-L96" target="_blank" rel="noopener noreferrer"&gt;filename and line number&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Events exist in a hierarchy: wait events nest within stage events, which nest within statement events, which nest within transaction events.
Some nested events refer to their own type; statement events, for example, can point to other statement events they are nested in.
Other events refer to their enclosing context in the hierarchy.
The &lt;code&gt;NESTING_EVENT_ID&lt;/code&gt;, &lt;code&gt;NESTING_EVENT_TYPE&lt;/code&gt;, and &lt;code&gt;NESTING_EVENT_LEVEL&lt;/code&gt; columns make these relationships explicit.&lt;/p&gt;
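&lt;p&gt;As a sketch (the column names are from the Performance Schema event tables; the thread id &lt;code&gt;48&lt;/code&gt; is just an example value), we can follow a wait event up to its enclosing event like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;select event_id, event_name, nesting_event_id, nesting_event_type
from events_waits_history
where thread_id = 48;
-- each row's nesting_event_id points at the stage or statement event it ran inside
&lt;/code&gt;&lt;/pre&gt;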
&lt;h2 id="instrument-names"&gt;Instrument names&lt;/h2&gt;
&lt;p&gt;The manual explains &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/performance-schema-instrument-naming.html" target="_blank" rel="noopener noreferrer"&gt;Instrument Names&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;They have path-like names that group instruments hierarchically, for example &lt;code&gt;wait/io/file/innodb/innodb_data_file&lt;/code&gt;.
This is a &lt;code&gt;wait&lt;/code&gt; event, &lt;code&gt;io&lt;/code&gt; related, specifically &lt;code&gt;file&lt;/code&gt; I/O, more specifically from &lt;code&gt;innodb&lt;/code&gt;, and at the finest level &lt;code&gt;innodb_data_file&lt;/code&gt;.
Looking at other columns in the &lt;code&gt;events_waits_history&lt;/code&gt; table, we would see the file name as part of the &lt;code&gt;OBJECT_SCHEMA.OBJECT_NAME&lt;/code&gt; designator for this event.
That means we can see how long we waited for I/O to or from this specific file.&lt;/p&gt;
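&lt;p&gt;For example, a sketch of a per-file summary (assuming the &lt;code&gt;events_waits_history_long&lt;/code&gt; consumer and the file I/O instruments are enabled; output depends entirely on your workload):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;select object_name,
       count(*) as waits,
       sys.format_time(sum(timer_wait)) as total_wait
from events_waits_history_long
where event_name = 'wait/io/file/innodb/innodb_data_file'
group by object_name;
-- shows how often and how long we waited on each InnoDB data file
&lt;/code&gt;&lt;/pre&gt;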
&lt;p&gt;Further up in the nesting we would see, at the statement level, the actual &lt;code&gt;SQL_TEXT&lt;/code&gt;, and also the number of rows scanned.
That means we can get a rough idea of why this particular statement instance was slow: for example, the plan was good and the number of rows was low, but we see a lot of actual file I/O waits, so the buffer pool was probably cold.&lt;/p&gt;
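&lt;p&gt;A sketch of such a statement-level check (assuming the statement history consumer is enabled; the thread id is again our example value):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;select sql_text,
       rows_examined,
       rows_sent,
       sys.format_time(timer_wait) as latency
from events_statements_history
where thread_id = 48
order by event_id desc;
-- rows_examined vs. rows_sent together with latency hints at plan quality vs. I/O cost
&lt;/code&gt;&lt;/pre&gt;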
&lt;p&gt;The manual page above discusses the instrument names at length; it is worth reading to get an overview of what exists and what is measured.
Note that for statement-level entries the instruments change during query execution and become more detailed, reflecting the server's growing understanding of the nature of the statement as it is executed.&lt;/p&gt;
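&lt;p&gt;The available instruments can also be browsed directly; for instance (a sketch, and the exact set varies by server version and build):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;select name, enabled, timed
from setup_instruments
where name like 'statement/sql/%'
limit 10;
-- lists statement-level instruments and whether they are enabled and timed
&lt;/code&gt;&lt;/pre&gt;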
&lt;h1 id="an-example-run"&gt;An example run&lt;/h1&gt;
&lt;p&gt;In a freshly restarted, idle server, we log in to a shell and &lt;code&gt;use world&lt;/code&gt; for the world sample database.
This is a tiny database, but because the server has just been restarted, none of it is cached.
We run a simple query:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;world&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;copop&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cipop&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Europe'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+-------------------------------+-----------+------------------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;copop&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cipop&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+-------------------------------+-----------+------------------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Europe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Albania&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3401200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Tirana&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;270000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Europe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Yugoslavia&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10640000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Beograd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1204000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+-------------------------------+-----------+------------------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now let’s check what we can find out, using a second session:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;processlist_db&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'world'&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;THREAD_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;one_connection&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TYPE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;FOREGROUND&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_USER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_HOST&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_DB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;world&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_COMMAND&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_TIME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_STATE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESSLIST_INFO&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PARENT_THREAD_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ROLE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;INSTRUMENTED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;YES&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;HISTORY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;YES&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CONNECTION_TYPE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Socket&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;THREAD_OS_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;346752&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RESOURCE_GROUP&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;USR_default&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In our case we are only interested in the fact that our &lt;code&gt;thread/sql/one_connection&lt;/code&gt; thread is shown in the processlist as connection &lt;code&gt;9&lt;/code&gt;, but internally has a &lt;code&gt;thread_id&lt;/code&gt; of &lt;code&gt;48&lt;/code&gt;.
The operating system thread ID (&lt;code&gt;THREAD_OS_ID&lt;/code&gt;, the identifier Linux tools such as &lt;code&gt;ps -L&lt;/code&gt; report) is &lt;code&gt;346752&lt;/code&gt;.&lt;/p&gt;
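&lt;p&gt;This mapping between the three identifiers can also be looked up directly (a sketch; &lt;code&gt;9&lt;/code&gt; is the connection id from our example):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;select thread_id, processlist_id, thread_os_id
from threads
where processlist_id = 9;
-- maps the connection id to the internal thread_id and the OS thread id
&lt;/code&gt;&lt;/pre&gt;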
&lt;p&gt;We can use this to check &lt;code&gt;events_transactions_history&lt;/code&gt;, and find:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timer_wait&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_type&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_transactions_history&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+-----------+-----------------+-----------+------------------+--------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+-----------+-----------------+-----------+------------------+--------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;179&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;COMMITTED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1328&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;496&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;STATEMENT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;COMMITTED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1328&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;559&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;278&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;STATEMENT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5606&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;COMMITTED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1328&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5574&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;STATEMENT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+-----------+-----------------+-----------+------------------+--------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Why are there three statement events? We can check:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_text&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_statements_history&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;278&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5574&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_text&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;databases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;278&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tables&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5574&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;copop&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="n"&gt;cipop&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Europe'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The mysql command-line client was started from the sandbox with &lt;code&gt;/home/kris/opt/mysql/8.0.25/bin/mysql --defaults-file=/home/kris/sandboxes/msb_8_0_25/my.sandbox.cnf world&lt;/code&gt;.
Name autocompletion is not disabled,
so on startup the client invisibly runs &lt;code&gt;show databases&lt;/code&gt; to learn the names of all databases for autocompletion.
It then enters the &lt;code&gt;world&lt;/code&gt; database as requested and runs &lt;code&gt;show tables&lt;/code&gt; to learn the names of all tables in the &lt;code&gt;world&lt;/code&gt; database.&lt;/p&gt;
&lt;p&gt;Only then do we reach the prompt and can paste our query.&lt;/p&gt;
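&lt;p&gt;As a side note (not part of the trace above): these two hidden startup queries can be skipped by starting the client with &lt;code&gt;--no-auto-rehash&lt;/code&gt; (short form &lt;code&gt;-A&lt;/code&gt;), at the cost of losing name completion. With that option, only our own statement would appear in the history:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# -A / --no-auto-rehash skips the startup "show databases" and "show tables"
/home/kris/opt/mysql/8.0.25/bin/mysql -A --defaults-file=/home/kris/sandboxes/msb_8_0_25/my.sandbox.cnf world&lt;/code&gt;&lt;/pre&gt;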
&lt;p&gt;We are only interested in &lt;code&gt;thread_id = 48 AND event_id = 5574&lt;/code&gt;, our own query.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_statements_history&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5574&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;THREAD_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;EVENT_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5574&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;END_EVENT_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6624&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;EVENT_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;statement&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SOURCE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;init_net_server_extension&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;94&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TIMER_START&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;61628458852000&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TIMER_END&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;61631769329000&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TIMER_WAIT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3310477000&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LOCK_TIME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;224000000&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SQL_TEXT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;copop&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cipop&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Europe'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DIGEST&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;409&lt;/span&gt;&lt;span class="n"&gt;c336982f0d3d45c4b29da77fe83aed12c6043e8ce9771c11ec82ff347e647&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DIGEST_TEXT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;copop&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;population&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;cipop&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;JOIN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;capital&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;WHERE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;continent&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CURRENT_SCHEMA&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;world&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OBJECT_TYPE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OBJECT_SCHEMA&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OBJECT_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OBJECT_INSTANCE_BEGIN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MYSQL_ERRNO&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;RETURNED_SQLSTATE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;MESSAGE_TEXT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ERRORS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;WARNINGS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ROWS_AFFECTED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ROWS_SENT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ROWS_EXAMINED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;92&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;CREATED_TMP_DISK_TABLES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CREATED_TMP_TABLES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SELECT_FULL_JOIN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SELECT_FULL_RANGE_JOIN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SELECT_RANGE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SELECT_RANGE_CHECK&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SELECT_SCAN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SORT_MERGE_PASSES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SORT_RANGE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SORT_ROWS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SORT_SCAN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NO_INDEX_USED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NO_GOOD_INDEX_USED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NESTING_EVENT_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NESTING_EVENT_TYPE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NESTING_EVENT_LEVEL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STATEMENT_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;125&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The statement is &lt;code&gt;statement/sql/select&lt;/code&gt;.
It took 3310477000 picoseconds (3.31 ms) to run.
The &lt;code&gt;sql_text&lt;/code&gt; is the full text of the statement (truncated at a configurable cutoff, &lt;code&gt;performance_schema_max_sql_text_length&lt;/code&gt;, in order to manage memory consumption).
The parsed statement is called the &lt;code&gt;digest_text&lt;/code&gt; - identifiers are quoted, whitespace is normalized, literal constants are replaced with &lt;code&gt;?&lt;/code&gt; placeholders, and (not shown here) variable-length &lt;code&gt;WHERE ... IN (...)&lt;/code&gt; lists are shortened with ellipses.
This normalized text is then hashed, producing the &lt;code&gt;digest&lt;/code&gt;, an identifier shared by all identically formed statements.
We also learn the number of rows examined (92) and sent (46).
No special flags indicating specific execution modes were set.&lt;/p&gt;
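&lt;p&gt;To make the normalization concrete, here is a hypothetical before/after pair (not taken from the session above) showing how a statement text maps to its digest text - identifiers quoted, constants replaced, and a literal &lt;code&gt;IN&lt;/code&gt; list collapsed:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- SQL_TEXT: the statement as the client sent it
SELECT name FROM city WHERE countrycode = 'DEU' AND id IN (1, 2, 3);

-- DIGEST_TEXT: the normalized form that gets hashed into the digest
SELECT `name` FROM `city` WHERE `countrycode` = ? AND `id` IN (...)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Any two statements that differ only in their constants normalize to the same digest text, and therefore aggregate under the same digest in the summary tables.&lt;/p&gt;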
&lt;p&gt;We can use the &lt;code&gt;event_id&lt;/code&gt; to look even deeper:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timer_wait&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timer&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_stages_history&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5574&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;order&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+--------------------------------------+----------------------+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+--------------------------------------+----------------------+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5610&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;optimizing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;270&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5611&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;statistics&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;534&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;703&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5710&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;preparing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;618&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;93&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5712&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;executing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_union&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1126&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6593&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_select&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;586&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;86&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6594&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;4542&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;93&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6596&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;waiting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;handler&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;commit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1594&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;closing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tables&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;4593&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6621&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;freeing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5042&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6623&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cleaning&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;up&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sql_parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;2252&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+--------------------------------------+----------------------+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;These are the various execution stages of our statement - we select by &lt;code&gt;thread_id&lt;/code&gt; and use the statement&#39;s &lt;code&gt;event_id&lt;/code&gt;, &lt;code&gt;5574&lt;/code&gt;, as the &lt;code&gt;nesting_event_id&lt;/code&gt;, ordered by &lt;code&gt;event_id&lt;/code&gt;.
Most of the time was spent in the &lt;code&gt;stage/sql/statistics&lt;/code&gt; phase, where the optimizer looks up table statistics to build a good execution plan, and then in the actual query execution in &lt;code&gt;stage/sql/executing&lt;/code&gt;.
The former took 0.7 ms (703.58 us), the latter 2.26 ms.&lt;/p&gt;
&lt;p&gt;We are specifically interested in what took so long, so we look into the waits nested under event_ids 5611 and 5712 - finding nothing for those, and nothing else particularly time-consuming either:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timer_wait&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;operation&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_waits_history&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;order&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+------------------------------------------+-----------+--------------------+------------------+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;nesting_event_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;operation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+------------------------------------------+-----------+--------------------+------------------+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6614&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;trx_mutex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;56&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6615&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;trx_mutex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6616&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;trx_mutex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6617&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;trx_mutex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6618&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;trx_mutex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6619&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;trx_mutex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6620&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;LOCK_table_cache&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6622&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;client_connection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6621&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;send&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6624&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;synch&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;THD&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;LOCK_thd_query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;124&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STAGE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6623&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;lock&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6626&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;client_connection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;WAIT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6625&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;recv&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+----------+------------------------------------------+-----------+--------------------+------------------+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
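&lt;p&gt;As a quick sanity check (a sketch, not part of the original walkthrough, assuming the &lt;code&gt;events_waits_history_long&lt;/code&gt; consumer is enabled in &lt;code&gt;setup_consumers&lt;/code&gt;), the recent waits for the thread can also be aggregated per event name to confirm that no single wait class dominates:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- sum recent wait time per event name for thread 48;
-- requires the events_waits_history_long consumer to be enabled
select event_name,
       count(*) as cnt,
       sys.format_time(sum(timer_wait)) as total_wait
  from events_waits_history_long
 where thread_id = 48
 group by event_name
 order by sum(timer_wait) desc;
&lt;/code&gt;&lt;/pre&gt;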
&lt;p&gt;The I/O does show up in a global summary, though, and the timings there make sense in the context of the experiment:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_schema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;count_star&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_timer_read&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="k"&gt;read&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_timer_write&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="k"&gt;write&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format_time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_timer_fetch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="k"&gt;fetch&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;table_io_waits_summary_by_table&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_schema&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'world'&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;G&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TABLE&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;object_schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;world&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;count_star&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;read&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;569&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;write&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ps&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;569&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;***************************&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TABLE&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="n"&gt;object_schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;world&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;object_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;count_star&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;read&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;703&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;write&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ps&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;703&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;us&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;But why are the I/O times not visible to us?
That is not obvious.
My theory was that the reads happen asynchronously on some background thread.
But a quick query shows no time spent on reader threads.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;events_waits_summary_by_thread_by_event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;max_timer_wait&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'wait/io/file/innodb/innodb_data_file'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+---------------------------------------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+---------------------------------------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io_write_thread&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io_write_thread&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io_write_thread&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;io_write_thread&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;page_flush_coordinator_thread&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;clone_gtid_thread&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;one_connection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;FOREGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;one_connection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;FOREGROUND&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+---------------------------------------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It seems we can neither attribute the time spent loading data from disk to a specific thread, nor account for the runtime of certain stages by looking at waits.
That&amp;rsquo;s unexpected.&lt;/p&gt;
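&lt;p&gt;For completeness, the file-level aggregate counters can still be inspected, even if they don&amp;rsquo;t attribute time to a thread. A quick sketch against &lt;code&gt;file_summary_by_instance&lt;/code&gt; (the &lt;code&gt;%world%&lt;/code&gt; filename filter is an assumption for this sandbox layout):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- per-file read counters and time, formatted via the sys helpers
select
    event_name,
    count_read,
    sys.format_time(sum_timer_read) as total_read_time,
    sys.format_bytes(sum_number_of_bytes_read) as bytes_read
from file_summary_by_instance
where file_name like '%world%';
&lt;/code&gt;&lt;/pre&gt;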
&lt;h1 id="memory-only-as-summary"&gt;Memory only as summary&lt;/h1&gt;
&lt;p&gt;Several &lt;code&gt;memory_%&lt;/code&gt; tables exist to track memory usage in the server.
All of them are summary tables; there are no memory events tables that could trace memory usage per query.
That might be okay.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tables&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;like&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'%memory%'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Tables_in_performance_schema&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory_summary_by_account_by_event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory_summary_by_host_by_event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory_summary_by_thread_by_event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory_summary_by_user_by_event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory_summary_global_by_event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
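&lt;p&gt;As an aside, the sys schema wraps some of these summaries into more readable per-thread views; a sketch (output depends entirely on the workload):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- top memory consumers by thread, pre-aggregated by the sys schema
select
    thread_id,
    user,
    current_allocated,
    current_max_alloc
from sys.memory_by_thread_by_current_bytes
limit 5;
&lt;/code&gt;&lt;/pre&gt;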
&lt;p&gt;We can do interesting things with tables such as &lt;code&gt;memory_summary_by_thread_by_event_name&lt;/code&gt;, at least on our mostly idle server.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;mysql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8025&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msandbox&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;count_alloc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sum_number_of_bytes_alloc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;high_number_of_bytes_used&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory_summary_by_thread_by_event_name&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;HIGH_NUMBER_OF_BYTES_USED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;order&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;HIGH_NUMBER_OF_BYTES_USED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;desc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;limit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+-------------------------------+-------------+---------------------------+---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;count_alloc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sum_number_of_bytes_alloc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;high_number_of_bytes_used&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+-------------------------------+-------------+---------------------------+---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;THD&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;main_mem_root&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1181008&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;613544&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;226&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1059368&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;250032&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dd&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;objects&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;205&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;47432&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;44648&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ha_innodb&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35784&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35784&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;innodb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;fil0fil&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32800&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;-----------+-------------------------------+-------------+---------------------------+---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h1 id="no-explain"&gt;No EXPLAIN&lt;/h1&gt;
&lt;p&gt;Another thing that would be useful to collect from P_S is the actual execution plan of a query.
While we can explain many statements by running &lt;code&gt;EXPLAIN &amp;lt;stmt&amp;gt;&lt;/code&gt;, and while we can run &lt;code&gt;EXPLAIN FOR CONNECTION ...&lt;/code&gt;, the former is not the recorded execution plan, and the latter only works while the query is running.
It shows the actual execution plan while the query executes, but that plan is never recorded.&lt;/p&gt;
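&lt;p&gt;As a hedged sketch of the latter: &lt;code&gt;EXPLAIN FOR CONNECTION&lt;/code&gt; takes a connection (processlist) id, not a P_S thread id, so the thread id from the examples above first has to be mapped via &lt;code&gt;performance_schema.threads&lt;/code&gt;. The processlist id below is made up for illustration.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- map the P_S thread id to a connection (processlist) id
select processlist_id from performance_schema.threads where thread_id = 48;

-- explain the statement currently running in that connection
-- (substitute the processlist_id returned above)
explain for connection 123;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This fails with an error as soon as the statement in that connection has finished, which is exactly the limitation described above.&lt;/p&gt;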
&lt;h1 id="summary"&gt;Summary&lt;/h1&gt;
&lt;p&gt;A lot of information about query execution can be gathered from P_S.
Query execution can be broken down into statements, stages, and waits.
Specifically, statements collect a lot of interesting quality flags.
Stages can report percentages of completion for long-running queries and give a general feel for where in the query execution time is spent.
Waits should be able to attribute time to individual operations in the database server, but specifically for file I/O this seems to be more complicated, and I have not been able to solve it.&lt;/p&gt;
&lt;p&gt;We can see waits for I/O in summary tables, and we can see a lot of other statistical information in other summary tables.
We can also use additional tables not covered here for debugging (for example &lt;code&gt;DATA_LOCKS&lt;/code&gt; for locking behavior).&lt;/p&gt;
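&lt;p&gt;For instance, a minimal look at current locks, assuming MySQL 8.0 where this table lives in &lt;code&gt;performance_schema&lt;/code&gt;, might be:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- which locks are currently held or waited for, and by which thread
select thread_id, object_schema, object_name, index_name,
       lock_type, lock_mode, lock_status
from performance_schema.data_locks;
&lt;/code&gt;&lt;/pre&gt;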
&lt;p&gt;Memory instrumentation is interesting, but at this stage it is unclear to me if it is sufficient.&lt;/p&gt;
&lt;p&gt;It seems to be really hard to record execution plans together with statements.&lt;/p&gt;
&lt;p&gt;More experimentation with more complicated queries is necessary to see if it is possible to see things like sorting, temp files and similar operations, and attribute time to these operations.&lt;/p&gt;
&lt;p&gt;The number of queries on P_S necessary to extract information about a single query is staggering: roughly a 10:1 ratio.
At least filters exist and are on by default, so that I do not have to see my own monitoring queries in my monitoring.
That is good.&lt;/p&gt;
&lt;p&gt;I could really use a single large JSON blob containing the entire package of performance data for a query, all at once: one query to trace one query.
That is, the information from the transaction, statement, stages, waits, the execution plan, and the memory consumption for a given transaction or statement, in one go.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;First published on &lt;a href="https://blog.koehntopp.info/2021/09/15/mysql-tracing-a-single-query-with-performanceschema.html" target="_blank" rel="noopener noreferrer"&gt;https://blog.koehntopp.info/&lt;/a&gt; and syndicated here with permission of the &lt;a href="https://percona.community/contributors/koehntopp/"&gt;author&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Kristian Köhntopp</author>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2022/10/mysql-tracing-a-single-query_hu_b2d226be793a1ee7.jpg"/>
      <media:content url="https://percona.community/blog/2022/10/mysql-tracing-a-single-query_hu_bd5f0967e8de6c92.jpg" medium="image"/>
    </item>
    <item>
      <title>What is Open Source and why should you care</title>
      <link>https://percona.community/blog/2022/10/14/what-is-open-source-and-why-should-you-care/</link>
      <guid>https://percona.community/blog/2022/10/14/what-is-open-source-and-why-should-you-care/</guid>
      <pubDate>Fri, 14 Oct 2022 00:00:00 UTC</pubDate>
      <description>The term Open Source Software reminds me of Abraham Lincoln’s widely accepted definition of Democracy. Lincoln said, “Democracy is the government of the people, by the people, and for the people”. Similarly, Open Source is software of the community, by the community, and for the community.</description>
      <content:encoded>&lt;p&gt;The term Open Source Software reminds me of &lt;a href="https://en.wikipedia.org/wiki/Abraham_Lincoln#Gettysburg_Address_%281863%29" target="_blank" rel="noopener noreferrer"&gt;Abraham Lincoln’s&lt;/a&gt; widely accepted definition of Democracy. Lincoln said, “Democracy is the government of the people, by the people, and for the people”. Similarly, Open Source is &lt;strong&gt;software&lt;/strong&gt; of the &lt;strong&gt;community&lt;/strong&gt;, by the &lt;strong&gt;community&lt;/strong&gt;, and for the &lt;strong&gt;community&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Formally, Open Source refers to &lt;strong&gt;projects&lt;/strong&gt; or &lt;strong&gt;programs&lt;/strong&gt; whose source code can be modified, shared, and/or commercialized by the public/community at will.
A &lt;a href="https://choosealicense.com/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;license&lt;/strong&gt;&lt;/a&gt; that is attached to an Open Source project determines the extent to which the project can be consumed, modified, or utilized.&lt;/p&gt;
&lt;p&gt;Open Source is everywhere; the browser you are using to view this page is likely an Open Source one. The Android OS, the Linux kernel, and JavaScript engines are examples of OS projects that people like you and I improve daily. As a developer, technical writer, or designer, Open Source opens the doors to many opportunities: jobs, networking, communication skills, and more.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The term Open Source in this article will mostly refer to Open Source Software and Open Source in general.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id="table-of-contents"&gt;&lt;a href="#table-of-contents"&gt;Table of contents&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#benefits-of-open-source"&gt;Benefits of open source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#getting-started-with-open-source"&gt;Getting started with Open Source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#clearing-up-some-misconceptions"&gt;Clearing up some misconceptions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#where-to-find-open-source-projects"&gt;Where to find Open Source projects&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#finding-projects-to-contribute-to"&gt;Finding projects to contribute to&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#knowing-where-to-make-changes-or-contributions-to-a-project"&gt;Knowing where to make changes or contributions in a project&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#what-kind-of-contributions-are-legit"&gt;What kind of contributions are legit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#some-interesting-github-projects-you-can-contribute-to"&gt;Some Interesting GitHub projects you can contribute to&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#joining-the-open-source-community"&gt;Joining the Open Source community&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#notable-open-source-advocates"&gt;Notable Open Source Advocates&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#bonus-hacktoberfest"&gt;Bonus: Hacktoberfest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#conclusion"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="benefits-of-open-source"&gt;Benefits of open source&lt;/h2&gt;
&lt;p&gt;Being a member of the Open Source community brings many benefits, including job opportunities, sponsorships, networking, improved communication skills, software integrity, and much more. Job recruiters look out for people who give back to the community. Contributing to OS will boost your chances of getting jobs because it shows that you are willing to foster new relationships and work with others. Remotely interacting with new people also improves your ability to communicate ideas effectively when working on projects.&lt;/p&gt;
&lt;p&gt;In the OS community, you are free to break things and make honest mistakes. Others will fix them and ensure that the software keeps working correctly.
With Open Source, even when you abandon a project, the community will keep it alive. This saves you a great deal of time to work on other projects. Some major benefits of contributing to Open Source include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Getting free swag.&lt;/li&gt;
&lt;li&gt;Networking and meeting new people.&lt;/li&gt;
&lt;li&gt;It makes you a better coder/writer - People will correct you whenever you break stuff or find errors in your work.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="getting-started-with-open-source"&gt;Getting started with Open Source&lt;/h2&gt;
&lt;p&gt;Are you new to the Open Source Community? If your answer is yes, then you are in the right place. Even if your answer is No, there are a few tips that you can still take away from this post.&lt;/p&gt;
&lt;h2 id="clearing-up-some-misconceptions"&gt;Clearing up some misconceptions&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The ability to code is necessary for Open Source&lt;/strong&gt;: Open Source is not limited to software; there are many low-code/no-code OS projects that you can find on GitHub today.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;High technical skill is required to start contributing&lt;/strong&gt;: This is not true at all; simple changes like fixing typos, grammatical corrections, and spelling errors are all welcome in the OS community as long as they add value or improvement.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="where-to-find-open-source-projects"&gt;Where to find Open Source projects&lt;/h2&gt;
&lt;p&gt;If you are looking for Open Source projects to contribute to, look no further than GitHub, the de facto home of Open Source projects.
GitHub hosts millions of Open Source projects, and there are plenty that align with your specialty if you know where to look.&lt;/p&gt;
&lt;p&gt;Finding Open Source projects to contribute to can be difficult, especially for newcomers. One reason is that they are not looking in the right places. When it comes to contributing to Open Source, newcomers are often faced with two problems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Finding projects to contribute to&lt;/li&gt;
&lt;li&gt;Knowing where to make changes or contributions to a project&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="finding-projects-to-contribute-to"&gt;Finding projects to contribute to&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://github.com" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Click issues in the navigation panel&lt;/li&gt;
&lt;li&gt;Filter the results by entering a keyword into the search bar, e.g. good first issue, help-wanted, JavaScript, technical writing, documentation, React, etc.&lt;/li&gt;
&lt;li&gt;Search for something specific within your domain, such as react, documentation, or design. &lt;strong&gt;good first issue&lt;/strong&gt; and &lt;strong&gt;first-timers only&lt;/strong&gt; are labels for issues that are appropriate for newcomers to work on. Try these first if you are just starting out.&lt;/li&gt;
&lt;li&gt;Find an issue you would like to work on. If it hasn’t been assigned to someone else, ask the maintainers if you can work on the issue and create a PR for it.&lt;/li&gt;
&lt;li&gt;Many big projects often have abandoned or incomplete issues. You can find these by searching for “todo” or looking through much older issues.&lt;/li&gt;
&lt;/ol&gt;
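&lt;p&gt;As an illustration, the filtering in steps 3 and 4 can be combined into a single query using GitHub’s search qualifiers (the label name and language here are just examples; projects label their issues differently):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;is:issue is:open label:"good first issue" language:javascript&lt;/code&gt;&lt;/pre&gt;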
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Alternatively, you can visit any of the websites below to find issues without going through the hassle of following the steps above. The following websites are tailor-made for finding issues easily:&lt;/p&gt;&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://up-for-grabs.net" target="_blank" rel="noopener noreferrer"&gt;up-for-grabs.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.firsttimersonly.com/" target="_blank" rel="noopener noreferrer"&gt;firstTimersOnly&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://goodfirstissues.com/" target="_blank" rel="noopener noreferrer"&gt;goodFirstIssues&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://codetriage.com" target="_blank" rel="noopener noreferrer"&gt;codetriage.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://goodfirstissue.dev/" target="_blank" rel="noopener noreferrer"&gt;goodfirstissue.dev&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you are not familiar with Open Source etiquette, then I suggest that you read these briefly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://opensource.guide/how-to-contribute/#finding-a-project-to-contribute-to" target="_blank" rel="noopener noreferrer"&gt;Open Source Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/MDN/Contribute/Open_source_etiquette" target="_blank" rel="noopener noreferrer"&gt;Open Source Etiquette&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="knowing-where-to-make-changes-or-contributions-to-a-project"&gt;Knowing where to make changes or contributions to a project&lt;/h3&gt;
&lt;p&gt;Believe it or not, knowing how to find bugs, errors, mistakes, and the like in a project’s files is also a skill. If you go into a project’s repo, you’ll likely spot an issue somewhere if you look closely enough. Some of the most common issues you can find in repos include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Broken links in websites, project documentation, etc&lt;/li&gt;
&lt;li&gt;Grammatical and Spelling errors&lt;/li&gt;
&lt;li&gt;Lacklustre designs&lt;/li&gt;
&lt;li&gt;Lack of translation, a11y, &lt;a href="https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA" target="_blank" rel="noopener noreferrer"&gt;ARIA&lt;/a&gt; etc.&lt;/li&gt;
&lt;li&gt;Absence of comprehensive documentation&lt;/li&gt;
&lt;li&gt;Absence of issue/PR templates in a repo&lt;/li&gt;
&lt;li&gt;Lack of contribution guidelines&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Always make sure you go through the README.md and CONTRIBUTING.md files of a repository before creating Pull Requests. This will ensure that you adhere to a project’s guidelines for making contributions. Sometimes valuable PRs are rejected because contributors did not follow a project’s rules. This could be something as trivial as not giving your PR an appropriate title or description, so be wary of this.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;These are just a few off the top of my head. There are many ways in which you can find issues with a project. Ensure that you do not open irrelevant or low-quality issues. Also, bear these in mind before opening issues.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A good way to know if your issue is a quality one is to ask yourself, “How would this change or suggestion help the users and maintainers of this project?” If your answer does not sound credible to you, do not open it.&lt;/li&gt;
&lt;li&gt;Your issue should tell the maintainers of a project the relevance of your suggestion.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="what-kind-of-contributions-are-legit"&gt;What kind of contributions are legit?&lt;/h2&gt;
&lt;p&gt;Contributions to Open Source are not limited to Pull Requests. Raising issues, doing code reviews, making financial contributions, etc. are other ways of contributing to Open Source. Until recently, non-code work like blog posts, Figma designs, etc. was not popular in the Open Source community. Things are beginning to change, however. This year’s &lt;a href="https://hacktoberfest.com" target="_blank" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt;, which is accepting low-code and non-code contributions for the first time in 8 years, is a good example of this new trend.&lt;/p&gt;
&lt;h2 id="some-interesting-github-projects-you-can-contribute-to"&gt;Some Interesting GitHub projects you can contribute to&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/percona" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; - Percona is an Open Source platform for monitoring, securing and optimizing database environments (MySQL, PostgreSQL, MongoDB etc) on any infrastructure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/EddieHubCommunity/LinkFree" target="_blank" rel="noopener noreferrer"&gt;LinkFree&lt;/a&gt; - Free Open Source alternative to Linktree.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/ykdojo/defaang" target="_blank" rel="noopener noreferrer"&gt;defaang&lt;/a&gt; - Free Open Source alternative to LeetCode.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/Dun-sin/Code-Magic" target="_blank" rel="noopener noreferrer"&gt;Code-Magic&lt;/a&gt; - A website for generating performant CSS with GUI. Plus it’s purely based on TypeScript, CSS, and HTML (No frameworks involved).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/freeCodeCamp/Developer_Quiz_Site" target="_blank" rel="noopener noreferrer"&gt;FreeCodeCamp Quiz Site&lt;/a&gt; - You can add new quizzes to the site by following the instructions in the repo.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/Njong392/Abbreve" target="_blank" rel="noopener noreferrer"&gt;Abbreve&lt;/a&gt; - A website for quickly checking the meaning of common abbreviations and slang used for communicating over social media.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="joining-the-open-source-community"&gt;Joining the Open Source community&lt;/h2&gt;
&lt;p&gt;Community, community, community…
We keep mentioning community in the world of Open Source. This is because Open Source would not exist without the amazing group of individuals who are constantly working hard to make Open Source projects free and accessible to people like you and me.&lt;/p&gt;
&lt;p&gt;Below is a list of arguably the most popular Open Source communities (that I know of).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://discord.gg/freecodecamp-org-official-fi-fo-692816967895220344" target="_blank" rel="noopener noreferrer"&gt;FreeCodeCamp&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="http://discord.eddiehub.org/" target="_blank" rel="noopener noreferrer"&gt;EddieHub&lt;/a&gt; Everything Open Source (contributions, hackathons, first timers, etc).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://discord.gg/4c-784142072763383858" target="_blank" rel="noopener noreferrer"&gt;4C&lt;/a&gt; A large OS community for OS projects and networking.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://discord.gg/nNtVfKddDD" target="_blank" rel="noopener noreferrer"&gt;defaang / dojo clan&lt;/a&gt; YK Dojo’s OS community.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; - The largest Open Source community in the world, all the communities listed above have their repositories hosted on GitHub.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You should have a good reason for joining any of these communities. If you want to benefit from these communities, you should engage with the members, interact and find people who are within your domain of interests, and do not spam. Also, ensure that you adhere to the rules of these communities too.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h3 id="notable-open-source-advocates"&gt;Notable Open Source Advocates&lt;/h3&gt;
&lt;p&gt;Follow these people on Twitter to get all the latest updates on the happenings in the Open Source community:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://twitter.com/eddiejaoude" target="_blank" rel="noopener noreferrer"&gt;Eddie Jaoude&lt;/a&gt; - Eddie is a devoted member of the OS community, he hosts Twitter spaces regularly to help people who are just getting started in Open Source. He has a YouTube channel where he creates content beyond Open Source (freelancing tips, content creation tips, mini-tutorials, etc)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://mobile.twitter.com/dunsinwebdev" target="_blank" rel="noopener noreferrer"&gt;Dunsin&lt;/a&gt; - Dunsin is the creator of Code-Magic, which is a website for generating CSS code for different effects through GUI. She’s also an active member of the OS community on Twitter.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://twitter.com/ykdojo" target="_blank" rel="noopener noreferrer"&gt;YK Dojo&lt;/a&gt; - YK is also a popular YouTuber and an avid member of the OS community. He often does live-coding Streams on Twitch too&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are just a few names in the OS community on Twitter. There are so many more Open Source advocates on Twitter apart from the 3 individuals listed above.&lt;/p&gt;
&lt;h2 id="bonus-hacktoberfest"&gt;Bonus: Hacktoberfest&lt;/h2&gt;
&lt;p&gt;Hacktoberfest is a yearly celebration of Open Source and the Open Source community throughout October, organized by DigitalOcean.&lt;/p&gt;
&lt;p&gt;A minimum of four accepted pull requests before the 25th of October is required to win a Hacktoberfest-themed shirt and sticker. Starting this year, low-code and no-code contributions are also accepted as valid contributions.&lt;/p&gt;
&lt;p&gt;If you’d like to take part in the next Hacktoberfest, set a reminder for October now to avoid missing out on all the fun.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Imagine a world without Open Source. As you can see, there’s nothing to hate about it: Open Source exposes you to opportunities, new friends, and the chance to win swag or money. Without Open Source, the world would certainly not be as technologically advanced as it is today.&lt;/p&gt;</content:encoded>
      <author>Teslim Balogun</author>
      <category>open-source</category>
      <category>percona</category>
      <category>github</category>
      <category>beginners</category>
      <media:thumbnail url="https://percona.community/blog/2022/10/open-source-cover_hu_b61db9297ad75df3.jpg"/>
      <media:content url="https://percona.community/blog/2022/10/open-source-cover_hu_17dd52579b3e5495.jpg" medium="image"/>
    </item>
    <item>
      <title>Learning Kubernetes Operators with Percona Operator for MongoDB</title>
      <link>https://percona.community/blog/2022/10/13/learning-kubernetes-operators-with-percona-operator-for-mongodb/</link>
      <guid>https://percona.community/blog/2022/10/13/learning-kubernetes-operators-with-percona-operator-for-mongodb/</guid>
      <pubDate>Thu, 13 Oct 2022 00:00:00 UTC</pubDate>
<description>One of the topics that has resonated with me since the first KubeCon I attended in 2018 is Kubernetes Operators.</description>
      <content:encoded>&lt;p&gt;One of the topics that have resonated a lot for me since the first KubeCon I attended in 2018 is &lt;strong&gt;Kubernetes Operators&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The concept of Operators was introduced earlier, in 2016, by the CoreOS Linux development team, who were looking for a way to improve automated container management in Kubernetes.&lt;/p&gt;
&lt;h2 id="what-do-we-mean-by-a-kubernetes-operator"&gt;What do we mean by a Kubernetes Operator?&lt;/h2&gt;
&lt;p&gt;We use the &lt;a href="https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/#:~:text=K8s%20Operators%20are%20controllers%20for,Custom%20Resource%20Definitions%20%28CRD%29." target="_blank" rel="noopener noreferrer"&gt;definition from the CNCF&lt;/a&gt;, which echoes how the Kubernetes project defines &lt;strong&gt;“Operator”&lt;/strong&gt; simply: &lt;strong&gt;“Operators are software extensions that use custom resources to manage applications and their components.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This means that some applications that run on Kubernetes still require manual operations to complete their deployment cycle, because Kubernetes itself cannot automate those steps. That is what Operators take care of: automating the manual processes of applications deployed in Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How can this be possible?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Operators use and extend the &lt;strong&gt;Kubernetes API&lt;/strong&gt; (which covers the basics a user needs to interact with the Kubernetes cluster) and create &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#:~:text=A%20custom%20resource%20is%20an,resources%2C%20making%20Kubernetes%20more%20modular." target="_blank" rel="noopener noreferrer"&gt;custom resources&lt;/a&gt; that add new functionality tailored to an application’s needs, keeping it flexible and scalable.&lt;/p&gt;
&lt;p&gt;Once the &lt;strong&gt;custom resource&lt;/strong&gt; is created, its objects can be managed using kubectl, just like default Kubernetes resources such as Deployments and Pods.&lt;/p&gt;
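&lt;p&gt;As an illustration of what such a custom resource looks like, here is an abbreviated, hypothetical sketch in the spirit of the &lt;code&gt;deploy/cr.yaml&lt;/code&gt; used later in this post (fields trimmed; consult the Operator’s documentation for the real schema):&lt;/p&gt;

```yaml
# Abbreviated, illustrative custom resource for Percona Operator for MongoDB.
# Real deployments should start from the deploy/cr.yaml shipped with the Operator.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  replsets:
    - name: rs0
      size: 3
```

&lt;p&gt;Once applied, kubectl treats this object like any built-in resource: you can list, describe, and edit it, and the Operator reconciles the running MongoDB deployment to match it.&lt;/p&gt;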
&lt;p&gt;Here we see the difference between the workflows with and without operators.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;With Operators&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/with-operators_hu_25016fe3b8ee1355.png 480w, https://percona.community/blog/2022/13/with-operators_hu_e7bb015fe7e101a6.png 768w, https://percona.community/blog/2022/13/with-operators_hu_858a6341bc137533.png 1400w"
src="https://percona.community/blog/2022/13/with-operators.png" alt="With Operators" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Without Operators&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/without-operators_hu_5377d9c9d2b94577.png 480w, https://percona.community/blog/2022/13/without-operators_hu_f7bbc57613f173b6.png 768w, https://percona.community/blog/2022/13/without-operators_hu_3c994909e3b25a95.png 1400w"
src="https://percona.community/blog/2022/13/without-operators.png" alt="Without Operators" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The above illustration is based on a presentation by &lt;a href="https://youtu.be/i9V4oCa5f9I?t=403" target="_blank" rel="noopener noreferrer"&gt;Sai Vennam&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It is time for an example!
We will see how Percona Operator for MongoDB works.&lt;/p&gt;
&lt;p&gt;Percona Operator for MongoDB contains everything we need to quickly and consistently deploy and scale &lt;a href="https://www.percona.com/software/mongodb/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB instances&lt;/a&gt; into a Kubernetes cluster on-premises or in the cloud.&lt;/p&gt;
&lt;p&gt;You can find Percona Operator for MongoDB officially in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://artifacthub.io/packages/olm/community-operators/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Artifact Hub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://operatorhub.io/operator/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Operator Hub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Why does Percona Server for MongoDB (a database) need a Kubernetes Operator?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes was designed for stateless applications, which in many cases do not need Operators because no extra automation logic is required. Stateful applications like databases, however, do need Operators, because Kubernetes cannot automate their entire lifecycle natively.&lt;/p&gt;
&lt;p&gt;One of the main benefits of operators is the automation of repetitive tasks that are often handled by human operators, eliminating errors in application lifecycle management.&lt;/p&gt;
&lt;h2 id="installing-mongodb-percona-operator-using-gke"&gt;Installing MongoDB Percona Operator using GKE&lt;/h2&gt;
&lt;p&gt;This guide shows you how to deploy &lt;strong&gt;Percona Operator for MongoDB&lt;/strong&gt; on &lt;strong&gt;Google Kubernetes Engine (GKE)&lt;/strong&gt;. For this demo we use GKE because it is one of the quickest ways to set up Kubernetes in Google Cloud. This demonstration assumes you have some experience with the platform. For more information on GKE, see the &lt;a href="https://cloud.google.com/kubernetes-engine/docs/deploy-app-cluster" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Engine Quickstart&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As prerequisites, we need &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/gke.html#prerequisites" target="_blank" rel="noopener noreferrer"&gt;Google Cloud Shell and kubectl&lt;/a&gt;. You can find the installation guides for AWS and Azure in the &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/#advanced-installation-guides" target="_blank" rel="noopener noreferrer"&gt;Percona documentation&lt;/a&gt;. Let’s start!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a GKE cluster with three nodes:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gcloud container clusters create my-cluster-name --project percona-product --zone us-central1-a --cluster-version 1.23 --machine-type n1-standard-4 --num-nodes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/1-operators-gcloud_hu_de723b372159705f.png 480w, https://percona.community/blog/2022/13/1-operators-gcloud_hu_ea395a17e92310cb.png 768w, https://percona.community/blog/2022/13/1-operators-gcloud_hu_861f73d47bb21ff9.png 1400w"
src="https://percona.community/blog/2022/13/1-operators-gcloud.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Now configure command-line access to your newly created cluster so that kubectl can use it:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gcloud container clusters get-credentials my-cluster-name --zone us-central1-a --project percona-product&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/2-operators-get-credentials_hu_8097b375aa73fbcf.png 480w, https://percona.community/blog/2022/13/2-operators-get-credentials_hu_e5b63776963c800b.png 768w, https://percona.community/blog/2022/13/2-operators-get-credentials_hu_e169e84be299341e.png 1400w"
src="https://percona.community/blog/2022/13/2-operators-get-credentials.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Finally, use your &lt;a href="https://cloud.google.com/iam" target="_blank" rel="noopener noreferrer"&gt;Cloud Identity and Access Management (Cloud IAM)&lt;/a&gt; to control access to the cluster. The following command will give you the ability to create Roles and RoleBindings:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user &lt;span class="k"&gt;$(&lt;/span&gt;gcloud config get-value core/account&lt;span class="k"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/3-kubectl-create-cluisterrolebinding_hu_f39b8053e9ddaeca.png 480w, https://percona.community/blog/2022/13/3-kubectl-create-cluisterrolebinding_hu_f39683c329e3ef82.png 768w, https://percona.community/blog/2022/13/3-kubectl-create-cluisterrolebinding_hu_eee4cd83e1a80d7.png 1400w"
src="https://percona.community/blog/2022/13/3-kubectl-create-cluisterrolebinding.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="install-the-operator-and-deploy-your-mongodb-cluster"&gt;Install the Operator and deploy your MongoDB cluster&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Create a new namespace called &lt;strong&gt;percona-demo-namespace&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl create namespace percona-demo-namespace&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/13/4-kubectl-create-namespace.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set the context for the namespace&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl config set-context &lt;span class="k"&gt;$(&lt;/span&gt;kubectl config current-context&lt;span class="k"&gt;)&lt;/span&gt; --namespace&lt;span class="o"&gt;=&lt;/span&gt;percona-demo-namespace&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/5-kubectl-config-set-contex_hu_1080c21e94cf84db.png 480w, https://percona.community/blog/2022/13/5-kubectl-config-set-contex_hu_4ea4341c3e043911.png 768w, https://percona.community/blog/2022/13/5-kubectl-config-set-contex_hu_deef002f9d32086f.png 1400w"
src="https://percona.community/blog/2022/13/5-kubectl-config-set-contex.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deploy the Operator&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.13.0/deploy/bundle.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/6-kubectl-apply-f-bundle_hu_b7cfd39a938c998.png 480w, https://percona.community/blog/2022/13/6-kubectl-apply-f-bundle_hu_51373f5156f5b8fb.png 768w, https://percona.community/blog/2022/13/6-kubectl-apply-f-bundle_hu_80edbe4bcf31daa8.png 1400w"
src="https://percona.community/blog/2022/13/6-kubectl-apply-f-bundle.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Operator has started, and you can now deploy your MongoDB cluster:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.13.0/deploy/cr.yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/7-kubectl-apply-f-cr_hu_e15f0b4f5f83c92b.png 480w, https://percona.community/blog/2022/13/7-kubectl-apply-f-cr_hu_beee7a6c3f1bcc2f.png 768w, https://percona.community/blog/2022/13/7-kubectl-apply-f-cr_hu_b93531c29e27188.png 1400w"
src="https://percona.community/blog/2022/13/7-kubectl-apply-f-cr.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When the process is over, your cluster will reach the ready status. Check it with:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; kubectl get psmdb.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/8-kubectl-get-psmdb_hu_ab0b66e14d3e0dce.png 480w, https://percona.community/blog/2022/13/8-kubectl-get-psmdb_hu_3b90fcee2f23f22e.png 768w, https://percona.community/blog/2022/13/8-kubectl-get-psmdb_hu_d1b125d7248e7401.png 1400w"
src="https://percona.community/blog/2022/13/8-kubectl-get-psmdb.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; “psmdb” stands for &lt;a href="https://www.percona.com/software/mongodb/percona-server-for-mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MongoDB&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="verifying-the-cluster-operation"&gt;Verifying the cluster operation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You will need the login and password of the admin user to access the cluster. Use the kubectl get secrets command to see the list of Secrets objects:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get secret my-cluster-name-secrets -o yaml&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/13/9-kubectl-get-secret.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Decode &lt;strong&gt;MONGODB_DATABASE_ADMIN_USER&lt;/strong&gt; and &lt;strong&gt;MONGODB_DATABASE_ADMIN_PASSWORD&lt;/strong&gt; back to a human-readable form&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/10-decode_hu_da6ebad61ea4ad21.png 480w, https://percona.community/blog/2022/13/10-decode_hu_29ea244e25562ac0.png 768w, https://percona.community/blog/2022/13/10-decode_hu_769410ebe71992c5.png 1400w"
src="https://percona.community/blog/2022/13/10-decode.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
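&lt;p&gt;A minimal sketch of that decoding step, assuming GNU coreutils; the encoded strings here are illustrative placeholders, not real credentials:&lt;/p&gt;

```shell
# Kubernetes stores Secret values base64-encoded; decode them to recover
# the human-readable user and password (placeholder values shown).
ADMIN_USER_B64="ZGF0YWJhc2VBZG1pbg=="   # encodes "databaseAdmin"
ADMIN_PASS_B64="c2VjcmV0UGFzc3dvcmQ="   # encodes "secretPassword"

echo "$ADMIN_USER_B64" | base64 --decode   # databaseAdmin
echo "$ADMIN_PASS_B64" | base64 --decode   # secretPassword
```

&lt;p&gt;Against a live cluster, the same idea in one line (secret and field names are the ones from this demo): &lt;code&gt;kubectl get secret my-cluster-name-secrets -o jsonpath='{.data.MONGODB_DATABASE_ADMIN_PASSWORD}' | base64 --decode&lt;/code&gt;&lt;/p&gt;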
&lt;ul&gt;
&lt;li&gt;Check the details of the Services before testing the connection to the cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/11-get-services_hu_e92af432fa97bcfe.png 480w, https://percona.community/blog/2022/13/11-get-services_hu_42b31a5d2454eb28.png 768w, https://percona.community/blog/2022/13/11-get-services_hu_6c04eb35762d6405.png 1400w"
src="https://percona.community/blog/2022/13/11-get-services.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run a Docker container with a MongoDB client and connect its console output to your terminal. The following command will do this, naming the new Pod percona-client:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl run -i --rm --tty percona-client --image&lt;span class="o"&gt;=&lt;/span&gt;percona/percona-server-mongodb:4.4.16-16 --restart&lt;span class="o"&gt;=&lt;/span&gt;Never -- bash -il&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/12-run-docker-container_hu_356e62773d644d48.png 480w, https://percona.community/blog/2022/13/12-run-docker-container_hu_bff79e73a0bb2abe.png 768w, https://percona.community/blog/2022/13/12-run-docker-container_hu_4a6102786c9b3333.png 1400w"
src="https://percona.community/blog/2022/13/12-run-docker-container.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Now run the mongo tool in the percona-client command shell, using the login (which is normally clusterAdmin) and the decoded password:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo &lt;span class="s2"&gt;"mongodb://clusterAdmin:Dgqjc1HElUvvGnH9@my-cluster-name-mongos.percona-demo-namespace.svc.cluster.local/admin?ssl=false"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/13/13-mongo_hu_63f0d96c6ef06538.png 480w, https://percona.community/blog/2022/13/13-mongo_hu_7d8ef4e37f86ba6e.png 768w, https://percona.community/blog/2022/13/13-mongo_hu_758cba1cf4d7227c.png 1400w"
src="https://percona.community/blog/2022/13/13-mongo.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
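&lt;p&gt;As a small sketch, the same connection URI can be assembled from shell variables instead of pasting the password inline (the values are the demo ones shown above):&lt;/p&gt;

```shell
# Build the mongos connection URI from variables rather than hard-coding
# the credentials inside the command (demo values shown).
ADMIN_USER="clusterAdmin"
ADMIN_PASS="Dgqjc1HElUvvGnH9"
MONGOS_HOST="my-cluster-name-mongos.percona-demo-namespace.svc.cluster.local"

URI="mongodb://${ADMIN_USER}:${ADMIN_PASS}@${MONGOS_HOST}/admin?ssl=false"
echo "$URI"
```

&lt;p&gt;You can then connect with &lt;code&gt;mongo "$URI"&lt;/code&gt; from the percona-client Pod.&lt;/p&gt;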
&lt;p&gt;&lt;strong&gt;Voilà!&lt;/strong&gt; We have deployed MongoDB in Kubernetes using an Operator. It works! &lt;strong&gt;:)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now that you have the MongoDB cluster, you have full control to configure and manage the MongoDB deployment from a single Kubernetes control plane, which means you can manage MongoDB instances the same way you manage default objects in Kubernetes like Deployments, Pods, or Services. For advanced configuration topics, see our guide &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/users.html" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="conclusion"&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Kubernetes Operators extend the Kubernetes API to automate processes that cannot be achieved natively with Kubernetes. This is the case for stateful applications like MongoDB.
Percona develops the &lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt;, which contains everything you need to quickly and consistently deploy and scale Percona Server for MongoDB instances into a Kubernetes cluster on-premises or in the cloud. You can try it on different cloud providers and follow the &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/#advanced-installation-guides" target="_blank" rel="noopener noreferrer"&gt;tutorials&lt;/a&gt; for more advanced configurations.&lt;/p&gt;
&lt;p&gt;You can find &lt;strong&gt;Percona Operator for MongoDB&lt;/strong&gt; in Hacktoberfest! If you’re looking to improve your Kubernetes skills, this is a &lt;a href="https://www.percona.com/blog/contribute-to-open-source-with-percona-and-hacktoberfest/" target="_blank" rel="noopener noreferrer"&gt;great project to start contributing&lt;/a&gt; to.&lt;/p&gt;
&lt;p&gt;We also have a &lt;strong&gt;&lt;a href="https://github.com/percona/roadmap/projects/1" target="_blank" rel="noopener noreferrer"&gt;public roadmap&lt;/a&gt;&lt;/strong&gt; of Percona Kubernetes Operators. If you have any feedback or want to draw our attention to a particular feature, feel free to be part of it and vote for issues! :)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=HZ9yaS-ZS48&amp;t=2809s" target="_blank" rel="noopener noreferrer"&gt;Installation of MongoDB via Kubernetes Operator by Sergey Pronin - MongoDB Kubernetes operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mongodb/gke.html" target="_blank" rel="noopener noreferrer"&gt;Install Percona Server for MongoDB on Google Kubernetes Engine (GKE)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;Percona Server Mongodb Operator GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/#:~:text=K8s%20Operators%20are%20controllers%20for,Custom%20Resource%20Definitions%20%28CRD%29." target="_blank" rel="noopener noreferrer"&gt;Kubernetes Operators: what are they? Some examples CNCF.IO&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=i9V4oCa5f9I" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Operators Explained by Sai Vennam&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://iximiuz.com/en/series/working-with-kubernetes-api/" target="_blank" rel="noopener noreferrer"&gt;Working with Kubernetes API Ivan Velichko&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>kubernetes</category>
      <category>operators</category>
      <category>databases</category>
      <category>mongodb</category>
      <category>docker</category>
      <media:thumbnail url="https://percona.community/blog/2022/13/with-operators_hu_bdd003d62e36fb66.jpg"/>
      <media:content url="https://percona.community/blog/2022/13/with-operators_hu_59b4693fcad8842d.jpg" medium="image"/>
    </item>
    <item>
      <title>Recap Monthly Percona Developer Meetup Hacktoberfest</title>
      <link>https://percona.community/blog/2022/10/05/recap-monthly-percona-developer-meetup-hacktoberfest/</link>
      <guid>https://percona.community/blog/2022/10/05/recap-monthly-percona-developer-meetup-hacktoberfest/</guid>
      <pubDate>Wed, 05 Oct 2022 00:00:00 UTC</pubDate>
      <description>The Monthly Percona Developer Meetup is an opportunity to get a behind-the-scenes view of different projects in Percona and directly interact with the experts to exchange ideas, ask questions, etc.</description>
      <content:encoded>&lt;p&gt;The &lt;a href="https://percona.community/blog/2022/09/26/monthly-percona-developer-meetup/" target="_blank" rel="noopener noreferrer"&gt;Monthly Percona Developer Meetup&lt;/a&gt; is an opportunity to get a behind-the-scenes view of different projects in Percona and directly interact with the experts to exchange ideas, ask questions, etc.&lt;/p&gt;
&lt;p&gt;From now on, Percona Developer Meetups will take place monthly, with open discussions and live (online) communication. You can join our Developer Meetup via Restream, the &lt;a href="https://www.youtube.com/channel/UCLJ0Ok4HeUBrRYF4irturVA" target="_blank" rel="noopener noreferrer"&gt;Percona YouTube&lt;/a&gt; channel, and &lt;a href="https://www.linkedin.com/company/percona/" target="_blank" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; events.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/10/recap-mpdm-hacktoberfest-intro.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The topic of the first Monthly Percona Developer Meetup was &lt;a href="https://hacktoberfest.com/" target="_blank" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is Hacktoberfest?&lt;/strong&gt; Hacktoberfest is an annual event hosted by DigitalOcean that encourages people to contribute to Open Source throughout October. It is inclusive: everyone can join to work on Open Source projects, and you can participate by choosing your favorite project.&lt;/p&gt;
&lt;p&gt;You need two essential things to participate. First, register on the Hacktoberfest website anytime between September 26 and October 31, 2022. Second, look through the participating projects, choose your favorites, and consult the documentation that guides you through making your first contribution.&lt;/p&gt;
&lt;p&gt;Forty thousand participants who complete Hacktoberfest, with at least four pull/merge requests accepted between October 1 and October 31, can choose between having a tree planted in their name and receiving the Hacktoberfest 2022 T-shirt.&lt;/p&gt;
&lt;p&gt;Let’s ask the Percona experts!&lt;/p&gt;
&lt;h2 id="which-percona-projects-joined-hacktoberfest"&gt;Which Percona projects joined Hacktoberfest?&lt;/h2&gt;
&lt;p&gt;All the &lt;a href="https://github.com/search?q=org%3Apercona+hacktoberfest" target="_blank" rel="noopener noreferrer"&gt;Percona GitHub projects&lt;/a&gt; with the label “hacktoberfest” are ready for contributions.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/10/recap-mpdm-hacktoberfest-youtube.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="how-do-i-find-issues"&gt;How do I find issues?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;On &lt;strong&gt;GitHub&lt;/strong&gt;, the issues are tagged with the &lt;strong&gt;good-first-issue&lt;/strong&gt; tag.&lt;/li&gt;
&lt;li&gt;On &lt;strong&gt;Jira&lt;/strong&gt;, they are tagged with &lt;strong&gt;newbie&lt;/strong&gt;, &lt;strong&gt;hacktoberfest&lt;/strong&gt;, and &lt;strong&gt;onboarding&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;You can also pick any other issue you like or create new ones.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="what-types-of-contributions-count"&gt;What types of contributions count?&lt;/h2&gt;
&lt;p&gt;You can contribute in several ways: coding, documentation, testing, design, discussions, content creation (blog posts, videos), etc.; they all count.&lt;/p&gt;
&lt;h2 id="how-can-you-reach-the-percona-team"&gt;How can you reach the Percona team?&lt;/h2&gt;
&lt;p&gt;The best way to do it is directly in the Percona projects, or you can join our &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Community Forum&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/10/recap-mdpdm-hacktoberfest-repositories_hu_a449d5fff5c26b51.png 480w, https://percona.community/blog/2022/10/recap-mdpdm-hacktoberfest-repositories_hu_a4e8725f8f3da19.png 768w, https://percona.community/blog/2022/10/recap-mdpdm-hacktoberfest-repositories_hu_9cce59d5833879ab.png 1400w"
src="https://percona.community/blog/2022/10/recap-mdpdm-hacktoberfest-repositories.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Look at some of the Percona projects added to Hacktoberfest this year.&lt;/p&gt;
&lt;p&gt;Let’s start with &lt;a href="https://github.com/percona/percona-docker" target="_blank" rel="noopener noreferrer"&gt;percona/percona-docker&lt;/a&gt;, supported by &lt;strong&gt;Evgeniy Patlan&lt;/strong&gt;, Manager, Build &amp; Release Engineering. There are images for basic scenarios; the idea is to create more images, improve the existing ones, or extend them to Docker Compose. Contributions that improve the Docker setup are also welcome. You can find issues in Jira or GitHub, where you can also participate in discussions about Percona Docker images.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/10/recap-percona-docker.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Our next project is &lt;a href="https://github.com/percona/pmm" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt;, supported by &lt;strong&gt;Artem Gavrilov&lt;/strong&gt;, Backend Software Engineer, and &lt;strong&gt;Nurlan Moldomurov&lt;/strong&gt;, Full-Stack Engineer. PMM is a great project to contribute to during Hacktoberfest. There are minor, easy-to-do issues on &lt;a href="https://github.com/percona/pmm/issues" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;; it is not necessary to register them in Jira. If you have a good idea, want to improve something, or want to simplify a process, send your PR; the maintainers will review it as soon as possible. There are also advanced issues if you want more challenging tasks.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/10/recap-pmm.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For our third project, we have &lt;a href="https://github.com/percona/mongodb_exporter" target="_blank" rel="noopener noreferrer"&gt;percona/mongodb_exporter&lt;/a&gt;. &lt;strong&gt;Carlos Salguero&lt;/strong&gt; is the maintainer for this project. He says it is an easy project to get started with, and there is no complicated logic behind it:
it runs MongoDB internal commands to get statistics such as diagnostic data or replica set status, and parses the resulting JSON to produce metrics. A complete Makefile lets you start sandbox instances to test almost everything, so you don’t need a virtual machine or separate MongoDB instances. The issues are on &lt;strong&gt;GitHub&lt;/strong&gt; and in &lt;strong&gt;Jira&lt;/strong&gt;. The primary programming language is Go.&lt;/p&gt;
&lt;p&gt;Next up is &lt;a href="https://github.com/percona/percona-server-mongodb-operator" target="_blank" rel="noopener noreferrer"&gt;percona/percona-server-mongodb-operator&lt;/a&gt;, maintained by &lt;strong&gt;Denys Kondratenko&lt;/strong&gt;. It is an excellent opportunity to learn about Kubernetes, how to extend it, and how to run stateful databases inside Kubernetes. Most work is tracked in GitHub and Jira.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/10/recap-operator.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Our next project is &lt;a href="https://github.com/percona/pg_stat_monitor" target="_blank" rel="noopener noreferrer"&gt;percona/pg_stat_monitor&lt;/a&gt;, a Query Performance Monitoring tool for PostgreSQL. The maintainer is &lt;strong&gt;Ibrar Ahmed&lt;/strong&gt;, Sr. Software Engineer (PostgreSQL). The primary area to contribute to is the releases. Contributions with ideas for improving the information &lt;strong&gt;pg_stat_monitor&lt;/strong&gt; provides are welcome. The issues are defined in Jira.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In summary&lt;/strong&gt;, those are some of the Percona projects participating in Hacktoberfest.
October is open source month, so join the Hacktoberfest party!&lt;/p&gt;
&lt;p&gt;Remember that you can interact with the maintainers of each project through &lt;a href="https://github.com/search?q=org%3Apercona+hacktoberfest" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;/&lt;a href="https://perconadev.atlassian.net/browse/DISTMYSQL-228?filter=-4" target="_blank" rel="noopener noreferrer"&gt;Jira&lt;/a&gt; or our &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Community Forum&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you haven’t sent a PR, this is the time to do it. Write us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; if you have any questions.&lt;/p&gt;
&lt;p&gt;Happy Hacktoberfest with Percona!&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>hacktoberfest</category>
      <category>percona</category>
      <category>databases</category>
      <category>Meetup</category>
      <media:thumbnail url="https://percona.community/blog/2022/10/recap-mpdm-hacktoberfest-intro_hu_b03d7c312b7e80f8.jpg"/>
      <media:content url="https://percona.community/blog/2022/10/recap-mpdm-hacktoberfest-intro_hu_441ea955777e895b.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.31 preview release</title>
      <link>https://percona.community/blog/2022/09/19/preview-release/</link>
      <guid>https://percona.community/blog/2022/09/19/preview-release/</guid>
      <pubDate>Mon, 19 Sep 2022 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.31 preview release Hello folks! Percona Monitoring and Management (PMM) 2.31 is now available as a preview release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-231-preview-release"&gt;Percona Monitoring and Management 2.31 preview release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.31 is now available as a preview release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM preview release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release notes can be found &lt;a href="https://pmm-v2-31-0-pr-868.onrender.com/release-notes/2.31.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="known-issue"&gt;Known issue&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-10735" target="_blank" rel="noopener noreferrer"&gt;PMM-10735&lt;/a&gt;: OVF stopped working in a few minutes.&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker"&gt;Percona Monitoring and Management server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.31.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; To use the DBaaS functionality during the Percona Monitoring and Management preview release, you should add the following environment variable when starting the PMM server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.31.0-rc&lt;/code&gt;&lt;/p&gt;
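&lt;p&gt;Putting the tag and the environment variable together, a minimal sketch of starting the preview server might look like this (the port mapping and container name follow the standard PMM Docker instructions linked above; adjust them for your setup):&lt;/p&gt;

```shell
# Pull the preview image and start the PMM server with DBaaS enabled.
# The -e flag injects the preview pmm-client image used by DBaaS.
docker pull perconalab/pmm-server:2.31.0-rc
docker run -d -p 443:443 \
  -e PERCONA_TEST_DBAAS_PMM_CLIENT=perconalab/pmm-client:2.31.0-rc \
  --name pmm-server \
  perconalab/pmm-server:2.31.0-rc
```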
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client release candidate tarball for 2.31 via this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-4348.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable percona testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
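&lt;p&gt;As a sketch, on a Red Hat-based system the two steps might look like this (use apt on Debian/Ubuntu; the exact package-manager invocation depends on your OS):&lt;/p&gt;

```shell
# Enable the Percona testing repository, then install the client.
percona-release enable percona testing
yum install pmm2-client
```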
&lt;hr&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Creating a Kubernetes cluster on Amazon EKS with eksctl</title>
      <link>https://percona.community/blog/2022/09/13/creating-a-kubernetes-cluster-on-amazon-eks-with-eksctl/</link>
      <guid>https://percona.community/blog/2022/09/13/creating-a-kubernetes-cluster-on-amazon-eks-with-eksctl/</guid>
      <pubDate>Tue, 13 Sep 2022 00:00:00 UTC</pubDate>
      <description>Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS and on-premises. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Amazon EKS is certified Kubernetes-conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://aws.amazon.com/eks/" target="_blank" rel="noopener noreferrer"&gt;Amazon Elastic Kubernetes Service&lt;/a&gt; (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS and on-premises. &lt;a href="https://kubernetes.io/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is an open-source system for automating deployment, scaling, and management of containerized applications. Amazon EKS is certified Kubernetes-conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS.&lt;/p&gt;
&lt;p&gt;The getting started guides available in the AWS documentation explain two different procedures for creating an EKS cluster: one uses eksctl, a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS, and the other uses the AWS Management Console and AWS CLI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html" target="_blank" rel="noopener noreferrer"&gt;Getting started with Amazon EKS - eksctl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html" target="_blank" rel="noopener noreferrer"&gt;Getting started with Amazon EKS – AWS Management Console and AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this article, you will learn how to use eksctl to create a Kubernetes cluster on Amazon EKS.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://eksctl.io" target="_blank" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt; is a simple CLI tool for creating and managing clusters on EKS - Amazon’s managed Kubernetes service for EC2. It is written in Go, uses CloudFormation, and was created by &lt;a href="https://www.weave.works/" target="_blank" rel="noopener noreferrer"&gt;Weaveworks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To use eksctl, you must:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://kubernetes.io/docs/reference/kubectl/" target="_blank" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" target="_blank" rel="noopener noreferrer"&gt;AWS IAM Authenticator for Kubernetes&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://aws.amazon.com/cli/" target="_blank" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a user with &lt;a href="https://eksctl.io/usage/minimum-iam-policies/" target="_blank" rel="noopener noreferrer"&gt;minimal IAM policies&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After running eksctl, you will get a cluster with the default configuration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An exciting auto-generated name&lt;/li&gt;
&lt;li&gt;Two m5.large worker nodes&lt;/li&gt;
&lt;li&gt;The official AWS EKS AMI&lt;/li&gt;
&lt;li&gt;The default us-west-2 region&lt;/li&gt;
&lt;li&gt;A dedicated VPC&lt;/li&gt;
&lt;/ul&gt;
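&lt;p&gt;With the prerequisites in place, creating a cluster with those defaults is a single command. A minimal sketch, assuming the eksctl AWS profile configured later in this article:&lt;/p&gt;

```shell
# Create an EKS cluster with eksctl's defaults, using the eksctl IAM profile.
AWS_PROFILE=eksctl eksctl create cluster
```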
&lt;h2 id="creating-iam-user"&gt;Creating IAM user&lt;/h2&gt;
&lt;p&gt;Go to &lt;a href="https://console.aws.amazon.com/iamv2" target="_blank" rel="noopener noreferrer"&gt;console.aws.amazon.com/iamv2&lt;/a&gt;, create a user group named EKS, and attach the policies described in the &lt;a href="https://eksctl.io/usage/minimum-iam-policies/" target="_blank" rel="noopener noreferrer"&gt;minimal IAM policies&lt;/a&gt; section of the eksctl documentation.&lt;/p&gt;
&lt;p&gt;These policies already exist, and you must attach them as they are.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AmazonEC2FullAccess (AWS managed)&lt;/li&gt;
&lt;li&gt;AWSCloudFormationFullAccess (AWS managed)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In addition to the previous policies, you must create the following:&lt;/p&gt;
&lt;details&gt;
&lt;summary&gt;&lt;b&gt;EksAllAccess&lt;/b&gt; (click to expand)&lt;/summary&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Version": "2012-10-17",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Statement": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": "eks:*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": "*"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "ssm:GetParameter",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "ssm:GetParameters"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:ssm:*:&lt;account_id&gt;:parameter/aws/*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:ssm:*::parameter/aws/*"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "kms:CreateGrant",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "kms:DescribeKey"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": "*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "logs:PutRetentionPolicy"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": "*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/details&gt;
&lt;details&gt;
&lt;summary&gt;&lt;b&gt;IAMLimitedAccess&lt;/b&gt; (click to expand)&lt;/summary&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Version": "2012-10-17",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Statement": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:CreateInstanceProfile",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:DeleteInstanceProfile",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:GetInstanceProfile",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:RemoveRoleFromInstanceProfile",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:GetRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:CreateRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:DeleteRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:AttachRolePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:PutRolePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:ListInstanceProfiles",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:AddRoleToInstanceProfile",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:ListInstanceProfilesForRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:PassRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:DetachRolePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:DeleteRolePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:GetRolePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:GetOpenIDConnectProvider",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:CreateOpenIDConnectProvider",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:DeleteOpenIDConnectProvider",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:TagOpenIDConnectProvider",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:ListAttachedRolePolicies",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:TagRole",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:GetPolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:CreatePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:DeletePolicy",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:ListPolicyVersions"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:instance-profile/eksctl-*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:role/eksctl-*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:policy/eksctl-*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:oidc-provider/*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:role/eksctl-managed-*"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:GetRole"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "arn:aws:iam::&lt;account_id&gt;:role/*"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Effect": "Allow",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Action": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:CreateServiceLinkedRole"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Resource": "*",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "Condition": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "StringEquals": {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "iam:AWSServiceName": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "eks.amazonaws.com",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "eks-nodegroup.amazonaws.com",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "eks-fargate.amazonaws.com"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/details&gt;
&lt;p&gt;Replace &lt;code&gt;&lt;account_id&gt;&lt;/code&gt; in both policies with your AWS account ID, which you can find in the upper right corner of the navigation bar. For other ways of getting your account ID, see &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html" target="_blank" rel="noopener noreferrer"&gt;Your AWS account ID and its alias&lt;/a&gt; in the docs.&lt;/p&gt;
&lt;p&gt;Add a new user named eksctl to the previously created group.&lt;/p&gt;
&lt;p&gt;Don’t forget to download or copy your credentials (Access Key ID and Secret Access Key), as you will need them to set up authentication.&lt;/p&gt;
&lt;h2 id="installing-aws-cli"&gt;Installing AWS CLI&lt;/h2&gt;
&lt;p&gt;On Linux, download the installer:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Unzip the installer:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ unzip awscliv2.zip&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And run the installer:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo ./aws/install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For instructions on how to install AWS CLI on other operating systems, go to &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" target="_blank" rel="noopener noreferrer"&gt;Installing or updating the latest version of the AWS CLI&lt;/a&gt; in the documentation.&lt;/p&gt;
&lt;p&gt;After installing the AWS CLI, run the following command to set up authentication locally:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ aws configure --profile eksctl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It will ask you for your AWS credentials and default region.&lt;/p&gt;
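&lt;p&gt;A typical interactive session looks like this (the values shown are placeholders for your own credentials and region):&lt;/p&gt;

```shell
$ aws configure --profile eksctl
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-west-2
Default output format [None]: json
```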
&lt;h2 id="installing-aws-iam-authenticator"&gt;Installing AWS IAM Authenticator&lt;/h2&gt;
&lt;p&gt;On Linux, run the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl -o aws-iam-authenticator https://s3.us-west-2.amazonaws.com/amazon-eks/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Apply execute permissions to the binary:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ chmod +x ./aws-iam-authenticator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create a folder in your &lt;code&gt;$HOME&lt;/code&gt; directory and add it to the &lt;code&gt;$PATH&lt;/code&gt; variable:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mkdir -p $HOME/bin &amp;&amp; cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator &amp;&amp; export PATH=$PATH:$HOME/bin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add &lt;code&gt;$HOME/bin&lt;/code&gt; to your &lt;code&gt;.bashrc&lt;/code&gt; so the change persists across sessions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ echo 'export PATH=$PATH:$HOME/bin' &gt;&gt; ~/.bashrc&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
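If you want to confirm that lookup through `$HOME/bin` works before the real binary is in place, you can try it with a throwaway stub (the `stub-tool` name is made up for illustration):

```shell
# Drop a stub executable into $HOME/bin and confirm the shell finds it
# once the directory is on PATH. "stub-tool" is a throwaway name.
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho stub-ok\n' > "$HOME/bin/stub-tool"
chmod +x "$HOME/bin/stub-tool"
export PATH="$PATH:$HOME/bin"
stub-tool   # prints: stub-ok
```

The same lookup path is what lets `aws-iam-authenticator` (and later `kubectl`) resolve once copied into `$HOME/bin`.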
&lt;p&gt;For Mac and Windows, check &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html" target="_blank" rel="noopener noreferrer"&gt;Installing aws-iam-authenticator&lt;/a&gt; in the documentation.&lt;/p&gt;
&lt;h2 id="installing-kubectl"&gt;Installing kubectl&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; From the documentation - You must use a &lt;code&gt;kubectl&lt;/code&gt; version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a &lt;code&gt;1.22&lt;/code&gt; &lt;code&gt;kubectl&lt;/code&gt; client works with Kubernetes &lt;code&gt;1.21&lt;/code&gt;, &lt;code&gt;1.22&lt;/code&gt;, and &lt;code&gt;1.23&lt;/code&gt; clusters.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;As of this writing, the latest version of Kubernetes used by eksctl is 1.21. Run the following command to install the corresponding version of kubectl:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.21.2/2021-07-05/bin/linux/amd64/kubectl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Apply execute permissions to the binary:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ chmod +x ./kubectl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Copy the binary to &lt;code&gt;$HOME/bin&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cp ./kubectl $HOME/bin/kubectl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you’re using another version of Kubernetes, check &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html" target="_blank" rel="noopener noreferrer"&gt;Installing or updating kubectl&lt;/a&gt; in the documentation, where you can also find instructions for other operating systems.&lt;/p&gt;
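The skew rule from the note above is easy to check mechanically. A minimal sketch, with minor versions hard-coded for illustration (in practice you would parse them from the output of `kubectl version`):

```shell
# Compare client and server minor versions; supported skew is at most
# one minor version in either direction. Values here are examples.
client_minor=22
server_minor=21
skew=$((client_minor - server_minor))
abs_skew=${skew#-}
if [ "$abs_skew" -le 1 ]; then
  echo "supported: client 1.${client_minor} with server 1.${server_minor}"
else
  echo "unsupported skew"
fi
```

With these example values the script prints the "supported" branch, matching the note's example of a 1.22 client against a 1.21 cluster.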
&lt;h2 id="installing-eksctl-and-creating-a-kubernetes-cluster"&gt;Installing eksctl and creating a Kubernetes cluster&lt;/h2&gt;
&lt;p&gt;Download the binary and copy it to &lt;code&gt;/usr/local/bin&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mv /tmp/eksctl /usr/local/bin&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;On Mac and Windows, you can install eksctl following the instructions in the GitHub &lt;a href="https://github.com/weaveworks/eksctl" target="_blank" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once installed, create a cluster with the default configuration, authenticating to AWS with the IAM user created previously.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ eksctl create cluster --profile eksctl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; From the documentation - That command will create an EKS cluster in your default region (as specified by your AWS CLI configuration) with one managed nodegroup containing two m5.large nodes.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;For a cluster with custom configuration, create a config file, named &lt;code&gt;cluster.yaml&lt;/code&gt;, with the following content:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;eksctl.io/v1alpha5&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ClusterConfig&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;basic-cluster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;eu-north-1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;nodeGroups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ng-1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;instanceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;m5.large&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;desiredCapacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumeSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;allow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# will use ~/.ssh/id_rsa.pub as the default ssh key&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ng-2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;instanceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;m5.xlarge&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;desiredCapacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumeSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;100&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;publicKeyPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;~/.ssh/ec2_id_rsa.pub&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Run eksctl to create the cluster as follows:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ eksctl create cluster -f cluster.yaml --profile eksctl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;While running, eksctl will create the cluster and all the necessary resources.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/9/eksctl_running_hu_c15eff3eca8ba295.png 480w, https://percona.community/blog/2022/9/eksctl_running_hu_8bb0aeea46f4047d.png 768w, https://percona.community/blog/2022/9/eksctl_running_hu_bf12dad224322e69.png 1400w"
src="https://percona.community/blog/2022/9/eksctl_running.png" alt="eksctl running" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It will take a few minutes to complete. After the command is executed, you can go to &lt;a href="https://us-east-1.console.aws.amazon.com/eks/home?region=us-east-1#/clusters" target="_blank" rel="noopener noreferrer"&gt;us-east-1.console.aws.amazon.com/eks/home?region=us-east-1#/clusters&lt;/a&gt; to see the cluster.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/9/eks_cluster_hu_ff679d01ae9bc6d3.png 480w, https://percona.community/blog/2022/9/eks_cluster_hu_4ab0e18f253db1d7.png 768w, https://percona.community/blog/2022/9/eks_cluster_hu_b29bc1f177b5650d.png 1400w"
src="https://percona.community/blog/2022/9/eks_cluster.png" alt="EKS Cluster" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Don’t forget to replace &lt;code&gt;us-east-1&lt;/code&gt; in the URL if your default region is different.&lt;/p&gt;
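The console URL follows a predictable pattern, so you can build it from your region in the shell (the region value is an example; substitute your own):

```shell
# Construct the EKS console URL for a given region. "eu-north-1" is an
# example; substitute your default region.
region=eu-north-1
echo "https://${region}.console.aws.amazon.com/eks/home?region=${region}#/clusters"
# prints: https://eu-north-1.console.aws.amazon.com/eks/home?region=eu-north-1#/clusters
```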
&lt;p&gt;Cluster credentials can be found in &lt;code&gt;~/.kube/config&lt;/code&gt;. Try &lt;code&gt;kubectl get nodes&lt;/code&gt; to verify that this file is valid, as suggested by eksctl.&lt;/p&gt;
&lt;p&gt;If, for any reason, you need to delete your cluster, just run:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ eksctl delete cluster --name=ferocious-painting-1660755039 --profile eksctl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace the &lt;code&gt;--name&lt;/code&gt; value with the name of your cluster.&lt;/p&gt;
&lt;p&gt;You’ve created your first Kubernetes cluster using eksctl. Check the documentation for more information on how to &lt;a href="https://eksctl.io/usage/creating-and-managing-clusters/" target="_blank" rel="noopener noreferrer"&gt;create and manage clusters&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Mario García</author>
      <category>Linux</category>
      <category>Kubernetes</category>
      <category>AWS</category>
      <category>Amazon EKS</category>
      <media:thumbnail url="https://percona.community/blog/2022/9/eksctl_running_hu_b6470d877e89d74e.jpg"/>
      <media:content url="https://percona.community/blog/2022/9/eksctl_running_hu_44229ef62aeab1fe.jpg" medium="image"/>
    </item>
    <item>
      <title>Running PMM with Docker on Ubuntu 20.04</title>
      <link>https://percona.community/blog/2022/08/05/installing-pmm-with-docker-on-ubuntu-20/</link>
      <guid>https://percona.community/blog/2022/08/05/installing-pmm-with-docker-on-ubuntu-20/</guid>
      <pubDate>Fri, 05 Aug 2022 00:00:00 UTC</pubDate>
      <description>I started at Percona a few weeks ago and was looking for a quick way to learn about PMM (Percona Monitoring and Management), which is one of my favorite technologies within Percona to monitor the health of our database infrastructure, explore new patterns in the database behavior, manage and improve the performance of our databases, all with customizable dashboards and real-time alerts using Grafana and VictoriaMetrics.</description>
      <content:encoded>&lt;p&gt;I started at Percona a few weeks ago and was looking for a quick way to learn about PMM (Percona Monitoring and Management), which is one of my favorite technologies within Percona to monitor the health of our database infrastructure, explore new patterns in the database behavior, manage and improve the performance of our databases, all with customizable dashboards and real-time alerts using &lt;a href="https://grafana.com/" target="_blank" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; and &lt;a href="https://victoriametrics.com/" target="_blank" rel="noopener noreferrer"&gt;VictoriaMetrics&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The best of all is that PMM is Open Source, you can check the &lt;a href="https://github.com/percona/pmm" target="_blank" rel="noopener noreferrer"&gt;PMM repository&lt;/a&gt; in case you want to contribute.&lt;/p&gt;
&lt;p&gt;There are many ways to install PMM. Here I will describe the steps to install PMM on Ubuntu 20.04, using Docker for PMM Server on an Amazon EC2 instance.&lt;/p&gt;
&lt;p&gt;This image summarizes our goal.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-overview_hu_130c66d5b3f9665a.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-overview_hu_7a58c512dd9cc0f9.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-overview_hu_82edbc1ac276728a.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-overview.png" alt="Overview" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;An Amazon EC2 instance with Ubuntu 20.04
&lt;ul&gt;
&lt;li&gt;This instance is configured with a Security Group with TCP port 443 open.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Docker
&lt;ul&gt;
&lt;li&gt;You can install Docker by following this &lt;a href="https://docs.docker.com/engine/install/ubuntu/" target="_blank" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Manage Docker as a non-root user: &lt;strong&gt;&lt;em&gt;sudo usermod -aG docker $USER&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;MySQL
&lt;ul&gt;
&lt;li&gt;I am using Percona Server for MySQL from &lt;a href="https://docs.percona.com/percona-server/8.0/installation/apt_repo.html" target="_blank" rel="noopener noreferrer"&gt;Percona apt repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="installing-pmm-server-with-docker"&gt;Installing PMM Server with Docker&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Download PMM server Docker image&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker pull percona/pmm-server:2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Create the data volume container&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker create --volume /srv --name pmm-data percona/pmm-server:2 /bin/true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Run PMM server container&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run --detach --restart always --publish 443:443 --volumes-from pmm-data --name pmm-server percona/pmm-server:2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="4"&gt;
&lt;li&gt;Verify the creation of the container.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker ps&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-docker-ps_hu_778334322893c109.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-docker-ps_hu_ea22912ba5ddf4b4.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-docker-ps_hu_73f8a686032ac833.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-docker-ps.png" alt="docker ps" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Open a web browser and enter the address of the &lt;strong&gt;PMM Server&lt;/strong&gt; host: https://&lt;PUBLIC_IP&gt;:443/, for example, https://172.31.53.46. If you are running on your local machine, use https://localhost:443/.
Woohoo! We have a PMM Server running and we can see our dashboard!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-dashboard_hu_f3e2a7e99a4fad50.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-dashboard_hu_5a53ccebcba11bba.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-dashboard_hu_db72bae5247480bf.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-dashboard.png" alt="pmm-ubuntu-pmm-dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Some browsers may not trust the self-signed SSL certificate when you first open the URL. If this is the case, Chrome users may want to type &lt;strong&gt;thisisunsafe&lt;/strong&gt; to bypass the warning.&lt;/p&gt;
&lt;p&gt;The default user and password are both &lt;strong&gt;“admin”&lt;/strong&gt;. You will be asked to change the password after logging in for the first time; for this demo I will use &lt;strong&gt;admin2020&lt;/strong&gt; as the password. We will use these credentials to register the node in PMM Server later.&lt;/p&gt;
&lt;p&gt;So far we only have PMM Server. To monitor a database, we need a PMM client.&lt;/p&gt;
&lt;h2 id="installing-pmm-client"&gt;Installing PMM client&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;PMM Client&lt;/strong&gt; is a collection of agents and exporters that run on the host being monitored. Let’s install it using the repository package.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Download Percona Repo Package&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://repo.percona.com/apt/percona-release_latest.&lt;span class="k"&gt;$(&lt;/span&gt;lsb_release -sc&lt;span class="k"&gt;)&lt;/span&gt;_all.deb&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Install Percona Repo Package&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt install ./percona-release_latest.&lt;span class="k"&gt;$(&lt;/span&gt;lsb_release -sc&lt;span class="k"&gt;)&lt;/span&gt;_all.deb&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Update apt cache&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt update&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="4"&gt;
&lt;li&gt;Install Percona Monitoring and Management Client&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt install pmm2-client&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="5"&gt;
&lt;li&gt;Check the installation. We will use pmm-admin in the next steps.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo pmm-admin -v&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-v.png" alt="pmm-ubuntu-pmm-dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="creating-a-user-for-monitoring"&gt;Creating a user for monitoring&lt;/h2&gt;
&lt;p&gt;Let’s create a user in MySQL.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log in to MySQL using the command line: &lt;strong&gt;&lt;em&gt;mysql -uroot -p&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a “pmm” user with “welcOme1!” as the password&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE USER &lt;span class="s1"&gt;'pmm'&lt;/span&gt;@&lt;span class="s1"&gt;'localhost'&lt;/span&gt; IDENTIFIED BY &lt;span class="s1"&gt;'welcOme1!'&lt;/span&gt; WITH MAX_USER_CONNECTIONS 10&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Grant the “pmm” user the specific permissions needed to monitor the database&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO &lt;span class="s1"&gt;'pmm'&lt;/span&gt;@&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To check that the user was created with the correct permissions, run:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; show grants &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s1"&gt;'pmm'&lt;/span&gt;@&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-show-grants_hu_eaaf118b3b5e1001.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-show-grants_hu_6e04aa1898999067.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-show-grants_hu_e2b21b7188491d1b.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-show-grants.png" alt="pmm-ubuntu-show-grants" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="connect-client-to-server"&gt;Connect Client to Server&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Register the Percona Monitoring and Management client with the server, using your PMM admin username and password (the default is admin/admin; adjust the credentials in the URL if you have changed them).&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo pmm-admin config --server-insecure-tls --server-url&lt;span class="o"&gt;=&lt;/span&gt;https://admin:admin2020@172.17.0.1:443&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I am using &lt;strong&gt;172.17.0.1&lt;/strong&gt; because this is the private IP where the PMM Server is running. You can get this IP by entering the Docker container and running &lt;strong&gt;hostname -I&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/8/pmm-ubuntu-hostname-i.png" alt="pmm-ubuntu-hostname-i" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After registering your client with the server you will see this information:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-config_hu_2c92046682bc7a77.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-config_hu_671dd038024060eb.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-config_hu_a94187a076dfb2be.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-config.png" alt="pmm-ubuntu-pmm-admin-config" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Check if the node was registered&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-admin inventory list nodes&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A new node named &lt;strong&gt;pmm-server&lt;/strong&gt; should appear in the list.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-inventory_hu_58d197706bc000c5.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-inventory_hu_beb71ec4c1d61705.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-inventory_hu_5a81dc5f4682635d.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-inventory.png" alt="pmm-ubuntu-pmm-admin-inventory" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="adding-a-mysql-database-to-monitoring"&gt;Adding a MySQL Database to monitoring&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Use pmm-admin to register the database with the MySQL user we created earlier&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo pmm-admin add mysql --username&lt;span class="o"&gt;=&lt;/span&gt;pmm --password&lt;span class="o"&gt;=&lt;/span&gt;welcOme1! --query-source&lt;span class="o"&gt;=&lt;/span&gt;perfschema&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-add-sql_hu_aa3fe358f469ad93.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-add-sql_hu_4888b439ca1fd46a.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-add-sql_hu_762478fafd9b7c38.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-pmm-admin-add-sql.png" alt="pmm-ubuntu-pmm-admin-add-sql" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="2"&gt;
&lt;li&gt;In the dashboard, we will see that our node and database are registered and ready to be monitored by PMM.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-ubuntu-last-dashboard_hu_7f077b03e0f2bfa4.png 480w, https://percona.community/blog/2022/8/pmm-ubuntu-last-dashboard_hu_bddc5b9da8193857.png 768w, https://percona.community/blog/2022/8/pmm-ubuntu-last-dashboard_hu_20a42dc5d29c7d3b.png 1400w"
src="https://percona.community/blog/2022/8/pmm-ubuntu-last-dashboard.png" alt="pmm-ubuntu-last-dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;That’s it! :) We learned how to monitor our databases for free with Percona Monitoring and Management (PMM). Additionally, you can go to the next level by registering your PMM instance with &lt;a href="https://docs.percona.com/percona-platform/" target="_blank" rel="noopener noreferrer"&gt;Percona Platform&lt;/a&gt; to get even more insights.&lt;/p&gt;
&lt;p&gt;I hope you’ve enjoyed this tutorial, and if you need help following it, feel free to contact the &lt;a href="https://percona.community/blog/2022/02/10/how-to-publish-blog-post/#assistance-and-support" target="_blank" rel="noopener noreferrer"&gt;Percona team support&lt;/a&gt;. We will be happy to help.&lt;/p&gt;</content:encoded>
      <author>Edith Puclla</author>
      <category>PMM</category>
      <category>DevOps</category>
      <category>MySQL</category>
      <category>Docker</category>
      <media:thumbnail url="https://percona.community/blog/2022/8/pmm-ubuntu-overview_hu_5ad125c14b3da007.jpg"/>
      <media:content url="https://percona.community/blog/2022/8/pmm-ubuntu-overview_hu_6bd40fe72fd83535.jpg" medium="image"/>
    </item>
    <item>
      <title>Setting up PMM for monitoring MySQL on a local environment</title>
      <link>https://percona.community/blog/2022/08/05/setting-up-pmm-for-monitoring-mysql-on-a-local-environment/</link>
      <guid>https://percona.community/blog/2022/08/05/setting-up-pmm-for-monitoring-mysql-on-a-local-environment/</guid>
      <pubDate>Fri, 05 Aug 2022 00:00:00 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/8/pmm-dashboard_hu_eeaae8257f6da4fe.png 480w, https://percona.community/blog/2022/8/pmm-dashboard_hu_9dc49d98a8d66708.png 768w, https://percona.community/blog/2022/8/pmm-dashboard_hu_ec34be7f15cd5e5e.png 1400w"
src="https://percona.community/blog/2022/8/pmm-dashboard.png" alt="PMM Dashboard" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Percona Monitoring and Management (&lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;PMM&lt;/a&gt;) is an open source database monitoring, observability, and management tool that can be used for monitoring the health of your database infrastructure, exploring new patterns in database behavior, and managing and improving the performance of your databases no matter where they are located or deployed.&lt;/p&gt;
&lt;p&gt;PMM is designed to work with MySQL (including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB), PostgreSQL (including Percona Distribution for PostgreSQL), MongoDB (including Percona Server for MongoDB), Amazon RDS, Amazon Aurora, Proxy SQL, and Percona XtraDB Cluster.&lt;/p&gt;
&lt;p&gt;Debian, Ubuntu, and Red Hat are supported (AlmaLinux, Oracle Linux, or Rocky Linux may also work). If you try installing on another distribution, you might get the following message when trying to activate the &lt;code&gt;ps80&lt;/code&gt;, &lt;code&gt;pdps-8.0&lt;/code&gt;, or &lt;code&gt;pdpxc-8.0&lt;/code&gt; repositories:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo percona-release setup ps80
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Specified repository is not supported &lt;span class="k"&gt;for&lt;/span&gt; current operating system!&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This means your OS is not supported yet.&lt;/p&gt;
&lt;p&gt;While PMM, both the server and the client, can be installed on most operating systems, if you want to set up MySQL on an OS that is not supported, you should consider configuring a virtual machine for Percona Server for MySQL.&lt;/p&gt;
&lt;p&gt;Check the documentation for more details about &lt;a href="https://docs.percona.com/percona-software-repositories/repository-location" target="_blank" rel="noopener noreferrer"&gt;repositories&lt;/a&gt; maintained by Percona and &lt;a href="https://www.percona.com/services/policies/percona-software-support-lifecycle" target="_blank" rel="noopener noreferrer"&gt;supported platforms&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can find system requirements for PMM in the &lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/1.x/faq.html#what-are-the-minimum-system-requirements-for-pmm" target="_blank" rel="noopener noreferrer"&gt;Frequently Asked Questions&lt;/a&gt;. PMM Server and PMM clients communicate through the ports specified in the &lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/1.x/glossary.terminology.html#ports" target="_blank" rel="noopener noreferrer"&gt;Terminology&lt;/a&gt; section.&lt;/p&gt;
&lt;p&gt;Note: Instructions for installing PMM and Percona Server for MySQL, described in the following sections, are for Debian, Ubuntu and derivatives. For Red Hat and derivatives, check the &lt;a href="https://www.percona.com/software/pmm/quickstart" target="_blank" rel="noopener noreferrer"&gt;Quickstart&lt;/a&gt; guide and &lt;a href="https://docs.percona.com/percona-server/latest/installation/yum_repo.html" target="_blank" rel="noopener noreferrer"&gt;Installing Percona Server for MySQL on Red Hat Enterprise Linux and CentOS&lt;/a&gt; from the documentation.&lt;/p&gt;
&lt;h2 id="configuring-a-virtual-machine-for-mysql"&gt;Configuring a virtual machine for MySQL&lt;/h2&gt;
&lt;p&gt;If you’re on Linux and using a distribution that is not supported, configure a virtual machine before installing Percona Server for MySQL; otherwise, continue with the “Installing and Configuring MySQL” section.&lt;/p&gt;
&lt;h3 id="multipass"&gt;Multipass&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://multipass.run/" target="_blank" rel="noopener noreferrer"&gt;Multipass&lt;/a&gt; is an open source tool to generate cloud-style Ubuntu VMs quickly on Linux, macOS, and Windows.&lt;/p&gt;
&lt;p&gt;It gives you a simple but powerful CLI that allows you to quickly access an Ubuntu command line or create your own local mini-cloud.
On Linux, Multipass must be installed through a snap package. If Snap is not installed on your system, check the documentation for instructions on &lt;a href="https://snapcraft.io/docs/installing-snapd" target="_blank" rel="noopener noreferrer"&gt;how to install&lt;/a&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo snap install multipass&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then, create your virtual machine:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ multipass launch lts --name percona&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;By default, when running &lt;code&gt;multipass launch lts --name percona&lt;/code&gt;, Multipass will create a virtual machine with 1 GB of RAM and a 4.7 GB disk. A fresh installation of MySQL uses only about 2.4 GB, including the operating system, so a VM created with Multipass’s default configuration is enough for running MySQL.&lt;/p&gt;
&lt;p&gt;If you need a virtual machine with more resources, you can create a custom one with the desired memory, storage, and CPUs.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ multipass launch lts --name percona --mem 2G --disk 10G --cpus &lt;span class="m"&gt;2&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The previous command will create a VM with 2 GB of RAM, a 10 GB disk and 2 CPUs.&lt;/p&gt;
&lt;p&gt;Once your VM is created and launched, you can access it by running &lt;code&gt;multipass shell percona&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;No additional configuration is required. Ports will be open automatically, and you can connect to any service configured on your VM through the IP address assigned by Multipass.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;multipass info percona&lt;/code&gt; will give you information about your VM, including its IP address.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Name: percona
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;State: Running
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;IPv4: 10.203.227.64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Release: Ubuntu 20.04.4 LTS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Image hash: 692406940d6a &lt;span class="o"&gt;(&lt;/span&gt;Ubuntu 20.04 LTS&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Load: 0.09 0.09 0.10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Disk usage: 2.3G out of 9.5G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Memory usage: 550.4M out of 1.9G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Mounts: –&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;10.203.227.64&lt;/code&gt; is the IP address of your virtual machine. You will need this value to set up PMM for monitoring MySQL.&lt;/p&gt;
&lt;p&gt;On the host, run &lt;code&gt;ip route show&lt;/code&gt; to find the IP address by which the host is identified from inside the VM. You will see a line similar to this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;10.203.227.0/24 dev mpqemubr0 proto kernel scope link src 10.203.227.1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;10.203.227.1&lt;/code&gt; is the IP address that Multipass uses to identify the host.&lt;/p&gt;
&lt;p&gt;Both host and virtual machine IP addresses are required for configuring PMM.&lt;/p&gt;
&lt;p&gt;Log into your virtual machine to continue with the installation of Percona Server for MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ multipass shell percona&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="installing-required-packages"&gt;Installing required packages&lt;/h2&gt;
&lt;h3 id="install-curl-and-gnupg2"&gt;Install curl and gnupg2&lt;/h3&gt;
&lt;p&gt;Before installing PMM or Percona Server for MySQL, make sure &lt;code&gt;curl&lt;/code&gt; and &lt;code&gt;gnupg2&lt;/code&gt; are installed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install -y curl gnupg2&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="install-percona-release"&gt;Install percona-release&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://docs.percona.com/percona-software-repositories/percona-release.html" target="_blank" rel="noopener noreferrer"&gt;percona-release&lt;/a&gt; configuration tool allows users to automatically configure which &lt;a href="https://docs.percona.com/percona-software-repositories/repository-location.html" target="_blank" rel="noopener noreferrer"&gt;Percona Software repositories&lt;/a&gt; are enabled or disabled. It supports both apt and yum repositories. Percona Server for MySQL will be installed from the &lt;code&gt;ps80&lt;/code&gt; repository and &lt;code&gt;percona-release&lt;/code&gt; is necessary for activating this repository.&lt;/p&gt;
&lt;p&gt;A good resource to learn more about this tool is &lt;a href="https://www.percona.com/blog/2020/12/15/the-hidden-magic-of-configuring-percona-repositories-with-a-percona-release-package/" target="_blank" rel="noopener noreferrer"&gt;this article&lt;/a&gt; from Percona blog.&lt;/p&gt;
&lt;p&gt;Get the repository packages:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ wget https://repo.percona.com/apt/percona-release_latest.&lt;span class="k"&gt;$(&lt;/span&gt;lsb_release -sc&lt;span class="k"&gt;)&lt;/span&gt;_all.deb&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Install the downloaded package with dpkg:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo dpkg -i percona-release_latest.&lt;span class="k"&gt;$(&lt;/span&gt;lsb_release -sc&lt;span class="k"&gt;)&lt;/span&gt;_all.deb&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="installing-and-configuring-mysql"&gt;Installing and Configuring MySQL&lt;/h2&gt;
&lt;h3 id="install-percona-server-for-mysql"&gt;Install Percona Server for MySQL&lt;/h3&gt;
&lt;p&gt;Enable the &lt;code&gt;ps80&lt;/code&gt; repository:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo percona-release setup ps80&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Install &lt;code&gt;percona-server-server&lt;/code&gt;, the package that provides Percona Server for MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install percona-server-server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After installation, confirm that the service is running. You can check the service status by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ service mysql status&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If the server is running, you will get the following output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;● mysql.service - Percona Server
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/mysql.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Mon 2022-08-01 08:20:59 CDT&lt;span class="p"&gt;;&lt;/span&gt; 1h 20min ago
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Main PID: &lt;span class="m"&gt;15552&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;mysqld&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Status: &lt;span class="s2"&gt;"Server is operational"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Tasks: &lt;span class="m"&gt;38&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;limit: 2339&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Memory: 362.7M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; CGroup: /system.slice/mysql.service
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; └─15552 /usr/sbin/mysqld
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Aug &lt;span class="m"&gt;01&lt;/span&gt; 08:20:57 percona systemd&lt;span class="o"&gt;[&lt;/span&gt;1&lt;span class="o"&gt;]&lt;/span&gt;: Starting Percona Server...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Aug &lt;span class="m"&gt;01&lt;/span&gt; 08:20:59 percona systemd&lt;span class="o"&gt;[&lt;/span&gt;1&lt;span class="o"&gt;]&lt;/span&gt;: Started Percona Server.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Otherwise, start the server:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo service mysql start&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="install-mysql-shell"&gt;Install MySQL Shell&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/" target="_blank" rel="noopener noreferrer"&gt;MySQL Shell&lt;/a&gt; is an advanced client and code editor for MySQL. This document describes the core features of MySQL Shell. In addition to the provided SQL functionality, similar to &lt;code&gt;mysql&lt;/code&gt;, MySQL Shell provides scripting capabilities for JavaScript and Python and includes APIs for working with MySQL&lt;/p&gt;
&lt;p&gt;Install MySQL Shell by running:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install percona-mysql-shell&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;MySQL Shell will be used for configuring PMM to monitor MySQL. When necessary, just log into MySQL Shell as root:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqlsh root@localhost&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It will ask you for the password you assigned to the root user during the installation of Percona Server for MySQL.&lt;/p&gt;
&lt;h2 id="installing-and-configuring-pmm"&gt;Installing and Configuring PMM&lt;/h2&gt;
&lt;p&gt;PMM runs from a container, so Docker must be installed if it is not already on your system. Percona has an &lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/easy-install.html" target="_blank" rel="noopener noreferrer"&gt;easy-install&lt;/a&gt; script that installs Docker and any other required packages, as well as PMM Server itself.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;easy-install&lt;/code&gt; script provided by Percona checks whether Docker is already on your system. If not, it uses the &lt;a href="https://get.docker.com/" target="_blank" rel="noopener noreferrer"&gt;get-docker&lt;/a&gt; script, which creates a &lt;code&gt;docker.list&lt;/code&gt; file inside the &lt;code&gt;/etc/apt/sources.list.d&lt;/code&gt; directory pointing at the official repository, and then installs and configures Docker on your system.&lt;/p&gt;
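&lt;p&gt;The check-then-install logic can be sketched roughly as follows (a simplified illustration of the idea, not the actual script):&lt;/p&gt;

```shell
# Simplified sketch of the easy-install Docker check (illustrative only).
# If docker is missing, fetch and run Docker's official convenience script,
# which adds /etc/apt/sources.list.d/docker.list and installs Docker.
if ! command -v docker >/dev/null; then
  curl -fsSL https://get.docker.com -o get-docker.sh
  sudo sh get-docker.sh
fi
```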
&lt;p&gt;Run the following command to get PMM Server:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl -fsSL https://www.percona.com/get/pmm | /bin/bash&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Install PMM client:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install pmm2-client&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Connect client to server:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo pmm-admin config --server-insecure-tls --server-url&lt;span class="o"&gt;=&lt;/span&gt;https://admin:&lt;password&gt;@pmm.example.com&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;&lt;password&gt;&lt;/code&gt; with the default password (&lt;code&gt;admin&lt;/code&gt;) and &lt;code&gt;pmm.example.com&lt;/code&gt; with &lt;code&gt;localhost&lt;/code&gt;. Once you set up PMM and log into the dashboard from the browser, you will be required to change your password.&lt;/p&gt;
&lt;p&gt;Go to &lt;code&gt;https://localhost&lt;/code&gt; in the browser.&lt;/p&gt;
&lt;p&gt;Note: If you’re running MySQL from a virtual machine, log into your VM before running the following instructions.&lt;/p&gt;
&lt;p&gt;Log into MySQL Shell as root and change to SQL mode:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysqlsh root@localhost
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="se"&gt;\s&lt;/span&gt;ql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create a PMM user for monitoring MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'pmm'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;IDENTIFIED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;BY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'pass'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;WITH&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MAX_USER_CONNECTIONS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PROCESS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SUPER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;REPLICATION&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CLIENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;RELOAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;BACKUP_ADMIN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;TO&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'pmm'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'localhost'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace &lt;code&gt;'pass'&lt;/code&gt; with your desired password.&lt;/p&gt;
&lt;p&gt;Note: Replace &lt;code&gt;'localhost'&lt;/code&gt; with the IP address of the host, if you installed Percona Server for MySQL on a virtual machine.&lt;/p&gt;
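&lt;p&gt;To confirm that the user was created with the intended privileges, you can run a quick check from the command line (a hypothetical session; adjust the host and credentials to your setup):&lt;/p&gt;

```shell
# Ask MySQL Shell to print the grants for the monitoring user.
# You will be prompted for the root password set during installation.
mysqlsh root@localhost --sql -e "SHOW GRANTS FOR 'pmm'@'localhost';"
```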
&lt;p&gt;Register the server for monitoring:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo pmm-admin add mysql --username&lt;span class="o"&gt;=&lt;/span&gt;pmm --password&lt;span class="o"&gt;=&lt;/span&gt;&lt;password&gt; --query-source&lt;span class="o"&gt;=&lt;/span&gt;perfschema&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Where &lt;code&gt;&lt;password&gt;&lt;/code&gt; is the password you assigned to the user created for monitoring MySQL.&lt;/p&gt;
&lt;p&gt;Note: If you installed Percona Server for MySQL on a virtual machine, replace the above command with &lt;code&gt;sudo pmm-admin add mysql --username=pmm --password=&lt;password&gt; --host &lt;virtual-machine-IP-address&gt; --query-source=perfschema&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;PMM is now configured and monitoring MySQL.&lt;/p&gt;</content:encoded>
      <author>Mario García</author>
      <category>Linux</category>
      <category>PMM</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2022/8/pmm-dashboard_hu_f9790b2462a77b1f.jpg"/>
      <media:content url="https://percona.community/blog/2022/8/pmm-dashboard_hu_60b4ee1edbf9d9df.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.29.0 Preview Release</title>
      <link>https://percona.community/blog/2022/07/12/preview-release/</link>
      <guid>https://percona.community/blog/2022/07/12/preview-release/</guid>
      <pubDate>Tue, 12 Jul 2022 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.29.0 Preview Release Hello folks! Percona Monitoring and Management (PMM) 2.29.0 is now available as a Preview Release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-2290-preview-release"&gt;Percona Monitoring and Management 2.29.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.29.0 is now available as a Preview Release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release Notes can be found &lt;a href="https://pmm-doc-release-pr-811.onrender.com/release-notes/2.29.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="known-issues"&gt;Known issues&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-10312" target="_blank" rel="noopener noreferrer"&gt;PMM-10312&lt;/a&gt;: Metrics are not displayed on Experimental Overview and Summary dashboards&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker"&gt;Percona Monitoring and Management server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.29.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.29.0 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-4028.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable percona testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://percona-vm.s3.amazonaws.com/PMM2-Server-2.29.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.29.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ami-0e68224439dd6f200&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Optimizing the Storage of Large Volumes of Metrics for a Long Time in VictoriaMetrics</title>
      <link>https://percona.community/blog/2022/06/02/long-time-keeping-metrics-victoriametrics/</link>
      <guid>https://percona.community/blog/2022/06/02/long-time-keeping-metrics-victoriametrics/</guid>
      <pubDate>Thu, 02 Jun 2022 00:00:00 UTC</pubDate>
      <description>Introduction Nowadays, metrics and logs are the main tools for monitoring any application, and how long they are stored plays an important role. Often, to understand certain processes and predict how they will develop, we need to analyze metrics over a fairly long period of time. When a project is just starting, their volume is relatively small, but over time it grows larger and larger, and optimization becomes necessary. In this article, I will cover the mechanisms for processing, storing, and optimizing metrics during their long-term storage.</description>
      <content:encoded>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Nowadays, metrics and logs are the main tools for monitoring any application, and how long they are stored plays an important role. Often, to understand certain processes and predict how they will develop, we need to analyze metrics over a fairly long period of time. When a project is just starting, their volume is relatively small, but over time it grows larger and larger, and optimization becomes necessary. In this article, I will cover the mechanisms for processing, storing, and optimizing metrics during their long-term storage.&lt;/p&gt;
&lt;h2 id="victoriametrics-at-a-glance"&gt;VictoriaMetrics at a Glance&lt;/h2&gt;
&lt;p&gt;VictoriaMetrics, a monitoring solution and time series database, was released relatively recently, in 2018, but has already gained popularity.&lt;/p&gt;
&lt;p&gt;Initially, VictoriaMetrics was designed as a time series database, but over time it has grown into a full-fledged alternative to Prometheus with its own ecosystem.&lt;/p&gt;
&lt;p&gt;VictoriaMetrics is currently a fast, cost-effective and scalable monitoring solution. You can deploy the application either from a binary file, a docker image or a snap package, or build it yourself from the source code. Single-node and cluster versions are available.&lt;/p&gt;
&lt;h2 id="why-choose-victoriametrics"&gt;Why Choose VictoriaMetrics?&lt;/h2&gt;
&lt;p&gt;The main reasons for switching from Prometheus to VictoriaMetrics for us were significant savings in system requirements and the ability to work in Push mode.&lt;/p&gt;
&lt;p&gt;Despite many external tests, we wanted to get our own data. The test bench had 8 CPUs, 32 GB of RAM, and an SSD drive. The test lasted 24 hours. The source was 25 virtual machines, each emulating 10 MySQL instances. That amounted to 96,100 metrics per second, or about 8.5 billion metrics per day.&lt;/p&gt;
&lt;p&gt;The result: VictoriaMetrics used about three times less disk space (8.44 GB versus 23.11 GB with Prometheus) and roughly half the RAM. The CPU requirements were about the same.&lt;/p&gt;
&lt;p&gt;As for push mode, it works as follows: exporters run on the target host and collect metrics. In the classical scheme, Prometheus polls the exporters and collects metrics at specified intervals. This scheme has a significant disadvantage: several ports must be kept open. The new scheme uses the vmagent component, which is installed on the client side, collects metrics from the exporters, and then pushes them to the VictoriaMetrics server.&lt;/p&gt;
&lt;p&gt;Other important factors in favor of VictoriaMetrics were ease of installation and subsequent support, more flexible performance tuning, and features that other applications lack. For example, Prometheus has no downsampling.&lt;/p&gt;
&lt;h2 id="victoriametrics-single-node-and-cluster-versions"&gt;VictoriaMetrics Single-Node and Cluster Versions&lt;/h2&gt;
&lt;p&gt;VictoriaMetrics can work in two versions: single-node and cluster.&lt;/p&gt;
&lt;p&gt;The single-node version is used for relatively small amounts of data (less than a million metrics per second) and does not provide scalability and fault tolerance, since all application components are connected into a monolith.&lt;/p&gt;
&lt;p&gt;VictoriaMetrics consists of the following components:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;vmstorage&lt;/strong&gt; - the storage itself;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;vminsert&lt;/strong&gt; - endpoint for receiving metrics based on the Prometheus remote_write API;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;vmselect&lt;/strong&gt; - a component that allows you to make queries using the Prometheus querying API.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The official documentation recommends using the single-node version, and the cluster version only if there is a real need and you understand the consequences of such a decision.&lt;/p&gt;
&lt;h2 id="general-principles-of-tsdb-work"&gt;General Principles of TSDB Work&lt;/h2&gt;
&lt;p&gt;A TSDB (time series database) is used to store metrics. It differs from relational databases in many ways: write operations prevail over reads, and there are no relationships between the data. Since a metric has a single value at a given point in time, there is no need for nested structures. The amount of data is typically large.&lt;/p&gt;
&lt;p&gt;The unit of data in such a database is a time point. The data structure of such a point consists of:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;timestamp&lt;/strong&gt; - time in Unix format&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;name&lt;/strong&gt; field, from which the name of the metric is taken. This field can be missing, but this is an antipattern, because in any case we need to know the name of the metric we are tracking.&lt;/li&gt;
&lt;li&gt;Additional Label fields that are needed for any actions with metrics (aggregation by some attribute, filtering, etc.)&lt;/li&gt;
&lt;li&gt;Field with metric value.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When data is recorded, time series are formed. A time series is a sequence of data points with strictly monotonically increasing timestamps that can be accessed by a metric name.&lt;/p&gt;
&lt;p&gt;Thus, we can say that the database is relatively “static”: it contains a certain set of metrics that does not change over time. That is, the data within these time series grows over time, but the number of series remains the same. This is the basis for the optimization examples discussed later.&lt;/p&gt;
&lt;h2 id="optimization-of-large-queries"&gt;Optimization of Large Queries&lt;/h2&gt;
&lt;p&gt;If the number of metrics and their storage time increases, the amount of required resources for the application inevitably increases too. VictoriaMetrics has mechanisms for adjusting consumed resources.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;memory.allowedPercent&lt;/code&gt; and &lt;code&gt;memory.allowedBytes&lt;/code&gt; keys allow you to limit the amount of memory for external buffers and query caching data.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;search.maxUniqueTimeseries&lt;/code&gt; key prevents excessive resource consumption when executing large queries, and in some cases can prevent the application from crashing with an out of memory error when executing large queries. This parameter is set to 300000 by default and reflects the number of unique series returned in response to the request to &lt;code&gt;/api/v1/query&lt;/code&gt; and &lt;code&gt;/api/v1/query_range&lt;/code&gt; endpoints.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;search.maxSamplesPerQuery&lt;/code&gt; key, which limits the number of raw samples a single query may process, can also be very useful.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;search.maxQueueDuration&lt;/code&gt; key limits how long a query may wait in the queue before being executed.&lt;/p&gt;
&lt;p&gt;In general, a fairly large number of keys affect performance. In this post, I mention only those that we use in our practice.&lt;/p&gt;
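&lt;p&gt;Putting the keys above together, a single-node launch might look like this (the flag values are illustrative examples only, not recommendations for your workload):&lt;/p&gt;

```shell
# Hypothetical single-node startup with the tuning flags discussed above.
./victoria-metrics-prod \
  -memory.allowedPercent=60 \
  -search.maxUniqueTimeseries=300000 \
  -search.maxSamplesPerQuery=1000000000 \
  -search.maxQueueDuration=30s
```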
&lt;h2 id="what-downsampling-is-and-how-it-works"&gt;What Downsampling Is and How It Works&lt;/h2&gt;
&lt;p&gt;An important feature of VictoriaMetrics is downsampling - the ability to store data at progressively lower resolution as it ages. This functionality is available only in the Enterprise version. But it is also built into PMM - Percona Monitoring and Management.&lt;/p&gt;
&lt;p&gt;As I mentioned above (in the paragraph describing the features of the TSDB work), time series must have a large amount of data and be unchanged in order to obtain maximum sampling efficiency.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-downsampling.period&lt;/code&gt; key controls this. Example: in our case, metrics are collected once every 5 seconds, so with a large number of metrics the database volume would grow very quickly. We therefore define a storage policy: after one hour, metrics are stored at a 10-second interval; after a day, at 30 seconds; after a week, at 1 minute; after a month, at 5 minutes; and after a year, at 1 hour. It looks like this:
&lt;code&gt;-downsampling.period=1h:10s,1d:30s,1w:1m,30d:5m,360d:1h&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="what-deduplication-is-and-how-it-works"&gt;What Deduplication Is and How It Works&lt;/h2&gt;
&lt;p&gt;Deduplication is a technology that allows you to analyze duplicate data and replace it with an appropriate reference. The use of deduplication can significantly reduce the amount of data. It is used when Prometheus or vmagent are working in HA mode and write to one VictoriaMetrics instance.&lt;/p&gt;
&lt;p&gt;In this case, we definitely need deduplication, since the database stores overlapping data, which significantly increases its volume and, in case of large volumes, the data request time. The &lt;code&gt;dedup.minScrapeInterval&lt;/code&gt; key is responsible for the operation of deduplication.&lt;/p&gt;
&lt;p&gt;For example, &lt;code&gt;-dedup.minScrapeInterval=60s&lt;/code&gt; means that within the same time series, all data will be collapsed and only the first point within each 60 seconds will be saved. Since version 1.77, VictoriaMetrics instead keeps the last raw sample per each &lt;code&gt;-dedup.minScrapeInterval&lt;/code&gt; discrete interval.&lt;/p&gt;
&lt;p&gt;It is recommended to set this parameter to the scrape_interval of your metrics. According to best practice, scrape_interval should be the same for all metrics, but that is a topic for a separate post.&lt;/p&gt;
&lt;h2 id="example-of-deduplication"&gt;Example of Deduplication&lt;/h2&gt;
&lt;p&gt;As an example, let’s consider the case where scrape_interval=10s and minScrapeInterval=15s.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Before deduplication:&lt;/strong&gt; 05, 10, 15, 25, 35, 45, 55&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;interval:&lt;/strong&gt; [00…15] [15…30] [30…45] [45…60]
&lt;strong&gt;timestamp:&lt;/strong&gt; [05 10] [15 25] [35 ] [45 55]&lt;/p&gt;
&lt;p&gt;Thus, after deduplication, only those points will remain: 05, 15, 35, 45.&lt;/p&gt;
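&lt;p&gt;The example above can be checked with a small script (a sketch of the pre-1.77 "keep the first point per interval" rule, not actual VictoriaMetrics code):&lt;/p&gt;

```shell
# Simulate dedup.minScrapeInterval=15s over the sample timestamps (in seconds).
# Each point falls into a 15-second bucket; the first point per bucket survives.
interval=15
declare -A seen
kept=()
for ts in 05 10 15 25 35 45 55; do
  bucket=$(( 10#$ts / interval ))
  if [[ -z ${seen[$bucket]} ]]; then
    seen[$bucket]=1
    kept+=("$ts")
  fi
done
echo "${kept[@]}"   # prints: 05 15 35 45
```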
&lt;h2 id="rotation-of-metrics"&gt;Rotation of Metrics&lt;/h2&gt;
&lt;p&gt;Rotation of metrics is their removal when they become obsolete. The retentionPeriod key is responsible for rotation in VictoriaMetrics. By default, this period is 30 days, so you should set the required storage period right away when launching VictoriaMetrics. Let’s dive a little deeper into the features of data storage. Example: we set the metrics rotation time to 1 year, 4 months, 2 weeks, 3 days, and 5 hours: &lt;code&gt;-retentionPeriod=1.3y2w3d5h&lt;/code&gt;. The year is a fractional number here to avoid confusion with “m”, which can mean both month and minute.&lt;/p&gt;
&lt;p&gt;When writing, data is stored in directories like ../data/{small,big}. These directories contain partitions named like rowsCount_blocksCount_minTimestamp_maxTimestamp. The directories are rotated as follows: in the first unit of time of the selected period (day, week, month, year), metrics for the period preceding the previous one are deleted. Example: metric rotation is set to 1 month. On March 1, the directory containing the data for January is deleted. An important feature is that the rotation time can be increased without any data loss on a running instance. If the rotation time is reduced, the data that falls outside the new period is deleted accordingly.&lt;/p&gt;
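&lt;p&gt;Putting retention and deduplication together, a launch command might look like this (the binary path is a placeholder; the flag values are the ones discussed in this post):&lt;/p&gt;

```shell
# Example invocation: roughly 1 year 4 months of retention plus 60s deduplication
./victoria-metrics -retentionPeriod=1.3y2w3d5h -dedup.minScrapeInterval=60s
```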
&lt;p&gt;Based on personal experience, a data retention period of one and a half years is sufficient for metrics.&lt;/p&gt;
&lt;p&gt;It is also worth emphasizing that even if you plan to store data indefinitely, you still need to set a data retention period; in that case, set it to a very large number, for example, 900 years.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this post, we looked into the possibilities that VictoriaMetrics offers to optimize the storage of metrics and reduce the usage of disk space and RAM. It can help you reduce your costs significantly, especially if you need to store a lot of metrics. But be mindful about which parameters you set, with a clear understanding of your goals.&lt;/p&gt;</content:encoded>
      <author>Anton Bystrov</author>
      <category>blog</category>
      <category>metrics</category>
      <category>VictoriaMetrics</category>
      <media:thumbnail url="https://percona.community/blog/2022/6/VictoriaMetrics_hu_935a9cc1c8f27e32.jpg"/>
      <media:content url="https://percona.community/blog/2022/6/VictoriaMetrics_hu_239f0c8b38cab899.jpg" medium="image"/>
    </item>
    <item>
      <title>Reduce Replication Lag</title>
      <link>https://percona.community/blog/2022/06/01/speed-up-replication-lag/</link>
      <guid>https://percona.community/blog/2022/06/01/speed-up-replication-lag/</guid>
      <pubDate>Wed, 01 Jun 2022 00:00:00 UTC</pubDate>
      <description>Replication Lag is just a fact of life with async replication. We can’t stop lag, but we can help reduce it. Many times the Seconds_Behind_Source value can be very deceiving; I have seen it go from 1 hour behind to 0 lag in the blink of an eye. There are many factors that can add to replica lag. Some of these are:</description>
      <content:encoded>&lt;p&gt;Replication Lag is just a fact of life with async replication. We can’t stop lag, but we can help reduce it. Many times the Seconds_Behind_Source value can be very deceiving; I have seen it go from 1 hour behind to 0 lag in the blink of an eye. There are many factors that can add to replica lag. Some of these are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Network IO&lt;/li&gt;
&lt;li&gt;Disk IO&lt;/li&gt;
&lt;li&gt;Database Workload&lt;/li&gt;
&lt;li&gt;Database settings&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this blog we will look at a few database settings that help reduce lag. The settings we will look at are listed below.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;binlog_transaction_dependency_tracking&lt;/li&gt;
&lt;li&gt;binlog_group_commit_sync_delay&lt;/li&gt;
&lt;li&gt;replica_parallel_type&lt;/li&gt;
&lt;li&gt;replica_parallel_workers&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="hardware"&gt;Hardware:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Two Raspberry Pi 4 with 8GB of RAM.&lt;/li&gt;
&lt;li&gt;Sandisk 128GB Extreme microSDXC card.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="software"&gt;Software:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;OS Raspbian Bullseye 64bit.&lt;/li&gt;
&lt;li&gt;Percona Server version 8.0.26.&lt;/li&gt;
&lt;li&gt;Sysbench 1.0.20.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="testing-setup"&gt;Testing Setup&lt;/h2&gt;
&lt;p&gt;Using Sysbench, I set up 10 tables with 250,000 rows of data each. If you are interested, here is the command I used:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;text&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sysbench /usr/share/sysbench/oltp_read_write.lua \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--mysql-db=YOUR-DB --threads=4 --mysql-host=YOUR-HOST \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--mysql-user=YOUR-USER --mysql-password=YOUR-PASSWORD --tables=10 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--table-size=250000 prepare&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="test-one-with-default-settings"&gt;Test one with default settings&lt;/h2&gt;
&lt;p&gt;If you have not changed any of the default settings, you can skip the changes below. If you are
not sure, verify and change as needed.&lt;/p&gt;
&lt;p&gt;The initial testing was done with the following default settings. Make sure you apply
these settings on both the primary and the replica. You will need to stop replication
on the replica before making the changes. Once the changes are complete, restart replication.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;text&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global binlog_transaction_dependency_tracking = 'COMMIT_ORDER';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global binlog_group_commit_sync_delay = 0;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global replica_parallel_type = 'DATABASE';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global replica_parallel_workers = 0;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Using sysbench, I ran the OLTP read/write test with the following settings:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;text&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sysbench --db-driver=mysql --report-interval=2 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --threads=4 --time=300 --mysql-host=YOUR-HOST \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --mysql-user=YOUR-USER --mysql-password=PASSWD \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --mysql-db=YOUR-DB /usr/share/sysbench/oltp_read_write.lua run&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;At the end of this test, replication lag was &lt;strong&gt;20 minutes&lt;/strong&gt; behind the primary.&lt;/p&gt;
&lt;h2 id="test-two-with-adjusted-settings"&gt;Test two with adjusted settings&lt;/h2&gt;
&lt;p&gt;In the second test we will apply the new settings to help reduce replication lag. Just like in test one,
you will want to make these changes on both the primary and the replica. Make sure to stop replication on the replica
before applying the changes. Once the changes are applied, restart replication on the replica.&lt;/p&gt;
&lt;p&gt;Make the following settings on your primary:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;text&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global binlog_transaction_dependency_tracking = 'writeset';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global replica_parallel_type = 'LOGICAL_CLOCK';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global replica_parallel_workers = 4;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Make the following changes on your replica:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;text&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global binlog_group_commit_sync_delay = 3000;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global replica_parallel_type = 'LOGICAL_CLOCK';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql &gt;set global replica_parallel_workers = 4;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I repeated the test from above. At the end of this test, replication lag was &lt;strong&gt;6 minutes&lt;/strong&gt; behind the primary.&lt;/p&gt;
&lt;h2 id="setting-details"&gt;Setting Details:&lt;/h2&gt;
&lt;h2 id="blinlog_transaction_dependency_tracking--writeset-1"&gt;binlog_transaction_dependency_tracking = writeset&lt;/h2&gt;
&lt;p&gt;This allows transactions that are marked as independent to be applied in parallel on the replica. Note that to take advantage of
this, you need to set replica_parallel_workers to a non-zero value.&lt;/p&gt;
&lt;h2 id="binlog_group_commit_sync_delay--3000"&gt;binlog_group_commit_sync_delay = 3000&lt;/h2&gt;
&lt;p&gt;This controls how many microseconds (3000 microseconds is 3 ms) the binary log commit waits before syncing the binlog to disk. Changing this to a non-zero value
enables more transactions to be synced to disk at one time, which reduces the overall time to commit.&lt;/p&gt;
&lt;h2 id="replica_parallel_type--logical_clock"&gt;replica_parallel_type = logical_clock&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;As of version 8.0.27 the default value is logical_clock.&lt;/strong&gt;
Transactions will be applied in parallel on the replica based on the source timestamps in the binlog.&lt;/p&gt;
&lt;h2 id="replica_parallel_workers--4"&gt;replica_parallel_workers = 4&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;As of 8.0.27 the default value is 4.&lt;/strong&gt;
This enables multithreading on the replica and sets the number of applier threads.&lt;/p&gt;
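&lt;p&gt;To confirm which values are actually in effect, you can query them directly with standard MySQL syntax (run this on both the primary and the replica; shown for reference, it needs a running server):&lt;/p&gt;

```shell
# List the four variables discussed above and their current global values
mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('binlog_transaction_dependency_tracking', 'binlog_group_commit_sync_delay',
   'replica_parallel_type', 'replica_parallel_workers')"
```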
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As we look at the results of both tests, we saw a very big difference in lag. The Sysbench workload might not reflect real-world
database usage, but it does provide us with baseline numbers to compare.&lt;/p&gt;
&lt;p&gt;In the first test we saw a lag of 20 minutes at the end of the sysbench run. In the second test we saw just 6 minutes of lag at
the end of the sysbench run.&lt;/p&gt;
&lt;p&gt;Dropping lag from 20 minutes down to 6 minutes is a decrease of &lt;strong&gt;70%&lt;/strong&gt;. That is a huge decrease.&lt;/p&gt;
&lt;p&gt;As I highlighted above, two of these variables become default values in 8.0.27. I did my testing on 8.0.26 so I could demo these changes
on a version that did not have the new defaults.&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Percona</category>
      <category>MySQL</category>
      <category>Replication</category>
      <category>LAG</category>
      <category>performance</category>
      <media:thumbnail url="https://percona.community/blog/2022/6/snail_hu_71ed8a478bb0a978.jpg"/>
      <media:content url="https://percona.community/blog/2022/6/snail_hu_e154e573474114d6.jpg" medium="image"/>
    </item>
    <item>
      <title>How and Why Contribute to Communities</title>
      <link>https://percona.community/blog/2022/05/30/csi-minikube-multinode/</link>
      <guid>https://percona.community/blog/2022/05/30/csi-minikube-multinode/</guid>
      <pubDate>Mon, 30 May 2022 00:00:00 UTC</pubDate>
      <description>Why? Let’s start with a simple question: “Why contribute?”</description>
      <content:encoded>&lt;h2 id="why"&gt;Why&lt;/h2&gt;
&lt;p&gt;Let’s start with a simple question: “Why contribute?”&lt;/p&gt;
&lt;p&gt;In our day-to-day lives as developers and users, we use tons of open source software (OSS). People develop that software together to be able to use it in a more standard and open way, so they spend less time negotiating interfaces and tools (and that is not the main reason, just one of the reasons for OSS).&lt;/p&gt;
&lt;p&gt;As with any sustainable process, OSS development needs not only users but also contributors, to be able to move a project forward as well as to keep up with bugs, time, and new tech trends. As users, we have different use cases that might not be implemented yet but could be very valuable for other users.&lt;/p&gt;
&lt;p&gt;As an example, I use &lt;code&gt;minikube&lt;/code&gt; for the development and testing of the &lt;a href="https://docs.percona.com/percona-monitoring-and-management/using/dbaas.html" target="_blank" rel="noopener noreferrer"&gt;PMM DBaaS solution&lt;/a&gt;. That tool allows me to run Kubernetes (k8s) locally and run &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;Percona operators&lt;/a&gt; with the help of DBaaS.&lt;/p&gt;
&lt;p&gt;One of the great &lt;code&gt;minikube&lt;/code&gt; features is to run real multi-node k8s clusters (see this &lt;a href="https://percona.community/blog/2021/12/20/pmm-minikube-postgres/" target="_blank" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; for details):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube start --nodes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt; --cpus&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt; --memory&lt;span class="o"&gt;=&lt;/span&gt;8G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube Ready control-plane,master 2d22h v1.22.3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube-m02 Ready &amp;lt;none&amp;gt; 2d22h v1.22.3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube-m03 Ready &amp;lt;none&amp;gt; 2d22h v1.22.3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube-m04 Ready &amp;lt;none&amp;gt; 2d22h v1.22.3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I usually run integration test with &lt;code&gt;--driver=kvm&lt;/code&gt; and some simple sanity tests with &lt;code&gt;--driver=podman&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;During my testing I found out that I can’t deploy operators with DBaaS on &lt;code&gt;minikube&lt;/code&gt; multi-node cluster and I found similar &lt;a href="https://perconadev.atlassian.net/browse/K8SPXC-879" target="_blank" rel="noopener noreferrer"&gt;Jira issue about it&lt;/a&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;console&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-console" data-lang="console"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="gp"&gt;$&lt;/span&gt; kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;percona-server-mongodb-operator-fcc5c8d6-rphcs 1/1 Running 0 3h11m
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;percona-xtradb-cluster-operator-566848cf48-zm28g 1/1 Running 0 3h11m
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;pmm-0 1/1 Running 0 8m19s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;test-haproxy-0 2/3 Running 0 9s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;test-pxc-0 0/2 Init:CrashLoopBackOff 1 (5s ago) 9s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt; kubectl logs test-pxc-0 pxc-init
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;++ id -u
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;++ id -g
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;+ install -o 2 -g 2 -m 0755 -D /pxc-entrypoint.sh /var/lib/mysql/pxc-entrypoint.sh
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;install: cannot create regular file '/var/lib/mysql/pxc-entrypoint.sh': Permission denied
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So that is the Why: the ability to use &lt;code&gt;minikube&lt;/code&gt; to test the operator’s DB deployments.&lt;/p&gt;
&lt;h2 id="community-hackdays"&gt;Community Hackdays&lt;/h2&gt;
&lt;p&gt;Percona engineering management came up with the idea of dedicating a Focus day (we have those in Percona :) to community contributions. That was a great initiative: even if community contribution is part of our routine (we do it day to day when needed), having a dedicated day is a nice way to educate others on how to do it with a good set of examples.&lt;/p&gt;
&lt;p&gt;I went with my &lt;code&gt;minikube&lt;/code&gt; multi-node issue as an example of both day-to-day work and what can be achieved during one community hackday.&lt;/p&gt;
&lt;h3 id="day-to-day-community-hacking"&gt;Day to day community hacking&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;minikube&lt;/code&gt; issue affects me as a developer, so I spent a day investigating it and half a day finding a workaround and next steps.&lt;/p&gt;
&lt;p&gt;First, I spent quite some time understanding what was going on and whether it was an issue in &lt;code&gt;minikube&lt;/code&gt;, in DBaaS, or maybe in the operator. It was interesting detective work, and I found that it is indeed a &lt;code&gt;minikube&lt;/code&gt;-related issue and that a similar issue already exists on GitHub: &lt;a href="https://github.com/kubernetes/minikube/issues/12360" target="_blank" rel="noopener noreferrer"&gt;kubernetes/minikube #12360&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I described my findings in &lt;a href="https://github.com/kubernetes/minikube/issues/12360#issuecomment-1123247475" target="_blank" rel="noopener noreferrer"&gt;this comment&lt;/a&gt; and later found a workaround that enables me and my colleagues to continue using &lt;code&gt;minikube&lt;/code&gt; in a &lt;a href="https://github.com/kubernetes/minikube/issues/12360#issuecomment-1123794143" target="_blank" rel="noopener noreferrer"&gt;multi-node setup&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That was day-to-day community hacking. I also spent a little time figuring out how to fix it properly and joined the &lt;a href="https://minikube.sigs.k8s.io/docs/contrib/triage/" target="_blank" rel="noopener noreferrer"&gt;Minikube Triage party&lt;/a&gt; to discuss the issue (sorry folks, I still need to find time to join regularly and help with triaging).&lt;/p&gt;
&lt;p&gt;And there I left it until the next opportunity to contribute.&lt;/p&gt;
&lt;h3 id="hackday"&gt;Hackday&lt;/h3&gt;
&lt;p&gt;The opportunity presented itself quite quickly with the new Community Hackday initiative, and I decided it would be a great time to fix part of the issue, as a complete fix would take longer than a day.&lt;/p&gt;
&lt;p&gt;First step in fixing &lt;a href="https://github.com/kubernetes/minikube/issues/12360" target="_blank" rel="noopener noreferrer"&gt;kubernetes/minikube #12360&lt;/a&gt; is to fix &lt;a href="https://github.com/kubernetes-csi/csi-driver-host-path" target="_blank" rel="noopener noreferrer"&gt;kubernetes-csi/csi-driver-host-path&lt;/a&gt; to support unprivileged containers.&lt;/p&gt;
&lt;p&gt;So I took it on for the day, and here I describe my progress…&lt;/p&gt;
&lt;h2 id="contributing-to-the-community-project"&gt;Contributing to the community project&lt;/h2&gt;
&lt;p&gt;Your first help on how to contribute is usually the &lt;a href="https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/README.md" target="_blank" rel="noopener noreferrer"&gt;README.md&lt;/a&gt; and &lt;a href="https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/CONTRIBUTING.md" target="_blank" rel="noopener noreferrer"&gt;CONTRIBUTING.md&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I started with forking the repo on GH (GitHub) UI and cloning it locally:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ git clone git@github.com:denisok/csi-driver-host-path.git&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The first thing I wanted to do was compile the code, build the container, and reproduce the issue.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;cd&lt;/span&gt; csi-driver-host-path
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make container
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;./release-tools/verify-go-version.sh &lt;span class="s2"&gt;"go"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;======================================================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; WARNING
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This projects is tested with Go v1.18.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Your current Go version is v1.16.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; This may or may not be close enough.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; In particular test-gofmt and test-vendor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; are known to be sensitive to the version of
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Go.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;======================================================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir -p bin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# os_arch_seen captures all of the $os-$arch-$buildx_platform seen for the current binary&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# that we want to build, if we've seen an $os-$arch-$buildx_platform before it means that&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# we don't need to build it again, this is done to avoid building&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# the windows binary multiple times (see the default value of $BUILD_PLATFORMS)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nv"&gt;os_arch_seen&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="o"&gt;&amp;&amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; tr &lt;span class="s1"&gt;';'&lt;/span&gt; &lt;span class="s1"&gt;'\n'&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; -r os arch buildx_platform suffix base_image addon_image&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="nv"&gt;os_arch_seen_pre&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;os_arch_seen&lt;/span&gt;&lt;span class="p"&gt;%%&lt;/span&gt;&lt;span class="nv"&gt;$os&lt;/span&gt;&lt;span class="p"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$arch&lt;/span&gt;&lt;span class="p"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$buildx_platform&lt;/span&gt;&lt;span class="p"&gt;*&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; ! &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="si"&gt;${#&lt;/span&gt;&lt;span class="nv"&gt;os_arch_seen_pre&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="si"&gt;${#&lt;/span&gt;&lt;span class="nv"&gt;os_arch_seen&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="k"&gt;fi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; ! &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;set&lt;/span&gt; -x&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ./cmd/hostpathplugin &lt;span class="o"&gt;&amp;&amp;&lt;/span&gt; &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$os&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;GOARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$arch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; go build -a -ldflags &lt;span class="s1"&gt;' -X main.version=v1.8.0-6-g50b99a39 -extldflags "-static"'&lt;/span&gt; -o &lt;span class="s2"&gt;"/home/dkondratenko/Workspace/github/csi-driver-host-path/bin/hostpathplugin&lt;/span&gt;&lt;span class="nv"&gt;$suffix&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Building hostpathplugin for GOOS=&lt;/span&gt;&lt;span class="nv"&gt;$os&lt;/span&gt;&lt;span class="s2"&gt; GOARCH=&lt;/span&gt;&lt;span class="nv"&gt;$arch&lt;/span&gt;&lt;span class="s2"&gt; failed, see error(s) above."&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="nb"&gt;exit&lt;/span&gt; 1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="k"&gt;fi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; &lt;span class="nv"&gt;os_arch_seen&lt;/span&gt;&lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="s2"&gt;";&lt;/span&gt;&lt;span class="nv"&gt;$os&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$arch&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$buildx_platform&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt;&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ &lt;span class="nb"&gt;cd&lt;/span&gt; ./cmd/hostpathplugin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ &lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ &lt;span class="nv"&gt;GOARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ go build -a -ldflags &lt;span class="s1"&gt;' -X main.version=v1.8.0-6-g50b99a39 -extldflags "-static"'&lt;/span&gt; -o /home/dkondratenko/Workspace/github/csi-driver-host-path/bin/hostpathplugin .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker build -t hostpathplugin:latest -f Dockerfile --label &lt;span class="nv"&gt;revision&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1.8.0-6-g50b99a39 .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 1/7: FROM alpine
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 2/7: LABEL &lt;span class="nv"&gt;maintainers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Kubernetes Authors"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; Using cache 9172a5d022e2a2550bcb0f6f7faa0b6a2126dcf7c1a0266924f4989370fbf80e
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; 9172a5d022e
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 3/7: LABEL &lt;span class="nv"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"HostPath Driver"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; Using cache 532cdc0c943df037d70368de6b7e90adb39dda3c6f9d7645c7ca6a9bd8d50abd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; 532cdc0c943
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 4/7: ARG &lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./bin/hostpathplugin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; Using cache 762a2b09549d02f9cd3d1dd8220c1b6890ae48efc155ae7aff276ae53bf7836b
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; 762a2b09549
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 5/7: RUN apk add util-linux coreutils &lt;span class="o"&gt;&amp;&amp;&lt;/span&gt; apk update &lt;span class="o"&gt;&amp;&amp;&lt;/span&gt; apk upgrade
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; Using cache 4bd7cf3998cc06cfdc780d3abdf6cedc452170ad93cf46cd3f4d12a8f5f97f09
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; 4bd7cf3998c
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 6/7: COPY &lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; /hostpathplugin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; a8e75bbeab1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;STEP 7/7: ENTRYPOINT &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/hostpathplugin"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;COMMIT hostpathplugin:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--&gt; b0014a637af
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Successfully tagged localhost/hostpathplugin:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;b0014a637af31632b48f39def813637ad0d83d11d008d5b89edb52f28498b805
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ podman images
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;REPOSITORY TAG IMAGE ID CREATED SIZE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;none&gt; &lt;none&gt; 1ec47f8d8558 &lt;span class="m"&gt;46&lt;/span&gt; seconds ago 35.6 MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;localhost/hostpathplugin latest f36f889fb57b &lt;span class="m"&gt;2&lt;/span&gt; minutes ago 35.6 MB&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It turned out to be straightforward, since I already had Go 1.18 and podman set up on my machine.&lt;/p&gt;
&lt;p&gt;Now that I have an image, I need to reproduce the issue: spin up a Kubernetes cluster, set up the CSI driver, and upload my custom container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube start --nodes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt; --cpus&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt; --memory&lt;span class="o"&gt;=&lt;/span&gt;2G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube addons disable storage-provisioner
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🌑 &lt;span class="s2"&gt;"The 'storage-provisioner' addon is disabled"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl delete storageclass standard
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;storageclass.storage.k8s.io &lt;span class="s2"&gt;"standard"&lt;/span&gt; deleted
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;cd&lt;/span&gt; deploy/kubernetes-distributed/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;kubernetes-distributed&lt;span class="o"&gt;]&lt;/span&gt;$ ./deploy.sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;applying RBAC rules
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/v3.1.0/deploy/kubernetes/rbac.yaml --output /tmp/tmp.yXGWmlOXv9/rbac.yaml --silent --location
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply --kustomize /tmp/tmp.yXGWmlOXv9
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;serviceaccount/csi-provisioner created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;role.rbac.authorization.k8s.io/external-provisioner-cfg created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;csistoragecapacities.v1beta1.storage.k8s.io:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; No resources found in default namespace.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;deploying with CSIStorageCapacity v1beta1: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;deploying hostpath components
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ./hostpath/csi-hostpath-driverinfo.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;csidriver.storage.k8s.io/hostpath.csi.k8s.io created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ./hostpath/csi-hostpath-plugin.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; using image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; using image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; using image: k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; using image: k8s.gcr.io/sig-storage/livenessprobe:v2.6.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;daemonset.apps/csi-hostpathplugin created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ./hostpath/csi-hostpath-storageclass-fast.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;storageclass.storage.k8s.io/csi-hostpath-fast created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ./hostpath/csi-hostpath-storageclass-slow.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;storageclass.storage.k8s.io/csi-hostpath-slow created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ./hostpath/csi-hostpath-testing.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; using image: docker.io/alpine/socat:1.7.4.3-r0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/hostpath-service created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/csi-hostpath-socat created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl patch storageclass csi-hostpath-fast -p &lt;span class="s1"&gt;'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;storageclass.storage.k8s.io/csi-hostpath-fast patched&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now I have a Kubernetes cluster with 2 nodes. I disabled the standard &lt;code&gt;minikube&lt;/code&gt; storage-provisioner (which doesn’t support multi-node setups), deleted the &lt;code&gt;storageclass&lt;/code&gt; that was backed by that provisioner, and set up the CSI hostpathplugin. I also set the &lt;code&gt;default&lt;/code&gt; annotation on the hostpathplugin &lt;code&gt;storageclass&lt;/code&gt; so it would provision PVCs for me.&lt;/p&gt;
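&lt;p&gt;To double-check which &lt;code&gt;storageclass&lt;/code&gt; is now the default, we can look for the &lt;code&gt;storageclass.kubernetes.io/is-default-class&lt;/code&gt; annotation that the patch above added. A small sketch against a saved copy of the object (the file path and YAML contents here are illustrative, not taken from the cluster):&lt;/p&gt;

```shell
# Sketch: spot the default StorageClass by its annotation.
# kubectl is not assumed here; we grep a saved copy of the object,
# with contents matching the patch applied above.
printf '%s\n' \
  'apiVersion: storage.k8s.io/v1' \
  'kind: StorageClass' \
  'metadata:' \
  '  name: csi-hostpath-fast' \
  '  annotations:' \
  '    storageclass.kubernetes.io/is-default-class: "true"' \
  'provisioner: hostpath.csi.k8s.io' > /tmp/sc.yaml
# Print the name of the StorageClass carrying the default-class annotation:
grep -B2 'is-default-class: "true"' /tmp/sc.yaml | grep 'name:'
```

&lt;p&gt;On the live cluster the same check would run over &lt;code&gt;kubectl get storageclass -o yaml&lt;/code&gt; output instead of a saved file.&lt;/p&gt;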
&lt;p&gt;Let’s create a test manifest, &lt;code&gt;perm_test.yaml&lt;/code&gt;, to reproduce the issue:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;apps/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;StatefulSet&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;securityContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;fsGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;65534&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;runAsGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;65534&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;runAsNonRoot&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;runAsUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;65534&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;busybox&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/sh"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"-c"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; touch /mnt/perm_test/file_test &amp;&amp; echo passed &amp;&amp; sleep 3600 &amp;&amp; exit 0
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; echo failed
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; exit 1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumeMounts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;mountPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;/mnt/perm_test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumeClaimTemplates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;perm-test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;accessModes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ReadWriteOnce"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;1G&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And apply it to confirm that we really do have a problem with an unprivileged container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl apply -f perm_test.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;statefulset.apps/perm-test created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl logs perm-test-0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;touch: /mnt/perm_test/file_test: Permission denied
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;failed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get pods -o wide
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;csi-hostpath-socat-0 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 24h 10.244.1.13 minikube-m02 &lt;none&gt; &lt;none&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;csi-hostpathplugin-fnhvr 4/4 Running &lt;span class="m"&gt;0&lt;/span&gt; 2m27s 10.244.0.24 minikube &lt;none&gt; &lt;none&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;csi-hostpathplugin-w5rxt 4/4 Running &lt;span class="m"&gt;0&lt;/span&gt; 2m30s 10.244.1.55 minikube-m02 &lt;none&gt; &lt;none&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perm-test-0 0/1 Error &lt;span class="m"&gt;0&lt;/span&gt; 2m18s 10.244.1.56 minikube-m02 &lt;none&gt; &lt;none&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If we put a &lt;code&gt;sleep 3600&lt;/code&gt; before &lt;code&gt;exit 1&lt;/code&gt;, we can jump into the container and inspect the permissions:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; --stdin --tty perm-test-0 -- sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;uid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;65534&lt;span class="o"&gt;(&lt;/span&gt;nobody&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;gid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;65534&lt;span class="o"&gt;(&lt;/span&gt;nobody&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;65534&lt;span class="o"&gt;(&lt;/span&gt;nobody&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ stat /mnt/perm_test
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;File: /mnt/perm_test
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Size: &lt;span class="m"&gt;40&lt;/span&gt; Blocks: &lt;span class="m"&gt;0&lt;/span&gt; IO Block: &lt;span class="m"&gt;4096&lt;/span&gt; directory
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Device: 10h/16d Inode: &lt;span class="m"&gt;82570&lt;/span&gt; Links: &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Access: &lt;span class="o"&gt;(&lt;/span&gt;0755/drwxr-xr-x&lt;span class="o"&gt;)&lt;/span&gt; Uid: &lt;span class="o"&gt;(&lt;/span&gt; 0/ root&lt;span class="o"&gt;)&lt;/span&gt; Gid: &lt;span class="o"&gt;(&lt;/span&gt; 0/ root&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Access: 2022-05-27 13:21:56.905860356 +0000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Modify: 2022-05-27 13:21:56.905860356 +0000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Change: 2022-05-27 13:21:56.905860356 +0000&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As we can see, the directory has &lt;code&gt;Access: (0755/drwxr-xr-x)&lt;/code&gt;, so the &lt;code&gt;nobody&lt;/code&gt; user has no write permission on it and file creation fails. We can also see a couple of pods running for the CSI plugin that actually provisions the PVs/PVCs.&lt;/p&gt;
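&lt;p&gt;The failing &lt;code&gt;touch&lt;/code&gt; is plain POSIX permission checking; here is a minimal local sketch of the same failure mode (assuming GNU coreutils &lt;code&gt;stat&lt;/code&gt; on Linux), together with the mode that &lt;code&gt;fsGroup&lt;/code&gt; handling is expected to produce instead:&lt;/p&gt;

```shell
# A 0755 directory grants write access only to its owner; group and
# others get r-x, so a process running as uid 65534 (nobody) cannot
# create files in a root-owned 0755 mount point.
dir=$(mktemp -d)
chmod 0755 "$dir"
mode_before=$(stat -c '%a' "$dir")
echo "before: $mode_before"
# With fsGroup honored, the volume root would carry the pod's group plus
# the group write bit, e.g. 0775, and nobody (gid 65534) could write.
chmod 0775 "$dir"
mode_after=$(stat -c '%a' "$dir")
echo "after: $mode_after"
rmdir "$dir"
```

&lt;p&gt;That is exactly the gap we saw from inside the pod: the mount point stayed at root-owned &lt;code&gt;0755&lt;/code&gt; instead of being chowned/chmodded for the pod’s &lt;code&gt;fsGroup&lt;/code&gt;.&lt;/p&gt;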
&lt;p&gt;Clean up:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl delete -f perm_test.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl delete pvc perm-test-perm-test-0&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I made code changes to add more logging to understand the program flow better and to see when, if ever, the permissions actually change.
Along the way I learned a little about &lt;a href="https://github.com/google/glog#verbose-logging" target="_blank" rel="noopener noreferrer"&gt;glog&lt;/a&gt; and that the containers are started with &lt;code&gt;-v=5&lt;/code&gt;, so the Info level is enabled by default.&lt;/p&gt;
&lt;p&gt;Let’s build a new image with those changes, upload it to minikube, and modify the DaemonSet (CSI driver):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make container
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ rm hostpath.tar
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ podman save --format docker-archive -o hostpath.tar localhost/hostpathplugin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 4fc242d58285 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 89f8b151f422 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 57a9469e70ba &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying config 29ba4a1533 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Writing manifest to image destination
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Storing signatures
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube image load ./hostpath.tar
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube image ls
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker.io/localhost/hostpathplugin:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl &lt;span class="nb"&gt;set&lt;/span&gt; image ds/csi-hostpathplugin &lt;span class="nv"&gt;hostpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost/hostpathplugin:latest&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Another way to modify the DaemonSet is to run &lt;code&gt;$ kubectl edit ds csi-hostpathplugin&lt;/code&gt; and change something. For example, I toggled &lt;code&gt;-v=6&lt;/code&gt; and back to &lt;code&gt;-v=5&lt;/code&gt; so it would restart all containers with the new image (that I uploaded).&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;stuck&lt;/strong&gt;: I actually spent 2h trying to understand why I didn’t see the logs I had added, which led me to learn &lt;code&gt;glog&lt;/code&gt;, but the cause was quite simple. By default, &lt;code&gt;kubectl logs csi-hostpathplugin-w5rxt&lt;/code&gt; shows logs for the default container, not for hostpath. So I just needed to pass the right parameter: &lt;code&gt;kubectl logs csi-hostpathplugin-w5rxt -c hostpath&lt;/code&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Adding a volume to the pod happens in a couple of stages: &lt;code&gt;hostpath.go&lt;/code&gt; creates a directory on the needed node, and &lt;code&gt;nodeserver.go&lt;/code&gt; publishes this volume to the pod by &lt;code&gt;bind&lt;/code&gt; mounting the target pod’s &lt;code&gt;mount&lt;/code&gt; directory to the volume directory created by &lt;code&gt;hostpath.go&lt;/code&gt;.
Please check the &lt;a href="https://github.com/container-storage-interface/spec/blob/master/spec.md" target="_blank" rel="noopener noreferrer"&gt;Spec&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The logging showed me that the permissions didn’t change from stage to stage but weren’t set up correctly at directory creation:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MountAccess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MkdirAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;0777&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I logged the mode before and after this call; as it looks, 0777 should be the right one (allowing everyone to rwx the directory):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;console&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-console" data-lang="console"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;I0527 19:09:38.234437 1 hostpath.go:177] VolumePath: /csi-data-dir/8dc9889d-ddf0-11ec-b319-7e80679203b2 AccessType: 0
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="go"&gt;I0528 07:07:57.543195 1 hostpath.go:187] mode info: -rwxr-xr-x for user: 0 group: 0
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So the mode is actually 0755 instead of the 0777 requested in MkdirAll, and the documentation clarifies why:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MkdirAll creates a directory named path, along with any necessary parents, and returns nil, or else returns an error.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The permission bits perm (before umask) are used for all directories that MkdirAll creates.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;If path is already a directory, MkdirAll does nothing and returns nil.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s check the umask for the root user (&lt;code&gt;minikube ssh -n minikube-m02&lt;/code&gt;):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;umask&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;0022&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ getfacl --default /tmp/hostpath-provisioner/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;getfacl: Removing leading &lt;span class="s1"&gt;'/'&lt;/span&gt; from absolute path names
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# file: tmp/hostpath-provisioner/&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# owner: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# group: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ getfacl /tmp/hostpath-provisioner/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;getfacl: Removing leading &lt;span class="s1"&gt;'/'&lt;/span&gt; from absolute path names
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# file: tmp/hostpath-provisioner/&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# owner: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# group: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;user::rwx
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;group::r-x
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;other::r-x&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The mkdir syscall actually takes the umask into account, which is 022 here. The umask may even be ignored entirely when a default ACL is propagated from the parent directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://man7.org/linux/man-pages/man2/mkdir.2.html" target="_blank" rel="noopener noreferrer"&gt;https://man7.org/linux/man-pages/man2/mkdir.2.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://man7.org/linux/man-pages/man2/umask.2.html" target="_blank" rel="noopener noreferrer"&gt;https://man7.org/linux/man-pages/man2/umask.2.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In my case there are no default ACLs, but the umask is set to 022, so (0777 &amp; ~0022 &amp; 0777) gives us 0755.&lt;/p&gt;
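&lt;p&gt;The same calculation can be checked in a couple of lines of Go (a standalone sketch; Go spells bit clearing as &lt;code&gt;&amp;^&lt;/code&gt; rather than &lt;code&gt;&amp; ~&lt;/code&gt;):&lt;/p&gt;

```go
package main

import "fmt"

func main() {
	const requested, umask = 0o777, 0o022
	// mode & ~umask & 0777, as described in mkdir(2)
	effective := requested &^ umask & 0o777
	fmt.Printf("%04o\n", effective) // 0755
}
```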
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;umask&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;0022&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ getfacl --default /tmp/hostpath-provisioner/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;getfacl: Removing leading &lt;span class="s1"&gt;'/'&lt;/span&gt; from absolute path names
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# file: tmp/hostpath-provisioner/&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# owner: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# group: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ getfacl /tmp/hostpath-provisioner/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;getfacl: Removing leading &lt;span class="s1"&gt;'/'&lt;/span&gt; from absolute path names
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# file: tmp/hostpath-provisioner/&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# owner: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# group: root&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;user::rwx
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;group::r-x
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;other::r-x&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So that was it: we need to get rid of the umask’s effect, and the proposed fix is:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Chmod&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;0777&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;glog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;V&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Couldn't change volume permissions: %w"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I cleaned up once again, compiled, built, and pushed the container. Tested it, and it works!&lt;/p&gt;
&lt;p&gt;I created the branch on my fork, pushed it to my repo and followed PR procedure to create &lt;a href="https://github.com/kubernetes-csi/csi-driver-host-path/pull/356" target="_blank" rel="noopener noreferrer"&gt;kubernetes-csi/csi-driver-host-path #356&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That was the end of my Hackday and one step toward solving the issue in a more general way.&lt;/p&gt;
&lt;h2 id="value"&gt;Value&lt;/h2&gt;
&lt;p&gt;The exercise has a lot of value for me and for Percona. I learned a lot of new things about k8s PV/PVC provisioning and CSI. For Percona, we enabled development (devs and CI/CD) to run deployments on local multi-node k8s clusters.&lt;/p&gt;
&lt;p&gt;And hopefully it has value for everyone else who needs to run unprivileged containers on multi-node clusters with PVCs.&lt;/p&gt;
&lt;p&gt;Altogether, people developing OSS projects benefit from each other, build better and more innovative Open-Source Software, and have a lot of fun :) .&lt;/p&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>PMM</category>
      <category>Minikube</category>
      <category>CSI</category>
      <category>Kubernetes</category>
      <category>k8s</category>
      <category>Operator</category>
      <media:thumbnail url="https://percona.community/blog/2022/5/how_and_why_contirbute_hu_93f4f14383ebc0a8.jpg"/>
      <media:content url="https://percona.community/blog/2022/5/how_and_why_contirbute_hu_daebc0884a499935.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.28.0 Preview Release</title>
      <link>https://percona.community/blog/2022/05/05/preview-release/</link>
      <guid>https://percona.community/blog/2022/05/05/preview-release/</guid>
      <pubDate>Thu, 05 May 2022 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.28.0 Preview Release Hello folks! Percona Monitoring and Management (PMM) 2.28.0 is now available as a Preview Release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-2280-preview-release"&gt;Percona Monitoring and Management 2.28.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Hello folks! Percona Monitoring and Management (PMM) 2.28.0 is now available as a Preview Release.&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release Notes can be found &lt;a href="https://pmm-doc-release-pr-781.onrender.com/release-notes/2.28.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-server-docker"&gt;Percona Monitoring and Management server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.28.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="percona-monitoring-and-management-client-package-installation"&gt;Percona Monitoring and Management client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.28.0 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-3776.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable percona testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3.amazonaws.com/PMM2-Server-2.28.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.28.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.percona.com/percona-monitoring-and-management/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact: &lt;code&gt;ami-09ce0dc58b2f81889&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>The MySQL Workshop Book Review</title>
      <link>https://percona.community/blog/2022/05/03/the-mysql-workshop-book-review/</link>
      <guid>https://percona.community/blog/2022/05/03/the-mysql-workshop-book-review/</guid>
      <pubDate>Tue, 03 May 2022 00:00:00 UTC</pubDate>
      <description>Good books on MySQL for beginners are rare and excellent ones are even rarer. I often get requests from novices starting with MySQL or intermediates looking to level up on recommendations on books targeted at their level. The MySQL Workshop (Amazon link) by Thomas Pettit and Scott Cosentino is a must buy for those two groups, or those of us who would like a handy reference.</description>
      <content:encoded>&lt;p&gt;Good books on MySQL for beginners are rare and excellent ones are even rarer. I often get requests from novices starting with MySQL or intermediates looking to level up on recommendations on books targeted at their level. The MySQL Workshop (&lt;a href="https://www.amazon.com/MySQL-Workshop-Interactive-Approach-Learning-ebook/dp/B084T32T3B/" target="_blank" rel="noopener noreferrer"&gt;Amazon link&lt;/a&gt;) by Thomas Pettit and Scott Cosentino is a must buy for those two groups, or those of us who would like a handy reference.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/5/TheMySQLWorkshopBook.jpg" alt="The MySQL Workshop Book Review" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is a great book and I recommend getting a copy regardless of your MySQL expertise.&lt;/p&gt;
&lt;h2 id="the-basics"&gt;The Basics&lt;/h2&gt;
&lt;p&gt;At seven hundred pages, this book has a wide scope that starts with background concepts like data normalization, proceeds into creating databases, SQL, and administration. And there are sections on programming with Node.js, working with Microsoft applications, loading data, DBA tasks, and logical backups. There are exercises at the end of the chapters with solutions at the end of the book.&lt;/p&gt;
&lt;p&gt;Writing such a book is a tremendous task and the authors need to be applauded as they have produced a great book.&lt;/p&gt;
&lt;h2 id="the-nitty-gritty"&gt;The Nitty-Gritty&lt;/h2&gt;
&lt;p&gt;MySQL is a complex product and introducing concepts with a fresh approach is hard to do but this book does it consistently. Complex topics like creating functions are explained thoroughly without being bogged down in minute details.&lt;/p&gt;
&lt;p&gt;Does it cover everything? Nope, and no book under a few thousand pages will ever do that (while keeping pace with product development). There are minor omissions, like constraint checks, which are still fairly new, but I would like to point you to the section on triggers, which is the clearest explanation of the subject I have found.&lt;/p&gt;
&lt;p&gt;The writing style is concise, the formatting easy on the eyes, and I am sure the book will be very popular.&lt;/p&gt;</content:encoded>
      <author>David Stokes</author>
      <category>blog</category>
      <category>books</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2022/5/TheMySQLWorkshop_hu_9d42aaa4854afa5a.jpg"/>
      <media:content url="https://percona.community/blog/2022/5/TheMySQLWorkshop_hu_ddc11a064cd4a3d2.jpg" medium="image"/>
    </item>
    <item>
      <title>Liquibase Data is Git for Databases</title>
      <link>https://percona.community/blog/2022/04/25/liquibase-data-is-git-for-databases/</link>
      <guid>https://percona.community/blog/2022/04/25/liquibase-data-is-git-for-databases/</guid>
      <pubDate>Mon, 25 Apr 2022 00:00:00 UTC</pubDate>
      <description>Author’s Note: Robert will be demoing Liquibase Data at Percona Live 2022 on Wednesday, May 18 at 11:50am. Add this presentation to your schedule.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Author’s Note: Robert will be demoing Liquibase Data at Percona Live 2022 on Wednesday, May 18 at 11:50am. &lt;a href="https://sched.co/10JOM" target="_blank" rel="noopener noreferrer"&gt;Add this presentation to your schedule.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Git is an amazing tool for collaboration — developers can work together to build better software faster. However, the usual Git workflow neglects the database. With &lt;a href="https://github.com/liquibase/liquibase-data" target="_blank" rel="noopener noreferrer"&gt;Liquibase Data&lt;/a&gt; we’re bringing git to the database so you can easily version containerized databases, share changes with team members, store versions in remote locations, and tag versions.&lt;/p&gt;
&lt;h2 id="the-vanilla-git-workflow"&gt;The Vanilla Git Workflow&lt;/h2&gt;
&lt;p&gt;The standard Git workflow is simple. A developer can &lt;code&gt;git init&lt;/code&gt; to create a local repository. Next, after making changes, &lt;code&gt;git commit&lt;/code&gt; creates a local version. Then, the developer pushes to a remote branch using &lt;code&gt;git push&lt;/code&gt;. Finally, another developer can &lt;code&gt;git pull&lt;/code&gt; to see the new code updates.&lt;/p&gt;
&lt;h2 id="liquibase-data-workflow"&gt;Liquibase Data Workflow&lt;/h2&gt;
&lt;p&gt;We created the same Git workflow in Liquibase Data. Using the &lt;a href="https://github.com/liquibase/liquibase-data" target="_blank" rel="noopener noreferrer"&gt;Liquibase Data extension&lt;/a&gt;, Liquibase users can initialize a new database in a Docker container using &lt;code&gt;liquibase data run&lt;/code&gt;. Which databases? ALL of them. All it requires is a database Docker image that has a volume mount for the data. Liquibase takes it from there. If you already run your development databases via Docker, you will find that Liquibase Data parallels the &lt;code&gt;docker run&lt;/code&gt; command.&lt;/p&gt;
&lt;p&gt;Here’s what you’ll be able to do:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Clone from remote repositories&lt;/li&gt;
&lt;li&gt;Make changes to the database&lt;/li&gt;
&lt;li&gt;Commit and push your changes to share with team members&lt;/li&gt;
&lt;li&gt;Tag commits&lt;/li&gt;
&lt;li&gt;Easily view the difference between two database commits to identify changes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our team thinks this will be useful for test data management and supporting developer database workflows.&lt;/p&gt;
&lt;p&gt;Just like you commit after changing your code, you can do the same with Liquibase Data. After you add data to your database or change the schema, run &lt;code&gt;liquibase data commit&lt;/code&gt;. Commands such as &lt;code&gt;push&lt;/code&gt;, &lt;code&gt;remote&lt;/code&gt;, and &lt;code&gt;log&lt;/code&gt; are also available.&lt;/p&gt;
&lt;h2 id="easily-compare-databases"&gt;Easily Compare Databases&lt;/h2&gt;
&lt;p&gt;Determining what has changed in your database schema can be very difficult. Liquibase Data makes it simple to find schema differences between commits using the &lt;code&gt;diff&lt;/code&gt; command. With Liquibase Data, the required database starts automatically for you to create the diff.&lt;/p&gt;
&lt;h2 id="watch-liquibase-data-demos"&gt;Watch Liquibase Data Demos&lt;/h2&gt;
&lt;p&gt;Robert Reeves, CTO of Liquibase, &lt;a href="https://www.youtube.com/watch?v=k4m2UCqddHo" target="_blank" rel="noopener noreferrer"&gt;demonstrates how to quickly provision a developer instance of MongoDB&lt;/a&gt;, make changes to MongoDB, and then commit the change. You’ll see how easy it is to roll your changes backward and forward.&lt;/p&gt;
&lt;p&gt;Check out our other Liquibase Data demos for &lt;a href="https://www.youtube.com/watch?v=AByPvVoWIXM" target="_blank" rel="noopener noreferrer"&gt;Oracle&lt;/a&gt; and &lt;a href="https://www.youtube.com/watch?v=gLub_7Fcnh4" target="_blank" rel="noopener noreferrer"&gt;SQL Server&lt;/a&gt;! Liquibase Data works with ANY database in a Docker Container.&lt;/p&gt;
&lt;h2 id="try-liquibase-data"&gt;Try Liquibase Data&lt;/h2&gt;
&lt;p&gt;We think Liquibase Data will be helpful for developers sharing databases among team members. Just imagine — you’ll be able to share datasets you’re working on early in the process and share a separate one later in the process. The distribution of valid test data amongst Dev and QA will speed up testing cycles and help find bugs sooner.&lt;/p&gt;
&lt;p&gt;Of course, we want to hear from you! Tell us what you would like to see in Liquibase Data and share with us how you are using it. Our &lt;a href="https://github.com/liquibase/liquibase-data/tree/main/beta" target="_blank" rel="noopener noreferrer"&gt;Open Beta program&lt;/a&gt; is a great way to experience the benefits and give us input to make it work even better. We have a tutorial that will walk you through, step by step, how to use Liquibase Data. Along the way, you will have an opportunity to provide your thoughts.&lt;/p&gt;
&lt;p&gt;Finally, all of us at Liquibase thank you for your support over the past 15 years of open source greatness. We could not have done it without you. And the best is yet to come!&lt;/p&gt;</content:encoded>
      <author>Robert Reeves</author>
      <category>blog</category>
      <category>PerconaLive</category>
      <category>PerconaLive2022</category>
      <category>DevOps</category>
      <media:thumbnail url="https://percona.community/blog/2022/4/liquibase-data-gitflow-580x296_hu_43885590fb372c6c.jpg"/>
      <media:content url="https://percona.community/blog/2022/4/liquibase-data-gitflow-580x296_hu_4e632644633ce561.jpg" medium="image"/>
    </item>
    <item>
      <title>A Quick Guide To Austin For Percona Live 2022 Attendees</title>
      <link>https://percona.community/blog/2022/04/11/percona-live-austin-guide/</link>
      <guid>https://percona.community/blog/2022/04/11/percona-live-austin-guide/</guid>
      <pubDate>Mon, 11 Apr 2022 00:00:00 UTC</pubDate>
      <description>Percona Live returns to Austin May 16th through the 18th, and attendees will find the city vibrant, charming, and weird. The semi-official motto for the city is ‘Keep Austin Weird’, and during your visit you will indeed see many of the residents working hard to do just that. Not in a bad way. Austin sits at the intersection of so many cultural, artistic, and lifestyle currents that there is always plenty happening at once; any dull moments will have to be an active choice on your part.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live&lt;/a&gt; returns again to Austin May 16th through the 18th will find the city vibrant, charming, and weird. The semi-official motto for the city is ‘Keep Austin Weird’ and during your visit you will indeed see many of the residents working hard to do just that. Not in a bad way. Austin is at the intersection of so many cultural, artistic, and lifestyle modes that there are a fair amount of many different things happening at the same time to ensure that any dull moments you have will have to be an active choice on your part.&lt;/p&gt;
&lt;p&gt;The following is a quick guide for those new to Austin or looking for activities for the days before or after the show.&lt;/p&gt;
&lt;h2 id="what-to-wear"&gt;What To Wear?&lt;/h2&gt;
&lt;p&gt;Austin in May averages 86F/30C (which is better than the August 96F/35C), so shorts, t-shirts, and comfortable shoes are a must. Bring sunscreen and water if you plan to spend time outdoors.&lt;/p&gt;
&lt;h2 id="what-to-eat"&gt;What to Eat?&lt;/h2&gt;
&lt;p&gt;The two main choices are barbeque and Tex-Mex. But you will find any type of cuisine you desire either in the restaurants or the food trucks (mostly on South Congress Street but found throughout the city). You may see many celebrities but remember in Austin that Matthew McConaughey is just another professor at the University of Texas and that Elon Musk builds pickup trucks.&lt;/p&gt;
&lt;h2 id="bbq"&gt;BBQ?&lt;/h2&gt;
&lt;p&gt;Barbecue is almost considered a religion in Texas, and you will find many recommendations on where to go (see &lt;a href="https://austin.eater.com/maps/best-barbecue-austin-restaurants" target="_blank" rel="noopener noreferrer"&gt;https://austin.eater.com/maps/best-barbecue-austin-restaurants&lt;/a&gt;), but everyone has their favorite. Major competitions are run each year to determine who is the best. My personal favorite, the Salt Lick, has two locations that are sadly outside Austin proper; they are known for moderating the heat of their barbecue pits with pecans, which adds a unique flavor. They do have a location at the airport too; the food is good, but the ambience is lacking with all the flight announcements.&lt;/p&gt;
&lt;p&gt;But the other places are pretty good too. Stubb’s, Franklin, and Black’s are all excellent. If you see a line of salivating people somewhat impatiently waiting to order at another place, join the queue. And it is okay to salivate too.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Big hint:&lt;/strong&gt; If you are not used to Texas sweet tea, start with half sweetened and half unsweetened until your gums and your dentist have time to adjust.&lt;/p&gt;
&lt;h2 id="tex-mex"&gt;Tex-Mex&lt;/h2&gt;
&lt;p&gt;This style of food is a tasty combination of Chihuahuan Mexican food and frontier-based ingredients with lots of cheese and chili. What started as simple staple foods made from the commodities available to settlers on the frontier has evolved into a tasty treat.&lt;br&gt;
Chuy’s original restaurant is a top pick and features themed rooms. Sadly, we have already missed Elvis Presley’s birthday, when all who dress like the King or his wife Priscilla dine free in the Elvis room. For the less decor-oriented, try Matt’s El Rancho.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Big tip:&lt;/strong&gt; You will generally get big portions, especially if you order fajitas or margaritas.&lt;/p&gt;
&lt;p&gt;For breakfast, go to Snooze AM for the pineapple upside-down pancakes or an omelet.&lt;/p&gt;
&lt;h2 id="museums"&gt;Museums&lt;/h2&gt;
&lt;p&gt;The Bob Bullock Museum, near the state capitol building, is the state’s official history museum. The Museum of the Weird is just as the name implies and, while not official, provides a look into the odder parts of Austin. Not too far away is the Alamo in San Antonio (near the Alamo is the Buckhorn Saloon, which has two floors of oddities that are weirder than the Museum of the Weird). The Museum of the Pacific War in Fredericksburg is a must for history fans. The Contemporary Austin is great for art fans, while the Texas Toy Museum will appeal to your inner child.&lt;/p&gt;
&lt;h2 id="outdoors-activity"&gt;Outdoors Activity&lt;/h2&gt;
&lt;p&gt;Swim in spring-fed Barton Springs, ride the bike trails, and hike your feet off before you rent a paddleboard to tour Lake Austin. There are lots of appealing options for the physically active, and you may actually run into an armadillo.&lt;/p&gt;
&lt;p&gt;You can rent inner tubes to float the nearby Guadalupe or Comal Rivers. Rent another inner tube for your cooler of drinks. Or visit the Schlitterbahn water park. All three are a short drive away and worth an extra day on your trip for time to spend with family or friends.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Big hint:&lt;/strong&gt; Stay hydrated as the heat is deceiving.&lt;/p&gt;
&lt;h2 id="other-activities"&gt;Other Activities&lt;/h2&gt;
&lt;p&gt;Gruene (pronounced ‘green’) Hall is the oldest dancehall in Texas and is the place where many top stars got their start. Currently there are no acts scheduled during Percona Live (well, I expect they will be at Percona Live learning about databases!), but fans of ZZ Top, George Strait, Willie Nelson, or Gregg Allman will relish the history of the place before heading to the Grist Mill for a meal. The dance hall itself has not changed much since being built in 1878, and they will open the side flaps when the dancers need fresh air.&lt;/p&gt;
&lt;p&gt;Sixth Street is the live music capital of Texas, and you will find any genre there. This is where Stevie Ray Vaughan rose to fame and where Willie Nelson rebuilt his career after leaving Nashville. Ear plugs are recommended but optional.&lt;/p&gt;
&lt;p&gt;Yes, the bats do fly out from under the Congress Street bridge at sunset, and it is amazing when millions of them emerge. At the last physical Percona Live they were shy and only a few appeared. I assume they were intimidated by having so many DBAs nearby.&lt;/p&gt;
&lt;p&gt;Austin is an awesome town, and not just for Percona Live itself. I have only touched the tip of the proverbial iceberg on things to see and do there. If you have questions, find me at Percona Live or email me at &lt;a href="mailto:david.stokes@percona.com"&gt;david.stokes@percona.com&lt;/a&gt;, and hopefully we can try one of the local craft brews together.&lt;/p&gt;</content:encoded>
      <author>David Stokes</author>
      <category>blog</category>
      <category>PerconaLive</category>
      <category>PerconaLive2022</category>
      <category>Conference</category>
      <media:thumbnail url="https://percona.community/blog/2022/4/Guide-PL-2022_hu_e1bd6c54bf4e82b.jpg"/>
      <media:content url="https://percona.community/blog/2022/4/Guide-PL-2022_hu_d41ab1d8990e6564.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Monitoring and Management 2.27.0 Preview Release</title>
      <link>https://percona.community/blog/2022/04/08/preview-release/</link>
      <guid>https://percona.community/blog/2022/04/08/preview-release/</guid>
      <pubDate>Fri, 08 Apr 2022 00:00:00 UTC</pubDate>
      <description>Percona Monitoring and Management 2.27.0 Preview Release Percona Monitoring and Management 2.27.0 is now available as a Preview Release.</description>
      <content:encoded>&lt;h2 id="percona-monitoring-and-management-2270-preview-release"&gt;Percona Monitoring and Management 2.27.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.27.0 is now available as a Preview Release.&lt;/p&gt;
&lt;p&gt;The PMM team really appreciates your feedback!&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments only&lt;/strong&gt;, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Known issues:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-9797" target="_blank" rel="noopener noreferrer"&gt;PMM-9797&lt;/a&gt; - Wrong Plot on Stat Panels for DB Conns and Disk Reads at Home Dashboard&lt;/li&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-9820" target="_blank" rel="noopener noreferrer"&gt;PMM-9820&lt;/a&gt; - QAN page disappeared after upgrade via UI&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Release Notes can be found &lt;a href="https://pmm-doc-release-pr-726.onrender.com/release-notes/2.27.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="pmm-server-docker"&gt;PMM server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;perconalab/pmm-server:2.27.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.27.0 by this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-3622.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable percona testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
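&lt;p&gt;On a Debian-based system, for example, that final step might look like the sketch below (this assumes the testing repository has been enabled with percona-release as shown above; adjust for your OS and package manager):&lt;/p&gt;

```shell
# Refresh package metadata after enabling the testing repository,
# then install the PMM client package (Debian/Ubuntu example).
sudo apt update
sudo apt install -y pmm2-client
```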
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3.amazonaws.com/PMM2-Server-2.27.0.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2.27.0.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact: &lt;code&gt;ami-05592e370cca655b9&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings at &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in open source! Contact us at &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;https://forums.percona.com/&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>Raspberry Pi Bullseye Percona Server 64bit</title>
      <link>https://percona.community/blog/2022/04/05/percona-server-raspberry-pi/</link>
      <guid>https://percona.community/blog/2022/04/05/percona-server-raspberry-pi/</guid>
      <pubDate>Tue, 05 Apr 2022 00:00:00 UTC</pubDate>
      <description>I love the Raspberry Pi, and I love Percona Server. The combination of the two can provide a nice home database. I have been running a Percona Server database since 2019 to hold all the weather information that I collect from several of my weather stations.</description>
      <content:encoded>&lt;p&gt;I love the Raspberry Pi, and I love Percona server. The combination of the two can provide a nice home database. I have been running a Percona Server database since 2019 to hold all the weather information, that I collect from several of my Weather Stations.&lt;/p&gt;
&lt;p&gt;I wrote my first blog post on installing Percona Server 5.7 on the Raspberry Pi 3+.&lt;/p&gt;
&lt;p&gt;You can read that blog post here:
&lt;a href="https://percona.community/blog/2019/08/01/how-to-build-a-percona-server-stack-on-a-raspberry-pi-3/" target="_blank" rel="noopener noreferrer"&gt;How to Build a Percona Server “Stack” on a Raspberry Pi 3+&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Fast forward to 2022 and we now have the resources to build Percona Server 8.0 64-bit on the Raspberry Pi. In this post I will cover building and installing Percona Server 8.0.29 and Percona XtraBackup 8.0.29.&lt;/p&gt;
&lt;p&gt;Prerequisites:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Raspberry Pi 3B+, 4 or 400 (any memory size will work).&lt;/li&gt;
&lt;li&gt;128GB or 256GB microSD card. Of course you can go bigger.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When installing Raspberry Pi OS on a Pi 4 or 400, make sure to choose the 64-bit image.
&lt;a href="https://raspberrytips.com/install-raspbian-raspberry-pi/" target="_blank" rel="noopener noreferrer"&gt;Install Raspberry Pi OS Bullseye on Raspberry Pi&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="the-builds"&gt;The Builds&lt;/h2&gt;
&lt;p&gt;One step I found that will help increase the speed and success of your build is adding a larger swap file.&lt;/p&gt;
&lt;p&gt;Create a new swap file. A 4GB swap file, created on the / partition, worked just fine for me.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo dd if=/dev/zero of=/swapfile4GB bs=1M count=4096
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mkswap /swapfile4GB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo swapon /swapfile4GB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chmod 0600 /swapfile4GB&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You will need to install the additional packages listed below:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt upgrade
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo apt install build-essential pkg-config cmake devscripts debconf debhelper automake bison ca-certificates \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;libcurl4-gnutls-dev libaio-dev libncurses-dev libssl-dev libtool libgcrypt20-dev zlib1g-dev lsb-release \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;python3-docutils build-essential rsync libdbd-mysql-perl libnuma1 socat librtmp-dev libtinfo5 liblz4-tool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;liblz4-1 liblz4-dev libldap2-dev libsasl2-dev libsasl2-modules-gssapi-mit libkrb5-dev apt-get \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;libreadline-dev libudev-dev libev-dev libev4 libprocps-dev&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s download Percona Server and some additional tools.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ wget https://downloads.percona.com/downloads/Percona-Server-LATEST/Percona-Server-8.0.29-21/source/tarball/percona-server-8.0.29-21.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ tar -zxvf percona-server-8.0.29-21.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ wget https://boostorg.jfrog.io/artifactory/main/release/1.77.0/source/boost_1_77_0.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ tar -zxvf boost_1_77_0.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ wget https://downloads.percona.com/downloads/Percona-XtraBackup-LATEST/Percona-XtraBackup-8.0.29-22/source/tarball/percona-xtrabackup-8.0.29-22.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ tar -zxvf percona-xtrabackup-8.0.29-22.tar.gz&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="build-percona-server"&gt;Build Percona Server&lt;/h2&gt;
&lt;p&gt;At the time of writing 8.0.29-21 is the current version. If you have a USB 3 external drive, you might find the build will perform better from that device. In my build I used a 500GB SSD drive.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cd percona-server-8.0.29-21
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mkdir arm64-build
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cd arm64-build
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cmake .. -DCMAKE_BUILD_TYPE=Release -DWITH_BOOST=/home/pi/boost_1_77_0 -DCMAKE_INSTALL_PREFIX=/usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo make -j2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo make install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;With the 4GB swap file you created above, you can use &lt;code&gt;make -j2&lt;/code&gt; for the compile. Depending on which Pi you are using, build time should be around 3 hours.&lt;/p&gt;
&lt;h2 id="build-xtrabackup"&gt;Build XtraBackup&lt;/h2&gt;
&lt;p&gt;At the time of writing 8.0.29-22 is the current version.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cd percona-xtrabackup-8.0.29-22
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mkdir arm64-build
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cd arm64-build
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cmake .. -DCMAKE_BUILD_TYPE=Release -DWITH_BOOST=$HOME/boost_1_77_0 -DCMAKE_INSTALL_PREFIX=/usr/local/xtrabackup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo make -j3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo make install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The builds are now complete. Since we built everything from source, there are a few last things that need to be done.&lt;/p&gt;
&lt;p&gt;We need to create the mysql user, set its home directory, and make /usr/local/mysql owned by the mysql user.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo useradd mysql -d /usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chown -R mysql:mysql /usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo mkdir -p /var/log/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo chown -R mysql:mysql /var/log/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;One last thing we need before starting MySQL for the first time is an /etc/my.cnf.&lt;/p&gt;
&lt;p&gt;Here is a sample you can work with.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo vi /etc/my.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Copy and paste the contents below into your my.cnf.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;character-set-server = utf8mb4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;port = 3306
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;socket = /usr/local/mysql/mysql.sock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pid-file = /usr/local/mysql/mysqld.pid
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;basedir = /usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;datadir = /data0/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tmpdir = /data0/mysql/tmp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;general_log_file = /var/log/mysql/mysql-general.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log-error = /var/log/mysql/mysqld.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log_file =/var/log/mysql/slow_query.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log = 0 # Slow query log off
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;expire_logs_days = 5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log_error_verbosity = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lower_case_table_names = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_allowed_packet = 32M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_connections = 50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_user_connections = 40
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;skip-external-locking
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;skip-name-resolve
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;table_open_cache=500
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;thread_cache_size=16
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;thread_pool_size=16
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_data_home_dir = /data0/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_group_home_dir = /data0/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_size = 2048M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_files_in_group = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_file_size = 128M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_buffer_size = 16M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_log_at_trx_commit = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_lock_wait_timeout = 50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_method = O_DIRECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_file_per_table = 1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You will want to adjust the following settings to match your needs.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;datadir = /your/data/location/&lt;/li&gt;
&lt;li&gt;innodb_data_home_dir = &lt;strong&gt;this should match your datadir&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;innodb_log_group_home_dir = &lt;strong&gt;this should match your datadir&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
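&lt;p&gt;Assuming the sample paths from the my.cnf above (the /data0/mysql layout shown there), you would also need to create the data and tmp directories and give the mysql user ownership of them before initializing the server:&lt;/p&gt;

```shell
# Create the datadir and tmpdir referenced in the sample my.cnf
# (adjust /data0/mysql if you chose a different location).
sudo mkdir -p /data0/mysql/data /data0/mysql/tmp
sudo chown -R mysql:mysql /data0/mysql
```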
&lt;p&gt;Now you will want to create a mysqld.service unit file in /lib/systemd/system:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo vi /lib/systemd/system/mysqld.service&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add the below contents to your mysqld.service.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Unit]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Description=Percona Server 8.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;After=syslog.target
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;After=network.target
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Install]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WantedBy=multi-user.target
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Service]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;User=mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Group=mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;TimeoutSec=300
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WorkingDirectory=/usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Restart=on-failure
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#RestartPreventExitStatus=1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PrivateTmp=true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s set up Percona Server to stop and start with the OS.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo systemctl enable mysqld.Service&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
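After editing the unit file, systemd needs to reload its configuration before the service can be enabled and started. A minimal sketch (assuming the unit file above was saved as /lib/systemd/system/mysqld.service):

```sh
# Reload systemd so it picks up the new or edited unit file
sudo systemctl daemon-reload

# Start Percona Server now and confirm it came up
sudo systemctl start mysqld
systemctl status mysqld
```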
&lt;h2 id="finish-your-build"&gt;Finish your build.&lt;/h2&gt;
&lt;p&gt;Once you have completed all the above steps, you can follow the blog post
&lt;a href="https://percona.community/blog/2021/09/06/lost-art-of-database-server-initialization/" target="_blank" rel="noopener noreferrer"&gt;The lost art of Database Server Initialization&lt;/a&gt;, starting at step 4.&lt;/p&gt;
&lt;p&gt;That’s it. You now have Percona Server 8.0 running on your Raspberry Pi 4.&lt;/p&gt;
&lt;p&gt;This process takes some patience, but if you like the Raspberry Pi and Percona Server, it is well worth the time.&lt;/p&gt;
&lt;h2 id="now-for-some-screen-shots"&gt;Now for some screen shots.&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Percona Server:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/4/percona-systemctl-status.png" alt="Percona Status" /&gt;&lt;/figure&gt;&lt;/li&gt;
&lt;li&gt;Command Line Interface:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/4/percona-server-running.png" alt="CLI Example" /&gt;&lt;/figure&gt;&lt;/li&gt;
&lt;li&gt;XtraBackup complete:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2022/4/percona-xtrabackup.png" alt="Complete Backup" /&gt;&lt;/figure&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Percona</category>
      <category>MySQL</category>
      <category>64bit</category>
      <category>Raspberry Pi</category>
      <category>Bullseye</category>
      <media:thumbnail url="https://percona.community/blog/2022/4/bullseye_hu_aa603fab7773c2f0.jpg"/>
      <media:content url="https://percona.community/blog/2022/4/bullseye_hu_a6f9c8fad6527035.jpg" medium="image"/>
    </item>
    <item>
      <title>The Ins and Outs of PostgreSQL Default Configuration Tuning</title>
      <link>https://percona.community/blog/2022/03/31/the-ins-and-outs-of-postgresql-default-configuration-tuning/</link>
      <guid>https://percona.community/blog/2022/03/31/the-ins-and-outs-of-postgresql-default-configuration-tuning/</guid>
      <pubDate>Thu, 31 Mar 2022 00:00:00 UTC</pubDate>
      <description>If you’re wondering what the optimal settings for a newly installed Postgres database are, here are some simple steps to take to tune it right from the start. Matt Yonkovit discussed them with Charly Batista, Postgres Tech Lead at Percona during the live-streamed meetup. Watch the recording to see how Charly tunes a default installation of Percona Distribution for PostgreSQL 13.</description>
      <content:encoded>&lt;p&gt;If you’re wondering what the optimal settings for a newly installed Postgres database are, here are some simple steps to take to tune it right from the start. Matt Yonkovit discussed them with Charly Batista, Postgres Tech Lead at Percona during the live-streamed meetup. Watch the &lt;a href="https://percona.community/events/percona-meetups/2022-01-27-percona-meetup-for-postgresql/" target="_blank" rel="noopener noreferrer"&gt;recording&lt;/a&gt; to see how Charly tunes a default installation of Percona Distribution for PostgreSQL 13.&lt;/p&gt;
&lt;p&gt;Most of the default settings were defined a long, long time ago, when one gigabyte of RAM was very expensive, so they are not optimal. There are lots of things we could change, but let’s look at the basics that make your box more reliable and improve both speed and performance. They can be divided into two groups: OS settings (Linux kernel) and database settings.&lt;/p&gt;
&lt;h2 id="os-linux-settings"&gt;OS (Linux) Settings&lt;/h2&gt;
&lt;p&gt;No matter what your workload is, these are the things you want to set from the operating system perspective right out of the gate to make your box healthier.&lt;/p&gt;
&lt;h3 id="swap-and-swappiness"&gt;Swap and Swappiness&lt;/h3&gt;
&lt;p&gt;Allocate swap to prevent the kernel from killing the database, but keep in mind that swappiness should not be too high. What is swappiness? It tells the kernel how aggressively it should use the swap. Change it to 1 so the kernel uses swap only when it is really necessary.
For swap, we need to create a file and then allocate it. For the swappiness, we can use sysctl to change it on our box.&lt;/p&gt;
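Those two steps can be sketched like this (the 2 GB size and the swap file path are examples; adjust them for your box):

```sh
# Create and enable a 2 GB swap file (example size)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Set swappiness to 1 for the running kernel...
sudo sysctl vm.swappiness=1
# ...and persist it across reboots
echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf
```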
&lt;h3 id="transparent-huge-pages"&gt;Transparent Huge Pages&lt;/h3&gt;
&lt;p&gt;Transparent huge pages are enabled by default in the Linux kernel, and that is not a good thing for databases like Postgres. They can cause a lot of memory fragmentation, which can slow down your database and also cause memory problems. For example, you may need one gigabyte of memory for one operation, and even though one gigabyte is available, it is split into small pieces, so that gigabyte cannot be allocated. The first thing the kernel will then try to do is swap, and eventually it may just kill the database. So transparent huge pages can lead to performance issues: the memory is there, but it cannot be allocated.&lt;/p&gt;
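One common way to disable transparent huge pages until the next reboot is via sysfs (a sketch; for a persistent change use your distribution's boot parameters or a systemd unit):

```sh
# Check the current setting; the active value is shown in [brackets]
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP until the next reboot
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```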
&lt;h3 id="cpu-speed"&gt;CPU Speed&lt;/h3&gt;
&lt;p&gt;Make sure the CPU runs at its maximum speed. Find the CPU governor file and disable the on-demand policy. For a database, we don’t want the frequency adjusted on demand; we always want it as fast as possible.&lt;/p&gt;
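On systems using the cpufreq interface, switching to the performance governor can be sketched like this (sysfs paths vary by kernel and driver; verify them on your box):

```sh
# See the current governor for CPU 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Set the performance governor on all CPUs
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```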
&lt;h2 id="postgres-settings"&gt;Postgres Settings&lt;/h2&gt;
&lt;p&gt;Here are some database settings that you can change to optimize your database regardless of the workload you have.&lt;/p&gt;
&lt;h3 id="shared-buffers-value"&gt;Shared Buffers Value&lt;/h3&gt;
&lt;p&gt;Change the value of shared buffers to 8 GB. You may ask: why? When we talk about MySQL, a good value for the buffer pool is 50% to 70% of your memory, because that gives you room to grow. Typically, you want all your hot data, the data accessed at high frequency, in shared memory. But unlike MySQL, Postgres relies a lot on OS buffers. With Postgres, if you have a write-intensive workload, you might want your shared buffers much smaller, around 5% of the memory you have, because most writes are going to go through the kernel buffers anyway.&lt;/p&gt;
&lt;h3 id="random-page-cost"&gt;Random Page Cost&lt;/h3&gt;
&lt;p&gt;Make sure you get the random page cost right. The random page cost is a setting we think can be a big win, simply because the default is so high compared to the sequential page cost; lowering it is usually a good idea. Random page cost is an input to the cost optimizer: by default it makes random pages look more costly, so the planner favors sequential scans.&lt;/p&gt;
&lt;p&gt;To see why, we need to understand how Postgres stores data and indexes. MySQL uses clustered storage; the data Postgres stores is not clustered, it doesn’t organize the data on disk in index order, it just keeps it as is. Lowering the random page cost improves index usage because the cost optimizer will then prefer random pages, that is, index scans, over sequential scans. And it can improve or hurt performance a lot.&lt;/p&gt;
&lt;p&gt;Note that to get the random page cost right, you need to understand what kind of disks you have. If you are on AWS, you likely have SSDs or NVMe drives. They are really fast, and on them the cost of a random page is almost the same as the cost of a sequential page, which is why the gap between the two settings should not be so high.&lt;/p&gt;
&lt;h3 id="synchronous-commit"&gt;Synchronous Commit&lt;/h3&gt;
&lt;p&gt;One thing we can change in Postgres is the synchronous commit. With synchronous commit on, the database forces every commit or transaction to be flushed through the kernel cache to disk; relaxing it is a trade-off that can improve performance, but you lose a little reliability.&lt;/p&gt;
&lt;p&gt;But there is one setting in Postgres that you should never change, even to improve performance: &lt;strong&gt;fsync&lt;/strong&gt;. Just never change it. By default it is on, and it sits on top of the synchronous commit. fsync instructs the kernel to flush data to the disk for crash safety. If you disable fsync, you might gain some performance, but your writes to the disk are no longer safe enough and you can end up with disk corruption. You would essentially be relying on the disk’s own cache and internals to do everything for you, instead of forcing writes to be consistent. It’s fine to tune and play around with the synchronous commit, but not with fsync.&lt;/p&gt;
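Putting the database settings above together, a postgresql.conf fragment might look like this (the values are illustrative sketches, not recommendations for every workload):

```ini
# postgresql.conf (illustrative values)
shared_buffers = 8GB          # much smaller (~5% of RAM) for write-heavy loads
random_page_cost = 1.1        # close to seq_page_cost on SSD/NVMe
synchronous_commit = off      # trade a little durability for speed
fsync = on                    # never turn this off
```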
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Everything above is independent of your workload. But there are no strict rules saying that you should, for example, use eight gigabytes of shared buffers if you have 32 gigabytes of memory. After you make all of those changes, come back, run a load test, and check your performance: you might get worse performance instead of better, so always verify.&lt;/p&gt;</content:encoded>
      <author>Aleksandra Abramova</author>
      <category>Postgres</category>
      <category>PostgreSQL</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2022/3/Meetups-PG-1_hu_366faad9efd5803d.jpg"/>
      <media:content url="https://percona.community/blog/2022/3/Meetups-PG-1_hu_4d03f38b7e3f8466.jpg" medium="image"/>
    </item>
    <item>
      <title>How long do you keep the metrics in PMM?</title>
      <link>https://percona.community/blog/2022/02/11/poll-metrics-keep/</link>
      <guid>https://percona.community/blog/2022/02/11/poll-metrics-keep/</guid>
      <pubDate>Fri, 11 Feb 2022 00:00:00 UTC</pubDate>
      <description>Hello everyone! We are indeed excited to announce that the new release of VictoriaMetrics has many exciting features, one of them being downsampling.</description>
      <content:encoded>&lt;p&gt;Hello everyone! We are indeed excited to announce that the new release of VictoriaMetrics has many exciting features, one of them being &lt;a href="https://docs.victoriametrics.com/#downsampling" target="_blank" rel="noopener noreferrer"&gt;downsampling&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Downsampling helps reduce disk space usage and improves query performance for big, long time series, since it is applied independently to each time series. However, this feature is only effective with a large number of samples per series.&lt;/p&gt;
&lt;p&gt;As we are keen on implementing downsampling in our future releases, we would like to understand how long you keep your metrics in PMM. Please go to the &lt;a href="https://forums.percona.com/t/how-long-do-you-keep-the-metrics-in-pmm/14236" target="_blank" rel="noopener noreferrer"&gt;Poll&lt;/a&gt; page and provide your inputs.&lt;/p&gt;
&lt;p&gt;We appreciate your help. Thank You!&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in OpenSource! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Anton Bystrov</author>
      <category>VictoriaMetrics</category>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>5 Steps to Improve Performance of Default MySQL Installation</title>
      <link>https://percona.community/blog/2022/01/27/5-steps-to-improve-performance-of-default-mysql-installation/</link>
      <guid>https://percona.community/blog/2022/01/27/5-steps-to-improve-performance-of-default-mysql-installation/</guid>
      <pubDate>Thu, 27 Jan 2022 00:00:00 UTC</pubDate>
      <description>Let’s say you have a fresh MySQL installation. Are there any possible steps to improve performance right away? Yes, there are!</description>
      <content:encoded>&lt;p&gt;Let’s say you have a fresh MySQL installation. Are there any possible steps to improve performance right away? Yes, there are!&lt;/p&gt;
&lt;p&gt;Recently, Marcos Albe (Principal Support Engineer, Percona) did an &lt;a href="https://percona.community/events/percona-meetups/2022-01-14-percona-meetup-for-mysql-january-2022/" target="_blank" rel="noopener noreferrer"&gt;online tuning&lt;/a&gt; on the MySQL Meetup hosted by Matt Yonkovit (Head of Open Source Strategy, Percona). Here are some steps you can consider to make your fresh MySQL installation to run better right from the start.&lt;/p&gt;
&lt;p&gt;So, we have a very basic default MySQL installation with some workload. It is connected to PMM and the slow query log is turned on, but it is largely unconfigured. Here are the actions Marcos took to set up the new system with reasonable defaults from the beginning, plus some reactive configuration: go through the workload, observe bottlenecks, and then configure to avoid them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1. Rate Limit for Slow Queries&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Go to the MySQL Summary Dashboard in PMM, find your instance, and set a rate limit for the slow query log instead of just setting the slow query time to zero. Once you get to 3000-4000 queries per second, logging everything can start impacting performance in a way that shows up in the latency graphs. The thing is that while it is important to collect as many query details as possible, you don’t want to collect too many, because that can hurt performance and have the opposite effect of what you’re trying to achieve.&lt;/p&gt;
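In Percona Server this can be sketched with the slow-log rate-limit variable (verify the variable names against the documentation for your server version):

```sh
# Capture every query as a slow-log candidate...
mysql -e "SET GLOBAL long_query_time = 0;"
# ...but log only a 1-in-100 sample to limit overhead
mysql -e "SET GLOBAL log_slow_rate_limit = 100;"
```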
&lt;p&gt;&lt;strong&gt;Step 2. Spikes&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Go down and toggle the metrics off and on for each data series in the graphs so you can see the spikes at each magnitude. It allows you to find bad query patterns and under-provisioned or over-provisioned resources.&lt;/p&gt;
&lt;p&gt;Think of it as workload being the light, MySQL being the prism and the metrics being the reflection of the light.&lt;/p&gt;
&lt;p&gt;Doing this, you can try different changes, like a slightly larger buffer size or a bigger thread cache. And as you go through the metrics, look at the values in the configuration, at the workload, and at the actual work being done to find out whether your hypothesis is correct.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/1/move_hu_a7820a12e73872ca.png 480w, https://percona.community/blog/2022/1/move_hu_396b32e948dbc00e.png 768w, https://percona.community/blog/2022/1/move_hu_bba934b29a032821.png 1400w"
src="https://percona.community/blog/2022/1/move.png" alt="Spikes" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3. Buffer Pool Size&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Increase the buffer pool size. It is probably the most used and most recommended setting.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 4. Redo Log Size&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Increase the redo log size and restart the instance. Make the redo log file as large as reasonably possible. What is reasonable? An amount that allows the server to keep writing at its peak rate for the duration of your big workload; the purpose is to allow more dirty pages during heavy write periods. The only thing to fear here is recovery time. Do some testing to see whether the recovery times are acceptable, just as you do for backups. If the recovery time is unacceptable, you should consider an HA setup, semi-synchronous or virtually synchronous, where you can fail over to the next instance when this one crashes. You could also get faster drives, or you could try to convince your developers to write less.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 5. InnoDB IO Capacity&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Set InnoDB IO capacity to 200 unless you have proof you need more; otherwise, you are just forcing the flushing to happen too early. The thing is that you want to keep dirty pages: dirty pages are a performance optimization. Imagine that you update the views counter of a popular video 100 times per second. If you have a very high IO capacity, you will probably write that row to disk 50 times per second. If you have a smaller capacity, you will probably write it once every few seconds, doing only one write for hundreds of updates, because all the rest stayed in memory and in the redo log.&lt;/p&gt;
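Steps 3-5 can be sketched as a my.cnf fragment (values are illustrative; size the buffer pool and redo log for your own memory and write rate):

```ini
[mysqld]
innodb_buffer_pool_size = 8G    # step 3: the classic first setting to raise
innodb_log_file_size    = 2G    # step 4: large redo log; requires a restart
innodb_io_capacity      = 200   # step 5: keep the default unless proven otherwise
```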
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2022/1/innodb_hu_55d3fb89c55056ca.png 480w, https://percona.community/blog/2022/1/innodb_hu_bcff448b89085994.png 768w, https://percona.community/blog/2022/1/innodb_hu_763774e0526e18b1.png 1400w"
src="https://percona.community/blog/2022/1/innodb.png" alt="InnoDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you want to watch the video of the meetup and see how Marcos tuned the installation, it is always available on the &lt;a href="https://percona.community/events/percona-meetups/2022-01-14-percona-meetup-for-mysql-january-2022/" target="_blank" rel="noopener noreferrer"&gt;Community Website&lt;/a&gt;.
The meetups for MySQL, PostgreSQL, PMM, and MongoDB are regularly live-streamed. Stay tuned to &lt;a href="https://percona.community/events/percona-meetups/" target="_blank" rel="noopener noreferrer"&gt;announcements&lt;/a&gt; and feel free to join!&lt;/p&gt;</content:encoded>
      <category>MySQL</category>
      <category>tuning</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/export-data-to-JSON-from-MySQL_hu_42c14ff7c0d70c61.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/export-data-to-JSON-from-MySQL_hu_db9c8048c3d6f089.jpg" medium="image"/>
    </item>
    <item>
      <title>PMM 2.26.0 Preview Release</title>
      <link>https://percona.community/blog/2022/01/27/preview-release-2-26/</link>
      <guid>https://percona.community/blog/2022/01/27/preview-release-2-26/</guid>
      <pubDate>Thu, 27 Jan 2022 00:00:00 UTC</pubDate>
      <description>PMM 2.26.0 Preview Release Percona Monitoring and Management 2.26.0 is now available as a Preview Release.</description>
      <content:encoded>&lt;h2 id="pmm-2260-preview-release"&gt;PMM 2.26.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.26.0 is now available as a Preview Release.&lt;/p&gt;
&lt;p&gt;The PMM team really appreciates your feedback!&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments&lt;/strong&gt; only, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release Notes can be found &lt;a href="https://github.com/percona/pmm-doc/blob/bfc10bc70028af54e5f45a412010c3b301685750/docs/release-notes/2.26.0.md" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker"&gt;PMM server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag: &lt;a href="https://hub.docker.com/layers/perconalab/pmm-server/2.26.0-rc/" target="_blank" rel="noopener noreferrer"&gt;perconalab/pmm-server:2.26.0-rc&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.26.0 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-3413.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable original testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via the package manager.&lt;/p&gt;
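For example, on Debian/Ubuntu or RHEL-based systems (assuming the percona-release command above has already enabled the testing repository):

```sh
# Debian/Ubuntu
sudo apt update && sudo apt install pmm2-client

# RHEL/CentOS
sudo yum install pmm2-client
```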
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3-website-us-east-1.amazonaws.com/PMM2-Server-2022-01-27-1524.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2022-01-27-1524.ova&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in OpenSource! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Rasika Chivate</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/blog/2022/1/preview_226_hu_a4515b2b4583e574.jpg"/>
      <media:content url="https://percona.community/blog/2022/1/preview_226_hu_48fb868323cf9c7d.jpg" medium="image"/>
    </item>
    <item>
      <title>How to replace `docker` with `podman` for PMM development</title>
      <link>https://percona.community/blog/2021/12/27/replace-docker-with-podman-for-pmm-dev/</link>
      <guid>https://percona.community/blog/2021/12/27/replace-docker-with-podman-for-pmm-dev/</guid>
      <pubDate>Mon, 27 Dec 2021 00:00:00 UTC</pubDate>
      <description>What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI (Open Container Initiative) Containers on your Linux System. Containers can either be run as root or in rootless mode. More details here.</description>
      <content:encoded>&lt;p&gt;What is &lt;a href="https://podman.io/" target="_blank" rel="noopener noreferrer"&gt;Podman&lt;/a&gt;? Podman is a daemonless container engine for developing, managing, and running OCI (&lt;a href="https://opencontainers.org/" target="_blank" rel="noopener noreferrer"&gt;Open Container Initiative&lt;/a&gt;) Containers on your Linux System. Containers can either be run as root or in rootless mode. More details &lt;a href="https://podman.io/whatis.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also check out the &lt;a href="https://kubernetespodcast.com/episode/164-podman/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Podcast&lt;/a&gt; to learn more about &lt;code&gt;podman&lt;/code&gt; and listen to its creators.&lt;/p&gt;
&lt;p&gt;Why replace it? Especially in development, I want the simplest possible solution; I don’t need an additional daemon running, or anything running with elevated privileges. It is also much closer to my personal understanding of how running containers should work.&lt;/p&gt;
&lt;p&gt;It looks quite possible on Linux, but the experience may differ on macOS or Windows.&lt;/p&gt;
&lt;p&gt;What is described below is strictly for development and not intended for &lt;strong&gt;production&lt;/strong&gt;! &lt;strong&gt;Podman is currently not supported for production use with PMM&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;I will use the Fedora 35 distro in the examples below. First, let’s install &lt;code&gt;podman&lt;/code&gt; and start the needed tools:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo dnf install podman docker-compose
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ systemctl --user start podman.socket&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;we still need &lt;code&gt;docker-compose&lt;/code&gt; as most of PMM tooling is built around it&lt;/li&gt;
&lt;li&gt;starting &lt;code&gt;podman.socket&lt;/code&gt; so compose would actually talk to &lt;code&gt;podman&lt;/code&gt; instead of &lt;code&gt;docker&lt;/code&gt; socket&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="pmm-managed"&gt;pmm-managed&lt;/h2&gt;
&lt;p&gt;First, let’s try to compile and run &lt;code&gt;pmm-managed&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="podmansocket"&gt;podman.socket&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;"/usr/lib/python3.10/site-packages/docker/transport/unixconn.py"&lt;/span&gt;, line 30, in connect
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sock.connect&lt;span class="o"&gt;(&lt;/span&gt;self.unix_socket&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FileNotFoundError: &lt;span class="o"&gt;[&lt;/span&gt;Errno 2&lt;span class="o"&gt;]&lt;/span&gt; No such file or directory
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ systemctl --user status podman.socket
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;● podman.socket - Podman API Socket
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/usr/lib/systemd/user/podman.socket&lt;span class="p"&gt;;&lt;/span&gt; disabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: disabled&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Active: active &lt;span class="o"&gt;(&lt;/span&gt;listening&lt;span class="o"&gt;)&lt;/span&gt; since Wed 2021-12-22 22:50:33 CET&lt;span class="p"&gt;;&lt;/span&gt; 1h 12min ago
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Triggers: ● podman.service
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Docs: man:podman-system-service&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Listen: /run/user/1000/podman/podman.sock &lt;span class="o"&gt;(&lt;/span&gt;Stream&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/podman.socket
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;unix:///run/user/1000/podman/podman.sock make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="c1"&gt;# ^^^that or exporting env would get us to the next stage&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;docker-compose&lt;/code&gt;, which is used to bring up the environment, couldn’t connect to the Docker daemon and therefore failed. There is an environment variable to point it at the right socket, so let’s find out the socket path and set it.&lt;/p&gt;
&lt;p&gt;Set that variable in your environment (&lt;code&gt;.bashrc&lt;/code&gt; or similar); here I set it in the current session:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;unix:///run/user/1000/podman/podman.sock&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="short-name-image-resolution"&gt;short-name image resolution&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Pulling pmm-managed-server ... error
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: &lt;span class="k"&gt;for&lt;/span&gt; pmm-managed-server failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;unix:///run/podman/podman.sock &lt;span class="nv"&gt;PMM_SERVER_IMAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker.io/perconalab/pmm-server:dev-latest make env-up&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now it failed because the system doesn’t accept short names for images, but there is another environment variable for that: &lt;code&gt;PMM_SERVER_IMAGE&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Short image name resolution can be tuned in the system; &lt;code&gt;/etc/containers/registries.conf&lt;/code&gt; says:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;For more information on this configuration file, see containers-registries.conf(5).
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# We recommend always using fully qualified image names including the registry
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# server (full dns name), namespace, image name, and tag
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# (e.g., registry.redhat.io/ubi8/ubi:latest). Pulling by digest (i.e.,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# quay.io/repository/name@digest) further eliminates the ambiguity of tags.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# When using short names, there is always an inherent risk that the image being
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# pulled could be spoofed. For example, a user wants to pull an image named
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# `foobar` from a registry and expects it to come from myregistry.com. If
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# myregistry.com is not first in the search list, an attacker could place a
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# different `foobar` image at a registry earlier in the search list. The user
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# would accidentally pull and run the attacker's image and code rather than the
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# intended content. We recommend only adding registries which are completely
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# trusted (i.e., registries which don't allow unknown or anonymous users to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# create accounts with arbitrary names). This will prevent an image from being
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# spoofed, squatted or otherwise made insecure. If it is necessary to use one
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# of these registries, it should be added at the end of the list.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The way to go is to define aliases, for example in &lt;code&gt;/etc/containers/registries.conf.d/001-shortnames-den.conf&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[aliases]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # docker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "perconalab/pmm-server" = "docker.io/perconalab/pmm-server"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "goreleaser/goreleaser" = "docker.io/goreleaser/goreleaser"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "moby/buildkit" = "docker.io/moby/buildkit"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; "mongo" = "docker.io/library/mongo"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This way we don’t need to set &lt;code&gt;PMM_SERVER_IMAGE&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Also note the other aliases that I added as I progressed through this experiment; I needed them all later.&lt;/p&gt;
&lt;h3 id="privileged-ports"&gt;privileged ports&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: &lt;span class="k"&gt;for&lt;/span&gt; pmm-managed-server Cannot start service pmm-managed-server: rootlessport cannot expose privileged port 80, you can add &lt;span class="s1"&gt;'net.ipv4.ip_unprivileged_port_start=80'&lt;/span&gt; to /etc/sysctl.conf &lt;span class="o"&gt;(&lt;/span&gt;currently 1024&lt;span class="o"&gt;)&lt;/span&gt;, or choose a larger port number &lt;span class="o"&gt;(&lt;/span&gt;&gt;&lt;span class="o"&gt;=&lt;/span&gt; 1024&lt;span class="o"&gt;)&lt;/span&gt;: listen tcp 127.0.0.1:80: bind: permission denied
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;compose.parallel.parallel_execute_iter: Failed: &lt;Service: pmm-managed-server&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;compose.parallel.feed_queue: Pending: set&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: &lt;span class="k"&gt;for&lt;/span&gt; pmm-managed-server Cannot start service pmm-managed-server: rootlessport cannot expose privileged port 80, you can add &lt;span class="s1"&gt;'net.ipv4.ip_unprivileged_port_start=80'&lt;/span&gt; to /etc/sysctl.conf &lt;span class="o"&gt;(&lt;/span&gt;currently 1024&lt;span class="o"&gt;)&lt;/span&gt;, or choose a larger port number &lt;span class="o"&gt;(&lt;/span&gt;&gt;&lt;span class="o"&gt;=&lt;/span&gt; 1024&lt;span class="o"&gt;)&lt;/span&gt;: listen tcp 127.0.0.1:80: bind: permission denied
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: compose.cli.main.exit_with_metrics: Encountered errors &lt;span class="k"&gt;while&lt;/span&gt; bringing up the project.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make: *** &lt;span class="o"&gt;[&lt;/span&gt;Makefile:9: env-compose-up&lt;span class="o"&gt;]&lt;/span&gt; Error &lt;span class="m"&gt;1&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;OK, this is a common issue with rootless containers: the system won’t allow them to bind privileged ports. It can either be tuned as suggested in the error message, or we can simply bind to unprivileged ports:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="m"&gt;127.0.0.1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="m"&gt;40443&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# For headless delve&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This is fine for development purposes. In production, the container should either run under a privileged user, or there should be a proxy in front of it.&lt;/p&gt;
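&lt;p&gt;If you do need the container itself to bind port 80, the alternative suggested in the error message is to lower the unprivileged port threshold system-wide. A minimal sketch (the drop-in file name here is just an example, and note this change affects all users on the host):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/sysctl.d/99-unprivileged-ports.conf
# Let unprivileged processes bind ports from 80 upwards
net.ipv4.ip_unprivileged_port_start=80
# apply with: sudo sysctl --system&lt;/code&gt;&lt;/pre&gt;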
&lt;h3 id="security-opt-parameter"&gt;security-opt parameter&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: &lt;span class="k"&gt;for&lt;/span&gt; pmm-managed-server Cannot create container &lt;span class="k"&gt;for&lt;/span&gt; service pmm-managed-server: fill out specgen: invalid --security-opt 1: &lt;span class="s2"&gt;"seccomp:unconfined"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;https://github.com/containers/podman/blob/7dabcbd7bcf78f3b5d310ed547801106da382618/pkg/specgenutil/specgen.go#L544&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;OK, that is more interesting. In the &lt;code&gt;pmm-managed&lt;/code&gt; compose file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;security_opt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;seccomp:unconfined&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I googled it and found &lt;a href="https://github.com/containers/podman-compose/commit/bbaa7867399b91255859b959535fedd7c20daacc" target="_blank" rel="noopener noreferrer"&gt;this fix&lt;/a&gt; for &lt;code&gt;podman-compose&lt;/code&gt;, where they simply replaced &lt;code&gt;:&lt;/code&gt; with &lt;code&gt;=&lt;/code&gt;.
If I try that, it works:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;security_opt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;seccomp=unconfined&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now the option is passed correctly and Podman is happy.&lt;/p&gt;
&lt;p&gt;Docker would probably be happy as well, as it supports both &lt;code&gt;:&lt;/code&gt; and &lt;code&gt;=&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/cli/blob/9de1b162f/cli/command/container/opts.go#L673" target="_blank" rel="noopener noreferrer"&gt;https://github.com/docker/cli/blob/9de1b162f/cli/command/container/opts.go#L673&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/compose/blob/a9e8164a8d2796847c83a38a2f7cd9f19a13b940/pkg/compose/create.go#L401" target="_blank" rel="noopener noreferrer"&gt;https://github.com/docker/compose/blob/a9e8164a8d2796847c83a38a2f7cd9f19a13b940/pkg/compose/create.go#L401&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It looks like the developers weren’t sure which one was correct, or there was no standard when &lt;code&gt;:&lt;/code&gt; support was added. But &lt;code&gt;=&lt;/code&gt; appears to be the correct one, so we need to test it with Docker and make the change.&lt;/p&gt;
&lt;p&gt;Below, I have changed the compose file to use &lt;code&gt;=&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="makefile-not-parametrized"&gt;Makefile not parametrized&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating pmm-managed-server ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;compose.parallel.feed_queue: Pending: set&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;compose.parallel.parallel_execute_iter: Finished processing: &lt;Service: pmm-managed-server&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;compose.parallel.feed_queue: Pending: set&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it --workdir&lt;span class="o"&gt;=&lt;/span&gt;/root/go/src/github.com/percona/pmm-managed pmm-managed-server .devcontainer/setup.py
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make: docker: No such file or directory
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make: *** &lt;span class="o"&gt;[&lt;/span&gt;Makefile:12: env-devcontainer&lt;span class="o"&gt;]&lt;/span&gt; Error &lt;span class="m"&gt;127&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now it couldn’t find the &lt;code&gt;docker&lt;/code&gt; executable, which is hardcoded in the Makefile:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;env-devcontainer:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; docker exec -it --workdir=/root/go/src/github.com/percona/pmm-managed pmm-managed-server .devcontainer/setup.py&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So we can’t just alias it in bash; we need a symlink:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sudo ln -s /usr/bin/podman /usr/bin/docker&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Another way to do it is to use a variable in the &lt;code&gt;Makefile&lt;/code&gt; so the executable can be passed in as a parameter.&lt;/p&gt;
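&lt;p&gt;Such parametrization could look like the sketch below (the &lt;code&gt;DOCKER&lt;/code&gt; variable name is my own example, not something from the actual Makefile):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Defaults to docker; override with: make DOCKER=podman env-devcontainer
DOCKER ?= docker

env-devcontainer:
	$(DOCKER) exec -it --workdir=/root/go/src/github.com/percona/pmm-managed pmm-managed-server .devcontainer/setup.py&lt;/code&gt;&lt;/pre&gt;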
&lt;h3 id="success"&gt;Success&lt;/h3&gt;
&lt;p&gt;Implementing all of the above:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ make env-up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&gt; supervisorctl start pmm-managed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-managed: started
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Done in 129.057330132&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Actually, not that bad. What we have done:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;prepared system environment: socket, env var, link, aliases&lt;/li&gt;
&lt;li&gt;fixed minor non-standard parameter&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of that needs to be done only once; after that, there is no difference running Podman, except that it runs in user mode and doesn’t require a privileged daemon ;-)&lt;/p&gt;
&lt;h2 id="mongodb_exporter"&gt;mongodb_exporter&lt;/h2&gt;
&lt;p&gt;Let’s test whether we can build it using &lt;code&gt;goreleaser&lt;/code&gt; with &lt;code&gt;podman&lt;/code&gt;, and also try to bring up a more complex testing environment with &lt;code&gt;docker-compose&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="goreleaser"&gt;goreleaser&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://goreleaser.com/install/#running-with-docker" target="_blank" rel="noopener noreferrer"&gt;https://goreleaser.com/install/#running-with-docker&lt;/a&gt; :&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;podman run --privileged --rm -v &lt;span class="nv"&gt;$PWD&lt;/span&gt;:/go/src/github.com/user/repo -v /run/user/1000/podman/podman.sock:/var/run/docker.sock -w /go/src/github.com/user/repo goreleaser/goreleaser release --snapshot --skip-publish --rm-dist&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We already know we have a different socket, so we pass it in, and we already have aliases for the short names (for &lt;code&gt;goreleaser&lt;/code&gt; as well as for &lt;code&gt;buildx&lt;/code&gt;). And it just works:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ podman run --privileged --rm -v &lt;span class="nv"&gt;$PWD&lt;/span&gt;:/go/src/github.com/user/repo -v /run/user/1000/podman/podman.sock:/var/run/docker.sock -w /go/src/github.com/user/repo goreleaser/goreleaser release --snapshot --skip-publish --rm-dist
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • releasing...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building binaries
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building &lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/go/src/github.com/user/repo/build/mongodb_exporter_darwin_arm64/mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building &lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/go/src/github.com/user/repo/build/mongodb_exporter_darwin_amd64/mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building &lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/go/src/github.com/user/repo/build/mongodb_exporter_linux_arm64/mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building &lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/go/src/github.com/user/repo/build/mongodb_exporter_linux_amd64/mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building &lt;span class="nv"&gt;binary&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/go/src/github.com/user/repo/build/mongodb_exporter_linux_arm_7/mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • archives
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;archive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-arm64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;archive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-amd64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;archive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-arm.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;archive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.darwin-arm64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;archive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.darwin-amd64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • linux packages
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm7 &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-arm.rpm &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpm &lt;span class="nv"&gt;package&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm64 &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-arm64.deb &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;deb &lt;span class="nv"&gt;package&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm64 &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-arm64.rpm &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpm &lt;span class="nv"&gt;package&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64 &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-64-bit.rpm &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpm &lt;span class="nv"&gt;package&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64 &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-64-bit.deb &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;deb &lt;span class="nv"&gt;package&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • creating &lt;span class="nv"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm7 &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/mongodb_exporter-88c186c.linux-arm.deb &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;deb &lt;span class="nv"&gt;package&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • calculating checksums
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-64-bit.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-amd64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-64-bit.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-arm64.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-arm64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.darwin-amd64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.darwin-arm64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-arm.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-arm64.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-arm.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • checksumming &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb_exporter-88c186c.linux-arm.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • docker images
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • building docker image &lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;percona/mongodb_exporter:0.30
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • pipe skipped &lt;span class="nv"&gt;error&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;publishing is disabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • storing artifact list
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • writing &lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;build/artifacts.json
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; • release succeeded after 66.48s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ ls -la build/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;total &lt;span class="m"&gt;55592&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;8&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;4096&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxrwxr-x. &lt;span class="m"&gt;11&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;4096&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:34 ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-------. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;9932&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 artifacts.json
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;3931&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:34 config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwx------. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;146&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 goreleaserdocker741570390
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;1190&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter_88c186c_checksums.txt
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5555136&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.darwin-amd64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5467991&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.darwin-arm64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5362664&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-64-bit.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5345376&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-64-bit.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5351467&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-amd64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;4914988&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-arm64.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;4902660&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-arm64.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;4908794&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-arm64.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5028350&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-arm.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5015878&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-arm.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5023920&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-arm.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;30&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter_darwin_amd64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;30&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter_darwin_arm64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;30&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter_linux_amd64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;30&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter_linux_arm64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;30&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter_linux_arm_7
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ ls -la build/goreleaserdocker741570390/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;total &lt;span class="m"&gt;25144&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwx------. &lt;span class="m"&gt;2&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;146&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x. &lt;span class="m"&gt;8&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;4096&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;244&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 Dockerfile
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rwxr-xr-x. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;15024128&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5362664&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-64-bit.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r--. &lt;span class="m"&gt;1&lt;/span&gt; dkondratenko dkondratenko &lt;span class="m"&gt;5345376&lt;/span&gt; Dec &lt;span class="m"&gt;22&lt;/span&gt; 23:35 mongodb_exporter-88c186c.linux-64-bit.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ podman images
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;REPOSITORY TAG IMAGE ID CREATED SIZE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;localhost/percona/mongodb_exporter 88c186c 23d41a482eb4 &lt;span class="m"&gt;3&lt;/span&gt; minutes ago 15.2 MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;localhost/percona/mongodb_exporter 0.30 23d41a482eb4 &lt;span class="m"&gt;3&lt;/span&gt; minutes ago 15.2 MB&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="docker-compose"&gt;docker-compose&lt;/h3&gt;
&lt;p&gt;There is a compose file that brings up a test environment for the &lt;code&gt;mongodb_exporter&lt;/code&gt;. Let’s try to bring it up (also notice that the &lt;code&gt;mongo&lt;/code&gt; alias was added above to resolve the short name):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker-compose up&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;links&lt;/code&gt; don’t work here. They are also effectively deprecated in the Compose documentation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#links" target="_blank" rel="noopener noreferrer"&gt;https://docs.docker.com/compose/compose-file/compose-file-v3/#links&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
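&lt;p&gt;For illustration, the fix means deleting the &lt;code&gt;links&lt;/code&gt; list from each service definition. A hypothetical service before and after might look like this (service names are made up; containers on the same compose network can already reach each other by service name, which is why the key is unnecessary):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__content" id="codeblock-18a"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;# before
mongos:
  image: mongo:4.2
  links:
    - mongo-cnf-1
    - mongo-cnf-2

# after: the links key is simply removed,
# name resolution works via the compose network
mongos:
  image: mongo:4.2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;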
&lt;p&gt;So I just deleted all the &lt;code&gt;links&lt;/code&gt; entries, and it works:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ docker-compose up
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-cnf-1 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-cnf-3 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-1-3 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-1-1 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-2-2 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-2-arbiter ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-2-3 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-2-1 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating standalone ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-1-2 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-cnf-2 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-1-arbiter ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-rs2-setup ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-cnf-setup ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-rs1-setup ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongos ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating mongo-shard-setup ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Attaching to mongo-2-arbiter, mongo-cnf-1, mongo-1-3, mongo-2-2, mongo-1-2, mongo-2-3, standalone, mongo-cnf-3, mongo-1-1, mongo-2-1, mongo-cnf-2, mongo-1-arbiter, mongo-rs2-setup, mongo-cnf-setup, mongo-rs1-setup, mongos, mongo-shard-setup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-cnf-1 &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:02.806+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;listener&lt;span class="o"&gt;]&lt;/span&gt; connection accepted from 10.89.0.33:58362 &lt;span class="c1"&gt;#56 (33 connections now open)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-cnf-1 &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:02.807+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;conn56&lt;span class="o"&gt;]&lt;/span&gt; received client metadata from 10.89.0.33:58362 conn56: &lt;span class="o"&gt;{&lt;/span&gt; driver: &lt;span class="o"&gt;{&lt;/span&gt; name: &lt;span class="s2"&gt;"NetworkInterfaceTL"&lt;/span&gt;, version: &lt;span class="s2"&gt;"4.2.17"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;, os: &lt;span class="o"&gt;{&lt;/span&gt; type: &lt;span class="s2"&gt;"Linux"&lt;/span&gt;, name: &lt;span class="s2"&gt;"Ubuntu"&lt;/span&gt;, architecture: &lt;span class="s2"&gt;"x86_64"&lt;/span&gt;, version: &lt;span class="s2"&gt;"18.04"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; --- Sharding Status ---
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; sharding version: &lt;span class="o"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s2"&gt;"_id"&lt;/span&gt; : 1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s2"&gt;"minCompatibleVersion"&lt;/span&gt; : 5,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s2"&gt;"currentVersion"&lt;/span&gt; : 6,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s2"&gt;"clusterId"&lt;/span&gt; : ObjectId&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"61c4fdcc0039e75de22fa8bd"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; shards:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"_id"&lt;/span&gt; : &lt;span class="s2"&gt;"rs1"&lt;/span&gt;, &lt;span class="s2"&gt;"host"&lt;/span&gt; : &lt;span class="s2"&gt;"rs1/10.89.0.16:27017,10.89.0.18:27017,10.89.0.22:27017"&lt;/span&gt;, &lt;span class="s2"&gt;"state"&lt;/span&gt; : &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"_id"&lt;/span&gt; : &lt;span class="s2"&gt;"rs2"&lt;/span&gt;, &lt;span class="s2"&gt;"host"&lt;/span&gt; : &lt;span class="s2"&gt;"rs2/10.89.0.17:27017,10.89.0.19:27017,10.89.0.23:27017"&lt;/span&gt;, &lt;span class="s2"&gt;"state"&lt;/span&gt; : &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; active mongoses:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s2"&gt;"4.2.17"&lt;/span&gt; : &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; autosplit:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; Currently enabled: yes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; balancer:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; Currently enabled: yes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; Currently running: no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; Failed balancer rounds in last &lt;span class="m"&gt;5&lt;/span&gt; attempts: &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; Migration Results &lt;span class="k"&gt;for&lt;/span&gt; the last &lt;span class="m"&gt;24&lt;/span&gt; hours:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; No recent migrations
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; databases:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"_id"&lt;/span&gt; : &lt;span class="s2"&gt;"config"&lt;/span&gt;, &lt;span class="s2"&gt;"primary"&lt;/span&gt; : &lt;span class="s2"&gt;"config"&lt;/span&gt;, &lt;span class="s2"&gt;"partitioned"&lt;/span&gt; : &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup &lt;span class="p"&gt;|&lt;/span&gt; bye
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongos &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:02.833+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;conn14&lt;span class="o"&gt;]&lt;/span&gt; end connection 10.89.0.35:40380 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt; connections now open&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-shard-setup exited with code &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongos &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:03.806+0000 I CONNPOOL &lt;span class="o"&gt;[&lt;/span&gt;TaskExecutorPool-0&lt;span class="o"&gt;]&lt;/span&gt; Connecting to 10.89.0.24:27017
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongos &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:03.806+0000 I CONNPOOL &lt;span class="o"&gt;[&lt;/span&gt;TaskExecutorPool-0&lt;span class="o"&gt;]&lt;/span&gt; Connecting to 10.89.0.21:27017
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-cnf-2 &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:03.807+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;listener&lt;span class="o"&gt;]&lt;/span&gt; connection accepted from 10.89.0.33:47564 &lt;span class="c1"&gt;#31 (20 connections now open)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-cnf-2 &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:03.808+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;conn31&lt;span class="o"&gt;]&lt;/span&gt; received client metadata from 10.89.0.33:47564 conn31: &lt;span class="o"&gt;{&lt;/span&gt; driver: &lt;span class="o"&gt;{&lt;/span&gt; name: &lt;span class="s2"&gt;"NetworkInterfaceTL"&lt;/span&gt;, version: &lt;span class="s2"&gt;"4.2.17"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;, os: &lt;span class="o"&gt;{&lt;/span&gt; type: &lt;span class="s2"&gt;"Linux"&lt;/span&gt;, name: &lt;span class="s2"&gt;"Ubuntu"&lt;/span&gt;, architecture: &lt;span class="s2"&gt;"x86_64"&lt;/span&gt;, version: &lt;span class="s2"&gt;"18.04"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-cnf-3 &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:03.807+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;listener&lt;span class="o"&gt;]&lt;/span&gt; connection accepted from 10.89.0.33:54550 &lt;span class="c1"&gt;#28 (17 connections now open)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mongo-cnf-3 &lt;span class="p"&gt;|&lt;/span&gt; 2021-12-23T22:53:03.809+0000 I NETWORK &lt;span class="o"&gt;[&lt;/span&gt;conn28&lt;span class="o"&gt;]&lt;/span&gt; received client metadata from 10.89.0.33:54550 conn28: &lt;span class="o"&gt;{&lt;/span&gt; driver: &lt;span class="o"&gt;{&lt;/span&gt; name: &lt;span class="s2"&gt;"NetworkInterfaceTL"&lt;/span&gt;, version: &lt;span class="s2"&gt;"4.2.17"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;, os: &lt;span class="o"&gt;{&lt;/span&gt; type: &lt;span class="s2"&gt;"Linux"&lt;/span&gt;, name: &lt;span class="s2"&gt;"Ubuntu"&lt;/span&gt;, architecture: &lt;span class="s2"&gt;"x86_64"&lt;/span&gt;, version: &lt;span class="s2"&gt;"18.04"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;^CGracefully stopping... &lt;span class="o"&gt;(&lt;/span&gt;press Ctrl+C again to force&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-2-2 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-1-1 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-cnf-1 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-cnf-2 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-2-arbiter ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-2-3 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-cnf-3 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping standalone ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongos ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-1-arbiter ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-1-3 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-2-1 ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Stopping mongo-1-2 ... &lt;span class="k"&gt;done&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So this case shows that the Compose standard isn’t all that stable. Those &lt;code&gt;links&lt;/code&gt; entries can probably just be removed, and &lt;code&gt;podman&lt;/code&gt; can be used in this case as well.&lt;/p&gt;
&lt;h2 id="selinux-notes"&gt;SELinux notes&lt;/h2&gt;
&lt;p&gt;If you have SELinux enabled, as I do:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ sestatus
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELinux status: enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELinuxfs mount: /sys/fs/selinux
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELinux root directory: /etc/selinux
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Loaded policy name: targeted
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Current mode: enforcing
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Mode from config file: enforcing
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Policy MLS status: enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Policy deny_unknown status: allowed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Memory protection checking: actual (secure)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Max kernel policy version: 33&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You would need some additional changes and system tuning, mostly related to the volume binds.&lt;/p&gt;
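&lt;p&gt;A common remedy for SELinux denials on bind mounts (not necessarily the only one, and the paths below are illustrative) is to append the &lt;code&gt;:z&lt;/code&gt; or &lt;code&gt;:Z&lt;/code&gt; suffix to the volume definition so the container engine relabels the host content with an SELinux context the container is allowed to access:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__content" id="codeblock-20a"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;services:
  app:
    volumes:
      # :Z relabels with a private, unshared context
      # (only this container may use the content)
      - ./data:/srv/data:Z
      # :z relabels with a shared context
      # (multiple containers may use the content)
      - ./conf:/etc/app:z
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;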
&lt;h3 id="pmm-managed-1"&gt;&lt;code&gt;pmm-managed&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The compose file for &lt;code&gt;pmm-managed&lt;/code&gt; has two bind-mounted volumes that, without an additional option, would throw errors like:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; -it --workdir&lt;span class="o"&gt;=&lt;/span&gt;/root/go/src/github.com/percona/pmm-managed pmm-managed-server .devcontainer/setup.py
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/usr/bin/python2: can&lt;span class="s1"&gt;'t open file '&lt;/span&gt;/root/go/src/github.com/percona/pmm-managed/.devcontainer/setup.py&lt;span class="err"&gt;'&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;Errno 13&lt;span class="o"&gt;]&lt;/span&gt; Permission denied
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make: *** &lt;span class="o"&gt;[&lt;/span&gt;Makefile:12: env-devcontainer&lt;span class="o"&gt;]&lt;/span&gt; Error &lt;span class="m"&gt;2&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;and in case of &lt;code&gt;go-modules&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: downloading golang.org/x/perf v0.0.0-20210220033136-40a54f11e909
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mkdir /root/go/pkg/mod/cache: permission denied
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tools.go:37: running &lt;span class="s2"&gt;"go"&lt;/span&gt;: &lt;span class="nb"&gt;exit&lt;/span&gt; status &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make: *** &lt;span class="o"&gt;[&lt;/span&gt;init&lt;span class="o"&gt;]&lt;/span&gt; Error &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Traceback &lt;span class="o"&gt;(&lt;/span&gt;most recent call last&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;"/root/go/src/github.com/percona/pmm-managed/.devcontainer/setup.py"&lt;/span&gt;, line 129, in &lt;module&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; main&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;"/root/go/src/github.com/percona/pmm-managed/.devcontainer/setup.py"&lt;/span&gt;, line 116, in main
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; make_init&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;"/root/go/src/github.com/percona/pmm-managed/.devcontainer/setup.py"&lt;/span&gt;, line 75, in make_init
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;"make init"&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;"/root/go/src/github.com/percona/pmm-managed/.devcontainer/setup.py"&lt;/span&gt;, line 19, in run_commands
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; subprocess.check_call&lt;span class="o"&gt;(&lt;/span&gt;cmd, &lt;span class="nv"&gt;shell&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; File &lt;span class="s2"&gt;"/usr/lib64/python2.7/subprocess.py"&lt;/span&gt;, line 542, in check_call
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; raise CalledProcessError&lt;span class="o"&gt;(&lt;/span&gt;retcode, cmd&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;subprocess.CalledProcessError: Command &lt;span class="s1"&gt;'make init'&lt;/span&gt; returned non-zero &lt;span class="nb"&gt;exit&lt;/span&gt; status &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make: *** &lt;span class="o"&gt;[&lt;/span&gt;Makefile:12: env-devcontainer&lt;span class="o"&gt;]&lt;/span&gt; Error &lt;span class="m"&gt;1&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Documentation for &lt;code&gt;podman-run&lt;/code&gt; &lt;a href="https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options" target="_blank" rel="noopener noreferrer"&gt;clarifies it&lt;/a&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Podman to relabel file objects on the shared volumes. The z option tells Podman that two containers share the volume content. As a result, Podman labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Podman to label the content with a private unshared label.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here we need the &lt;code&gt;:Z&lt;/code&gt; option:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;.:/root/go/src/github.com/percona/pmm-managed:Z&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./Makefile.devcontainer:/root/go/src/github.com/percona/pmm-managed/Makefile:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;go-modules:/root/go/pkg/mod:Z&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Put modules cache into a separate volume&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
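With SELinux enforcing, a successful `:Z` bind shows up as a relabel on the host side. A quick way to sanity-check it (a sketch; requires SELinux tooling, and the exact category pair will differ on your machine):

```sh
# Run from the pmm-managed checkout after starting the container.
# A :Z bind relabels the content to container_file_t with a unique
# MCS category pair (for example s0:c123,c456) private to that container.
ls -dZ .
```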
&lt;h3 id="mongodb-exporter"&gt;&lt;code&gt;mongodb-exporter&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The compose file for &lt;code&gt;mongodb_exporter&lt;/code&gt; also contains a volume bind, but it is shared across different containers and thus needs to be bound with the &lt;code&gt;:z&lt;/code&gt; option:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./docker/scripts:/scripts:z&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
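The same suffixes work outside compose too. A minimal sketch of the equivalent `podman run` flags (the image names and commands here are illustrative, borrowed from the setups above):

```sh
# :Z - private, unshared label: only this container may use the content
podman run --rm -v .:/root/go/src/github.com/percona/pmm-managed:Z pmm-managed-server make init

# :z - shared label: several containers can read/write the same host directory
podman run --rm -v ./docker/scripts:/scripts:z mongodb_exporter ls /scripts
```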
&lt;p&gt;Here is some additional info:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.podman.io/en/latest/markdown/podman-run.1.html#volumes-from-container-options" target="_blank" rel="noopener noreferrer"&gt;https://docs.podman.io/en/latest/markdown/podman-run.1.html#volumes-from-container-options&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/containers/podman/issues/10779" target="_blank" rel="noopener noreferrer"&gt;https://github.com/containers/podman/issues/10779&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options" target="_blank" rel="noopener noreferrer"&gt;https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="mongodb-selinux"&gt;MongoDB SELinux&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/#std-label-install-rhel-configure-selinux" target="_blank" rel="noopener noreferrer"&gt;https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/#std-label-install-rhel-configure-selinux&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="devcontainers"&gt;devcontainers&lt;/h2&gt;
&lt;p&gt;If you use VSCode and would like to use the devcontainer that &lt;code&gt;pmm-managed&lt;/code&gt; supports, podman is also &lt;a href="https://code.visualstudio.com/docs/remote/containers#_can-i-use-podman-instead-of-docker" target="_blank" rel="noopener noreferrer"&gt;supported&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I do have VSCode, but I run it as a flatpak. Setting that up is a little tricky, and I didn’t manage it, as I don’t care enough to spend the time figuring it out.&lt;/p&gt;
&lt;p&gt;But here are a couple of useful links:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/flathub/com.visualstudio.code/issues/55" target="_blank" rel="noopener noreferrer"&gt;https://github.com/flathub/com.visualstudio.code/issues/55&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gist.github.com/FilBot3/4424d312a87f7b4178722d3b5eb20212" target="_blank" rel="noopener noreferrer"&gt;https://gist.github.com/FilBot3/4424d312a87f7b4178722d3b5eb20212&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;I haven’t had docker installed for a long time and don’t miss it much. As shown above, it is easy to set up the system, and with minor changes and without obsolete parameters the same compose files work for both docker and podman.&lt;/p&gt;
&lt;p&gt;The next step beyond compose files is probably k8s manifests, which podman supports with &lt;code&gt;podman generate kube&lt;/code&gt; and &lt;code&gt;podman play kube&lt;/code&gt;. Those are more standard and more widely used.&lt;/p&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>PMM</category>
      <category>docker-compose</category>
      <category>goreleaser</category>
      <category>docker</category>
      <category>podman</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>PMM development and testing with help of minikube</title>
      <link>https://percona.community/blog/2021/12/20/pmm-minikube-postgres/</link>
      <guid>https://percona.community/blog/2021/12/20/pmm-minikube-postgres/</guid>
      <pubDate>Mon, 20 Dec 2021 00:00:00 UTC</pubDate>
      <description>Why Some time ago I needed to test PG14 with the new pg_stat_monitor version that wasn’t released. I decided to log my journey so I would spend less effort next time to replicate it.</description>
      <content:encoded>&lt;h2 id="why"&gt;Why&lt;/h2&gt;
&lt;p&gt;Some time ago I needed to test PG14 with the new &lt;code&gt;pg_stat_monitor&lt;/code&gt; version that wasn’t released. I decided to log my journey so I would spend less effort next time to replicate it.&lt;/p&gt;
&lt;p&gt;I use podman and run PMM with its help, but I also like to hack on PMM DBaaS features, and I think k8s and minikube are a better, more scalable solution for different development environments, especially for running a number of clusters and DBs.
If I needed just PMM I would run it with podman, but since I am already hacking around DBaaS, I would like to use the same tool for my other development activities.&lt;/p&gt;
&lt;p&gt;So my goal is to deploy PMM on minikube, deploy PG14 there with the new &lt;code&gt;pg_stat_monitor&lt;/code&gt;, and check that PMM supports the new fields and features in &lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/using/query-analytics.html" target="_blank" rel="noopener noreferrer"&gt;QAN&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="minikube"&gt;&lt;code&gt;minikube&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;I use Linux, and in the examples below I run Fedora 35.&lt;/p&gt;
&lt;p&gt;First of all, you will need minikube: &lt;a href="https://minikube.sigs.k8s.io/docs/start/" target="_blank" rel="noopener noreferrer"&gt;install it&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I had a clean system with &lt;code&gt;podman&lt;/code&gt; and &lt;code&gt;buildah&lt;/code&gt; installed. When you first run &lt;code&gt;minikube start&lt;/code&gt;, minikube searches for available drivers and tries to deploy kubernetes on top of one. In my case it found podman and provided instructions I needed to follow to get minikube to correctly use the podman driver.&lt;/p&gt;
&lt;p&gt;After setting everything up I was ready to go:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube start
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;😄 minikube v1.24.0 on Fedora &lt;span class="m"&gt;35&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;✨ Using the podman driver based on existing profile
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;👍 Starting control plane node minikube in cluster minikube
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🚜 Pulling base image ...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🔄 Restarting existing podman container &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"minikube"&lt;/span&gt; ...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🔎 Verifying Kubernetes components...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🌟 Enabled addons: storage-provisioner, default-storageclass
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🏄 Done! kubectl is now configured to use &lt;span class="s2"&gt;"minikube"&lt;/span&gt; cluster and &lt;span class="s2"&gt;"default"&lt;/span&gt; namespace by default&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;OK, that was easy.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- get nodes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &gt; kubectl.sha256: &lt;span class="m"&gt;64&lt;/span&gt; B / &lt;span class="m"&gt;64&lt;/span&gt; B &lt;span class="o"&gt;[&lt;/span&gt;--------------------------&lt;span class="o"&gt;]&lt;/span&gt; 100.00% ? p/s 0s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &gt; kubectl: 44.73 MiB / 44.73 MiB &lt;span class="o"&gt;[&lt;/span&gt;-------------&lt;span class="o"&gt;]&lt;/span&gt; 100.00% 36.08 MiB p/s 1.4s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME STATUS ROLES AGE VERSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube Ready control-plane,master 28d v1.22.3&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Minikube ships its own kubectl in case you don’t have one installed. If you do have one, minikube configures it with the correct kubernetes config to access the k8s cluster it has deployed.&lt;/p&gt;
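If you don’t want to type the long form every time, minikube’s docs suggest an alias (a sketch; put it in your shell profile to make it stick):

```sh
# Make a plain kubectl invocation go through minikube's bundled binary
alias kubectl="minikube kubectl --"
kubectl get nodes
```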
&lt;p&gt;I had an &lt;a href="https://github.com/kubernetes/minikube/issues/12569#issuecomment-932732865" target="_blank" rel="noopener noreferrer"&gt;issue&lt;/a&gt; while deploying on my btrfs root file system, which I could work around by starting minikube with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube start --feature-gates&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"LocalStorageCapacityIsolation=false"&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
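To avoid minikube re-detecting a driver on later profile creations, you can also pin it explicitly (a sketch, assuming podman is the driver you settled on):

```sh
# Persist the driver choice so new profiles default to podman
minikube config set driver podman
minikube start --feature-gates="LocalStorageCapacityIsolation=false"
```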
&lt;h2 id="pmm-in-k8s"&gt;PMM in k8s&lt;/h2&gt;
&lt;p&gt;PMM currently doesn’t have native k8s support as the container has root privileges and is tightly integrated with different components.&lt;/p&gt;
&lt;p&gt;But it is fine for running in staging and testing environments.&lt;/p&gt;
&lt;p&gt;There are two ways to deploy PMM:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;hard one with persistent storage&lt;/li&gt;
&lt;li&gt;easy one with ephemeral storage&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The hard one is longer and could break at any time. Option #1 is used in &lt;a href="https://www.percona.com/blog/2021/05/19/percona-monitoring-and-management-dbaas-overview-and-technical-details/" target="_blank" rel="noopener noreferrer"&gt;this blog&lt;/a&gt; post, and you can use &lt;a href="https://github.com/percona-platform/dbaas-controller/blob/main/deploy/pmm-server-minikube.yaml" target="_blank" rel="noopener noreferrer"&gt;this yaml&lt;/a&gt; file to see how to do it.&lt;/p&gt;
&lt;p&gt;I need to run tests quickly and don’t care if data disappears (ephemeral storage), neither for PMM nor for the DB. So I wrote this quick deployment, &lt;code&gt;pmm-k8s-ephemeral.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Service&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;NodePort&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;web&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;targetPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;nodePort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;30080&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;api&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;targetPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;nodePort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;30443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Service&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-net&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app.kubernetes.io/part-of&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-server&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;vm-agent&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;8428&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ConfigMap&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-conf&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app.kubernetes.io/part-of&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_USERNAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_ADDRESS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-net:443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SETUP&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'true'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_DEBUG&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'true'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_TRACE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'true'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_CONFIG_FILE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/percona/pmm2/config/pmm-agent.yaml"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SETUP_METRICS_MODE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"push"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;PMM_AGENT_SERVER_INSECURE_TLS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;apps/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Deployment&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-deployment&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app.kubernetes.io/part-of&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;strategy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Recreate&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app.kubernetes.io/part-of&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-server&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;docker.io/perconalab/pmm-server:2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;web&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;api&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;8428&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;vm&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What I have done there:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Service (pmm) that will expose PMM to the local machine so I can reach PMM in the browser&lt;/li&gt;
&lt;li&gt;Service (pmm-net) for tools and the PMM client to contact the PMM server and send monitoring and analytics data&lt;/li&gt;
&lt;li&gt;ConfigMap (pmm-conf) with parameters for the PMM client&lt;/li&gt;
&lt;li&gt;Deployment that runs the PMM container and exposes a couple of ports for the Services&lt;/li&gt;
&lt;/ul&gt;
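&lt;p&gt;To illustrate how a client could pick up the &lt;code&gt;pmm-conf&lt;/code&gt; ConfigMap, here is a minimal sketch of mounting all of its keys as environment variables via &lt;code&gt;envFrom&lt;/code&gt; (the Pod and container names and the image tag are placeholders, not part of my manifests):&lt;/p&gt;

```yaml
# Sketch only: a hypothetical client container receiving every key of the
# pmm-conf ConfigMap (the PMM_AGENT_* settings) as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: pmm-client-example   # placeholder name
spec:
  containers:
    - name: pmm-client       # placeholder container
      image: percona/pmm-client:2
      envFrom:
        - configMapRef:
            name: pmm-conf   # the ConfigMap defined above
```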
&lt;p&gt;Let’s deploy it:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- apply -f pmm-k8s-ephemeral.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;service/pmm-net created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;configmap/pmm-conf created
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;deployment.apps/pmm-deployment created&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Nice, is it running?&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-deployment-d785ff89f-rz8zp 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 61s&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s open PMM in the browser:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;ssh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-deployment-d785ff89f-rz8zp 1/1 Running 0 61s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube service pmm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|-----------|------|-------------|---------------------------|
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| NAMESPACE | NAME | TARGET PORT | URL |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|-----------|------|-------------|---------------------------|
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| default | pmm | web/80 | http://192.168.49.2:30080 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| | | api/443 | http://192.168.49.2:30443 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|-----------|------|-------------|---------------------------|
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;🎉 Opening service default/pmm in default browser...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Opening in existing browser session.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It will open a couple of links; if you would like to use the one on port &lt;code&gt;30443&lt;/code&gt;, add &lt;code&gt;https://&lt;/code&gt; before the IP. The user/pass is &lt;code&gt;admin/admin&lt;/code&gt;.&lt;/p&gt;
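&lt;p&gt;Before connecting clients, it can be handy to confirm the server answers at that address. A minimal sketch, assuming the IP and NodePort from the &lt;code&gt;minikube service pmm&lt;/code&gt; output above (your minikube IP will differ, and the &lt;code&gt;/ping&lt;/code&gt; path is an assumption about the PMM API):&lt;/p&gt;

```shell
# Build the server URL from the minikube service output above.
PMM_HOST=192.168.49.2   # assumption: taken from the table above; yours will differ
PMM_PORT=30443
PMM_URL="https://${PMM_HOST}:${PMM_PORT}"
echo "$PMM_URL"

# Uncomment to probe the server; -k accepts PMM's self-signed certificate.
# curl -k -s -o /dev/null -w '%{http_code}\n' "$PMM_URL/ping"
```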
&lt;p&gt;OK, now I can see PMM and it is working.&lt;/p&gt;
&lt;p&gt;Let’s connect some clients to it.&lt;/p&gt;
&lt;h2 id="pg14-with-pg_stat_monitor"&gt;PG14 with pg_stat_monitor&lt;/h2&gt;
&lt;p&gt;For my task I need to take vanilla PG14 and add &lt;code&gt;pg_stat_monitor&lt;/code&gt; to it, as it doesn’t come as part of the standard container distribution. Percona has &lt;a href="https://www.percona.com/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL&lt;/a&gt;, which ships with &lt;code&gt;pg_stat_monitor&lt;/code&gt; installed, but that wouldn’t work for me as I need an unreleased version, and it also wasn’t available for PG14.&lt;/p&gt;
&lt;p&gt;First I need to build &lt;a href="https://github.com/percona/pg_stat_monitor" target="_blank" rel="noopener noreferrer"&gt;pg_stat_monitor&lt;/a&gt;. There are &lt;a href="https://github.com/percona/pg_stat_monitor#building-from-source" target="_blank" rel="noopener noreferrer"&gt;instructions&lt;/a&gt;, so let’s follow them, but I will use &lt;code&gt;toolbox&lt;/code&gt; so as not to pollute my host system:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ git clone https://github.com/percona/pg_stat_monitor.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;cd&lt;/span&gt; pg_stat_monitor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ toolbox create pg_mon
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Creating container pg_mon: &lt;span class="p"&gt;|&lt;/span&gt; Created container: pg_mon
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Enter with: toolbox enter pg_mon
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ toolbox enter pg_mon
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;⬢&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ sudo dnf module reset postgresql -y
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;⬢&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ sudo dnf module &lt;span class="nb"&gt;enable&lt;/span&gt; postgresql:14
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;⬢&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ sudo dnf install make gcc redhat-rpm-config postgresql-server-devel
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;⬢&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ make &lt;span class="nv"&gt;USE_PGXS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;⬢&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ ls -la ?&lt;span class="o"&gt;(&lt;/span&gt;*.sql&lt;span class="p"&gt;|&lt;/span&gt;*.so&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-rw-r--. &lt;span class="m"&gt;1&lt;/span&gt; user user &lt;span class="m"&gt;6904&lt;/span&gt; Dec &lt;span class="m"&gt;17&lt;/span&gt; 22:23 pg_stat_monitor--1.0.sql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rwxr-xr-x. &lt;span class="m"&gt;1&lt;/span&gt; user user &lt;span class="m"&gt;253328&lt;/span&gt; Dec &lt;span class="m"&gt;17&lt;/span&gt; 22:23 pg_stat_monitor.so
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;⬢&lt;span class="o"&gt;[&lt;/span&gt;pg_stat_monitor&lt;span class="o"&gt;]&lt;/span&gt;$ exit&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;OK, so I have a new &lt;code&gt;pg_stat_monitor&lt;/code&gt; that I built from the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;
&lt;p&gt;Now I need to embed it into the standard PG14 container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nv"&gt;container&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$(&lt;/span&gt;buildah from postgres&lt;span class="k"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah copy &lt;span class="nv"&gt;$container&lt;/span&gt; ./pg_stat_monitor.so /usr/lib/postgresql/14/lib/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cea14ac2e80f79232619557c6e2a7fb2f2379dc5216a67b775905819f5f5c730
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah copy &lt;span class="nv"&gt;$container&lt;/span&gt; ./pg_stat_monitor.bc /usr/lib/postgresql/14/lib/bitcode/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;4d24aedb673a86a09883336657f6abaf20327ff21ec7a1885e2018a32a548f57
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah copy &lt;span class="nv"&gt;$container&lt;/span&gt; ./pg_stat_monitor.bc /usr/lib/postgresql/14/lib/bitcode/pg_stat_monitor/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;4d24aedb673a86a09883336657f6abaf20327ff21ec7a1885e2018a32a548f57
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah copy &lt;span class="nv"&gt;$container&lt;/span&gt; ./pg_stat_monitor--1.0.sql usr/share/postgresql/14/extension/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ff06a0a8c94bcfe92b8b3616c5791a8e54180a1d9730c6c26c42400741a793dd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah copy &lt;span class="nv"&gt;$container&lt;/span&gt; ./pg_stat_monitor.control usr/share/postgresql/14/extension/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ec90a547ee46e628ad853c7e4a0afc6aa6ba39677e9adcf04c22bd820dc9aa4b
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah run &lt;span class="nv"&gt;$container&lt;/span&gt; -- sh -c &lt;span class="s2"&gt;"echo shared_preload_libraries = \'pg_stat_monitor\' &gt;&gt; /usr/share/postgresql/postgresql.conf.sample"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ buildah commit &lt;span class="nv"&gt;$container&lt;/span&gt; postgresql-pg-stat-monitor-test
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Getting image &lt;span class="nb"&gt;source&lt;/span&gt; signatures
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 9321ff862abb skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 1fd9b284a3ce skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob e408a39a0b68 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 8083ac6c7a07 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 16bdcb6f65a3 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 470529a805d0 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 51e951dc5705 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 27051a077cdc skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob dd44883ded8b skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 1b8d5d101e2a skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 806c98b52cc8 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 1fb1b8252a25 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 20371ceade59 skipped: already exists
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 94a669b6abd4 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying config 381f3d202a &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Writing manifest to image destination
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Storing signatures
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;381f3d202aca494c2caa663dfa1f95934c3a0bb64e0efceb0388ff6f3854be08
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ podman save --format docker-archive -o postgresql-pg-stat-monitor-test.tar localhost/postgresql-pg-stat-monitor-test
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 9321ff862abb &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 1fd9b284a3ce &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob e408a39a0b68 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 8083ac6c7a07 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 16bdcb6f65a3 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 470529a805d0 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 51e951dc5705 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 27051a077cdc &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob dd44883ded8b &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 1b8d5d101e2a &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 806c98b52cc8 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 1fb1b8252a25 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 20371ceade59 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying blob 94a669b6abd4 &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Copying config 381f3d202a &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Writing manifest to image destination
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Storing signatures
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube image load ./postgresql-pg-stat-monitor-test.tar
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube image ls
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker.io/localhost/postgresql-pg-stat-monitor-test:latest
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What I have done there:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;created a new image from &lt;code&gt;docker.io/library/postgres&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;copied all needed files from the locally built &lt;code&gt;pg_stat_monitor&lt;/code&gt; to the new image&lt;/li&gt;
&lt;li&gt;enabled &lt;code&gt;pg_stat_monitor&lt;/code&gt; in the config&lt;/li&gt;
&lt;li&gt;committed the changes to the image&lt;/li&gt;
&lt;li&gt;saved the image to an archive&lt;/li&gt;
&lt;li&gt;loaded the image from the archive into the minikube cache (if anyone knows how to load a local image directly - please let me know)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So now I have a PG14 image with the new &lt;code&gt;pg_stat_monitor&lt;/code&gt; that I would like to test in my k8s cluster.&lt;/p&gt;
&lt;p&gt;Let’s create a PG14 deployment, shall we? Here is the &lt;code&gt;postgresql_eph.yaml&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ConfigMap&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres-configuration&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;admin&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;apps/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;StatefulSet&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres-statefulset&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"postgres"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;docker.io/localhost/postgresql-pg-stat-monitor-test:latest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Never&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;envFrom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;configMapRef&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;postgres-configuration&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-agent&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;docker.io/perconalab/pmm-client:2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;envFrom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;configMapRef&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;pmm-conf&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;8428&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;vm&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Notice &lt;code&gt;imagePullPolicy: Never&lt;/code&gt; and &lt;code&gt;image: docker.io/localhost/postgresql-pg-stat-monitor-test:latest&lt;/code&gt; here: I am instructing k8s not to pull the image but to use only the cached one, under the name I uploaded earlier.&lt;/p&gt;
&lt;p&gt;I also added a PMM client sidecar container to monitor and query PG14. Also notice that PG14 is not exposed outside of the pod; I just don’t need that, since I can produce the load I need from inside the pod. If you use this example for something else, expose the port for PG14.&lt;/p&gt;
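&lt;p&gt;For reference, a minimal Service to expose PG14 inside the cluster could look like this (a sketch, not part of my deployment; it reuses the &lt;code&gt;app: postgres&lt;/code&gt; label from the manifest above):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432&lt;/code&gt;&lt;/pre&gt;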
&lt;p&gt;Let’s deploy it:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- apply -f ./postgresql_eph.yml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- get pods
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NAME READY STATUS RESTARTS AGE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm-deployment-d785ff89f-sgr6s 1/1 Running &lt;span class="m"&gt;0&lt;/span&gt; 11m
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;postgres-statefulset-0 2/2 Running &lt;span class="m"&gt;0&lt;/span&gt; 11m&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now I have PMM and PG14 running, let’s connect them.&lt;/p&gt;
&lt;h2 id="pmm-qan-with-pg_stat_monitor"&gt;PMM QAN with &lt;code&gt;pg_stat_monitor&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;First of all, I need to enable the &lt;code&gt;pg_stat_monitor&lt;/code&gt; extension for PG14:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- &lt;span class="nb"&gt;exec&lt;/span&gt; --stdin --tty postgres-statefulset-0 --container postgres -- /bin/bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;root@postgres-statefulset-0:/# psql -U admin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;psql &lt;span class="o"&gt;(&lt;/span&gt;14.1 &lt;span class="o"&gt;(&lt;/span&gt;Debian 14.1-1.pgdg110+1&lt;span class="o"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Type &lt;span class="s2"&gt;"help"&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; help.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;admin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="c1"&gt;# CREATE EXTENSION pg_stat_monitor;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE EXTENSION
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;admin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="c1"&gt;# SELECT pg_stat_monitor_version();&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pg_stat_monitor_version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-------------------------
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.0.0-rc.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt; row&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;admin-# &lt;span class="se"&gt;\q&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;root@postgres-statefulset-0:/# exit&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now I need to connect PG14 to the PMM client so that it starts monitoring it and scraping query analytics:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sh&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ minikube kubectl -- &lt;span class="nb"&gt;exec&lt;/span&gt; --stdin --tty postgres-statefulset-0 --container pmm-agent -- /bin/bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bash-4.2$ pmm-admin list
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Service &lt;span class="nb"&gt;type&lt;/span&gt; Service name Address and port Service ID
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Agent &lt;span class="nb"&gt;type&lt;/span&gt; Status Metrics Mode Agent ID Service ID
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pmm_agent Connected /agent_id/318838db-bd57-44d4-b7a7-3786ec2492f0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;node_exporter Running push /agent_id/58ef7f93-cf83-4d5b-bd2b-be34b7fc5ecf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;vmagent Running push /agent_id/d29685ba-61e0-429f-84f5-4f85505242dc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bash-4.2$ pmm-admin add postgresql --username&lt;span class="o"&gt;=&lt;/span&gt;admin --password&lt;span class="o"&gt;=&lt;/span&gt;admin --query-source&lt;span class="o"&gt;=&lt;/span&gt;pgstatmonitor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PostgreSQL Service added.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Service ID : /service_id/736b6453-23d2-45f1-b30e-2bacccca3644
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Service name: postgres-statefulset-0-postgresql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now, if I go to the PMM UI, I can see the QAN data for postgres, or debug why I don’t see it :)&lt;/p&gt;
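&lt;p&gt;A quick sanity check that does not involve PMM is to query the view directly from psql inside the postgres container (a sketch; see the &lt;code&gt;pg_stat_monitor&lt;/code&gt; docs for the full column list):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- top statements by number of executions
SELECT query, calls
FROM pg_stat_monitor
ORDER BY calls DESC
LIMIT 5;&lt;/code&gt;&lt;/pre&gt;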
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;minikube is a very nice tool for developers and testers to bring up complex deployments, then play, debug, and test.&lt;/p&gt;
&lt;p&gt;k8s yaml &lt;a href="https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-manifest" target="_blank" rel="noopener noreferrer"&gt;manifests&lt;/a&gt; are well standardized and have tons of configurable options, such as ConfigMaps and Secrets, which can differ between testing, staging, and production while sharing the same operators, deployments, and pods. Kubernetes also has a clear, documented, open source API and code.&lt;/p&gt;
&lt;p&gt;podman also has a &lt;code&gt;play kube&lt;/code&gt; feature that allows reusing the same manifest files to create pods with podman. It is not fully featured yet, but the potential and ideas are very powerful.&lt;/p&gt;
&lt;p&gt;Check out &lt;a href="https://kubernetespodcast.com/episode/164-podman/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Podcast&lt;/a&gt; to learn more about podman.&lt;/p&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>PMM</category>
      <category>PostgreSQL</category>
      <category>PG</category>
      <category>pg_stat_monitor</category>
      <category>minikube</category>
      <category>podman</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>2.25.0 Preview Release</title>
      <link>https://percona.community/blog/2021/12/03/preview-release/</link>
      <guid>https://percona.community/blog/2021/12/03/preview-release/</guid>
      <pubDate>Fri, 03 Dec 2021 00:00:00 UTC</pubDate>
      <description>2.25.0 Preview Release Percona Monitoring and Management 2.25.0 is now available as a Preview Release.</description>
      <content:encoded>&lt;h2 id="2250-preview-release"&gt;2.25.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.25.0 is now available as a Preview Release.&lt;/p&gt;
&lt;p&gt;PMM team really appreciates your feedback!&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments&lt;/strong&gt; only, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release Notes can be found &lt;a href="https://docs.percona.com/percona-monitoring-and-management/release-notes/2.25.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker"&gt;PMM server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag: perconalab/pmm-server:2.25.0-rc&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.25.0 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-3300.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable original testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
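&lt;p&gt;On a Debian/Ubuntu host, for example, that would look roughly like this (adjust for your distribution’s package manager):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;percona-release enable original testing
apt update
apt install pmm2-client&lt;/code&gt;&lt;/pre&gt;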
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3-website-us-east-1.amazonaws.com/PMM2-Server-2021-12-03-1855.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2021-12-03-1855.ova&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact id: &lt;code&gt;ami-04ba67eb15e6e089e&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in OpenSource! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/blog/2021/12/preview_225_hu_b57661f11988d885.jpg"/>
      <media:content url="https://percona.community/blog/2021/12/preview_225_hu_7501f4a76d1ef9da.jpg" medium="image"/>
    </item>
    <item>
      <title>2.24.0 Preview Release</title>
      <link>https://percona.community/blog/2021/11/11/preview-release/</link>
      <guid>https://percona.community/blog/2021/11/11/preview-release/</guid>
      <pubDate>Thu, 11 Nov 2021 00:00:00 UTC</pubDate>
      <description>2.24.0 Preview Release Percona Monitoring and Management 2.24.0 is now available as a Preview Release.</description>
      <content:encoded>&lt;h2 id="2240-preview-release"&gt;2.24.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.24.0 is now available as a Preview Release.&lt;/p&gt;
&lt;p&gt;PMM team really appreciates your feedback!&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments&lt;/strong&gt; only, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Release Notes can be found &lt;a href="https://deploy-preview-622--pmm-doc.netlify.app/release-notes/2.24.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker"&gt;PMM server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag: &lt;a href="https://hub.docker.com/layers/perconalab/pmm-server/2.24.0-rc/images/sha256-e59fbdf2ffe7e30a3eb3cc83c438130bcecd8ca6ea02ef04c8a121fbb81a948a?context=explore" target="_blank" rel="noopener noreferrer"&gt;perconalab/pmm-server:2.24.0-rc&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.24.0 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-3216.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable original testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3-website-us-east-1.amazonaws.com/PMM2-Server-2021-11-10-1310.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2021-11-10-1310.ova&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact id: &lt;code&gt;ami-03db8ae0f3ef49618&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in OpenSource! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/blog/2021/11/preview_224_hu_7fcb27e8f8ae902d.jpg"/>
      <media:content url="https://percona.community/blog/2021/11/preview_224_hu_32b78b028248d4eb.jpg" medium="image"/>
    </item>
    <item>
      <title>The Errant GTID</title>
      <link>https://percona.community/blog/2021/11/08/the-errant-gtid-pt1/</link>
      <guid>https://percona.community/blog/2021/11/08/the-errant-gtid-pt1/</guid>
      <pubDate>Mon, 08 Nov 2021 00:00:00 UTC</pubDate>
      <description>Part 1 What is a GTID? Oracle/MySQL define a GTID as "A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of origin (the source). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication topology." An errant transaction can make promotion of a replica to primary very difficult.</description>
      <content:encoded>&lt;h1 id="part-1"&gt;Part 1&lt;/h1&gt;
&lt;p&gt;What is a GTID? Oracle/MySQL define a GTID as "A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of origin (the source). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication topology."&lt;/p&gt;
&lt;p&gt;An errant transaction can make promotion of a replica to primary very difficult.&lt;/p&gt;
&lt;p&gt;An errant transaction is BAD. Why is it bad? The errant transaction could still be in the replica’s binlog, so when that replica becomes the new primary, those events will get sent to the other replicas, causing data corruption or breaking replication.&lt;/p&gt;
&lt;h4 id="its-easy-to-prevent-errant-transaction"&gt;Its easy to prevent errant transaction.&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;Set &lt;code&gt;read_only = ON&lt;/code&gt; in the replica’s my.cnf&lt;/li&gt;
&lt;li&gt;Disable binlogging when you need to perform work on a replica: run &lt;code&gt;set session sql_log_bin = 'off';&lt;/code&gt; before your work on the replica and &lt;code&gt;set session sql_log_bin = 'on';&lt;/code&gt; when your work is complete.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id="find-and-correct-errant-transaction"&gt;Find and correct errant transaction&lt;/h4&gt;
&lt;p&gt;How do you correct an errant transaction? Compare the &lt;code&gt;gtid_executed&lt;/code&gt; on the primary and replica. Identify the errant transaction on the replica and then apply that transaction to the primary.&lt;/p&gt;
&lt;p&gt;I will show you one method in the steps below.&lt;/p&gt;
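&lt;p&gt;As a shortcut, MySQL’s &lt;code&gt;GTID_SUBTRACT()&lt;/code&gt; function can compute the set difference for you (the second argument is a placeholder; paste the primary’s &lt;code&gt;gtid_executed&lt;/code&gt; value there):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;-- on the replica: any GTIDs left over are candidate errant transactions
SELECT GTID_SUBTRACT(@@global.gtid_executed,
                     'PRIMARY_GTID_EXECUTED_HERE') AS errant;&lt;/code&gt;&lt;/pre&gt;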
&lt;ol&gt;
&lt;li&gt;On the replica run &lt;code&gt;show variables like 'gtid_executed'&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You will receive output similar to this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_replica&gt; show variables like 'gtid_executed'\G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Variable_name: gtid_executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Value: 858d4d54-3fe1-11ec-a7e8-080027ae8b99:1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Make note of the gtid_executed value. You will need this to check if you have an errant transaction.&lt;/p&gt;
&lt;ol start="2"&gt;
&lt;li&gt;On the primary run &lt;code&gt;show variables like 'gtid_executed'&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You will receive output similar to this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_primary&gt; show variables like 'gtid_executed'\G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Variable_name: gtid_executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Value: 858d4d54-3fe1-11ec-a7e8-080027ae8b99:1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Make note of the gtid_executed value. You will need this to check if you have an errant transaction.&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;We need to determine whether the replica has any errant transactions. We will use the &lt;code&gt;gtid_subset&lt;/code&gt; function to compare the executed GTID sets from the &lt;strong&gt;replica&lt;/strong&gt; and the &lt;strong&gt;primary&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_replica&gt; select gtid_subset('858d4d54-3fe1-11ec-a7e8-080027ae8b99:1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; '&gt; a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2','858d4d54-3fe1-11ec-a7e8-080027ae8b99:1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; '&gt; a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2') as subset;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| subset |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Subset = 1 tells us we have &lt;strong&gt;no&lt;/strong&gt; errant transactions.&lt;/p&gt;
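&lt;p&gt;To make the result easier to read: &lt;code&gt;gtid_subset(set1, set2)&lt;/code&gt; returns 1 when every GTID in &lt;code&gt;set1&lt;/code&gt; is also contained in &lt;code&gt;set2&lt;/code&gt;. A quick illustration with a made-up UUID:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- returns 1: every GTID in the first set is in the second
select gtid_subset('3e11fa47-71ca-11e1-9e33-c80aa9429562:1-2',
                   '3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5');
-- returns 0: the first set has GTIDs the second set lacks
select gtid_subset('3e11fa47-71ca-11e1-9e33-c80aa9429562:1-7',
                   '3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5');
&lt;/code&gt;&lt;/pre&gt;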
&lt;p&gt;Now we need to introduce an errant transaction into the replica. Let’s do something simple by creating a new database.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_replica&gt; create database community;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 1 row affected (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s repeat step 1 from above.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_replica&gt; show variables like 'gtid_executed'\G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Variable_name: gtid_executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Value: 858d4d54-3fe1-11ec-a7e8-080027ae8b99:1-2,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We will repeat step 3 using the &lt;strong&gt;new gtid_executed&lt;/strong&gt; from the replica and the &lt;strong&gt;original gtid_executed&lt;/strong&gt; from the primary.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_replica&gt; select gtid_subset('858d4d54-3fe1-11ec-a7e8-080027ae8b99:1-2,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; '&gt; a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2','858d4d54-3fe1-11ec-a7e8-080027ae8b99:1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; '&gt; a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2') as subset;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| subset |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 0 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Subset = 0 tells us that this replica has errant transactions.&lt;/p&gt;
&lt;p&gt;Now we need to identify the errant transaction. We will subtract the &lt;code&gt;primary's executed GTID set&lt;/code&gt; from the &lt;code&gt;replica's executed GTID set&lt;/code&gt;, which returns the GTIDs present on the replica but not on the primary. To do this we will use the &lt;code&gt;gtid_subtract&lt;/code&gt; function.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`mysql_replica&gt; select gtid_subtract('858d4d54-3fe1-11ec-a7e8-080027ae8b99:1-2,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; '&gt; a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2','858d4d54-3fe1-11ec-a7e8-080027ae8b99:1,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; '&gt; a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2') as errant;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| errant |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 858d4d54-3fe1-11ec-a7e8-080027ae8b99:2 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now we have our errant transaction from the replica: &lt;code&gt;858d4d54-3fe1-11ec-a7e8-080027ae8b99:2&lt;/code&gt;.&lt;/p&gt;
&lt;h4 id="repair-the-issue"&gt;Repair the issue&lt;/h4&gt;
&lt;p&gt;Now let’s move to the &lt;strong&gt;primary&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Once on the &lt;strong&gt;primary&lt;/strong&gt; we want to insert a pseudo transaction to resolve the errant transaction from the replica.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_primary&gt; set gtid_next='858d4d54-3fe1-11ec-a7e8-080027ae8b99:2';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_primary&gt; begin;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`mysql_primary&gt; commit;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`mysql_primary&gt; set gtid_next='automatic';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We can compare the GTID executed again from the replica and primary.&lt;/p&gt;
&lt;h4 id="primary"&gt;Primary:&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_primary&gt; show variables like 'gtid_executed'\G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Variable_name: gtid_executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Value: 858d4d54-3fe1-11ec-a7e8-080027ae8b99:1-2,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="replica"&gt;Replica:&lt;/h4&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql_replica&gt; show variables like 'gtid_executed'\G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Variable_name: gtid_executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Value: 858d4d54-3fe1-11ec-a7e8-080027ae8b99:1-2,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;a6b3751e-3fd3-11ec-a4f5-080027ae8b99:1-2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)`&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note that both values match. We have repaired the errant transaction from the replica to the primary.&lt;/p&gt;
&lt;p&gt;Now we need to take care of the replica that had the errant transaction. We need to flush and purge the binary logs. Use the commands below to find the current binary file, and then flush and purge. &lt;strong&gt;Remember to be on the replica&lt;/strong&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;show binary logs;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FLUSH LOGS;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PURGE BINARY LOGS TO 'binlog.00000x';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;That’s it. You have fixed the errant transaction. This was a rather simple example of an errant GTID. Part 2 will look at more complex examples.&lt;/p&gt;
&lt;h3 id="referance-information"&gt;Referance Information&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/set-sql-log-bin.html" target="_blank" rel="noopener noreferrer"&gt;Set SQL Log Bin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/gtid-functions.html" target="_blank" rel="noopener noreferrer"&gt;GTID Functions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Percona</category>
      <category>MySQL</category>
      <category>Recovery</category>
      <category>Replication</category>
      <category>GTID</category>
      <media:thumbnail url="https://percona.community/blog/2021/11/errant-gtid-1_hu_ee112b4f2df8b1e3.jpg"/>
      <media:content url="https://percona.community/blog/2021/11/errant-gtid-1_hu_34baad6223fe5844.jpg" medium="image"/>
    </item>
    <item>
      <title>Going back to the original node_exporter in PMM</title>
      <link>https://percona.community/blog/2021/10/21/going-back-to-original-pmm-node-exporter/</link>
      <guid>https://percona.community/blog/2021/10/21/going-back-to-original-pmm-node-exporter/</guid>
      <pubDate>Thu, 21 Oct 2021 15:00:00 UTC</pubDate>
      <description>This is my first (I hope) post about something not so usual in our regular posts about technology.</description>
      <content:encoded>&lt;p&gt;This is my first (I hope) post about something not so usual in our regular posts about technology.&lt;/p&gt;
&lt;p&gt;Usually we discuss new features, talk about how to do something but even for me, a Percona developer, sometimes it is hard to know where and what to touch in PMM. There are many components, many abstractions, parts that send messages to remote APIs or agents, the PMM agent, the PMM API (pmm-managed), the command line client (pmm-admin) and all the external exporters.&lt;/p&gt;
&lt;p&gt;In this post, I will try to show how to implement the replacement of the current node_exporter we use in PMM to move back to the original one.&lt;/p&gt;
&lt;h2 id="what-this-post-is-about"&gt;What this post is about?&lt;/h2&gt;
&lt;p&gt;In the next paragraphs, I’ll try to explain the basics of how PMM works. My intention is to walk you through the internals of the PMM API and the PMM agent, how they communicate, and how to make some code changes.
There are many places to contact us if you need help, but nowadays &lt;a href="https://forums.percona.com/" target="_blank" rel="noopener noreferrer"&gt;forums.percona.com&lt;/a&gt; is the fastest place to get answers.
I will try to keep things clear and simple, but this is a technical post, so there will be some code.&lt;/p&gt;
&lt;h2 id="why-do-we-use-a-different-node_exporter"&gt;Why do we use a different node_exporter?&lt;/h2&gt;
&lt;p&gt;Going back in time we could probably find many other reasons, like maintainability or the ability to use custom builds, but one thing the early exporters lacked was support for basic authentication. In PMM, all exporter metrics are password protected, and since there was no upstream support for that in the past and we needed it as part of our specification, PMM exporters use a common HTTP module called &lt;code&gt;exporter_shared&lt;/code&gt;. In that module, the HTTP server supports basic authentication and some other features as well. But time has passed, Prometheus exporters are much more mature, and the Prometheus &lt;a href="https://github.com/prometheus/exporter-toolkit/tree/v0.1.0/https" target="_blank" rel="noopener noreferrer"&gt;exporter-toolkit package&lt;/a&gt; now supports TLS, HTTP/2, ciphers, basic auth, etc.&lt;/p&gt;
&lt;h2 id="how-pmm-works"&gt;How PMM works.&lt;/h2&gt;
&lt;p&gt;As mentioned before, there are many components in PMM. The one in charge of starting internal and external exporters and running commands is &lt;code&gt;pmm-agent&lt;/code&gt;. Internal exporters are the ones built into &lt;code&gt;pmm-agent&lt;/code&gt;, mostly for Query Analytics and for running commands like &lt;code&gt;EXPLAIN&lt;/code&gt;, &lt;code&gt;SHOW TABLES&lt;/code&gt;, etc.
Also, &lt;code&gt;pmm-agent&lt;/code&gt; has an internal &lt;code&gt;supervisor&lt;/code&gt; that, like the popular Python &lt;a href="http://supervisord.org/" target="_blank" rel="noopener noreferrer"&gt;supervisord&lt;/a&gt; project, runs processes (agents) and manages them.&lt;/p&gt;
&lt;p&gt;How does pmm-agent know which parameters should be used to run each exporter? That’s where &lt;code&gt;pmm-managed&lt;/code&gt; gets involved. &lt;code&gt;pmm-managed&lt;/code&gt; is the PMM API server; it sends and receives commands from the UI or the command line client, prepares the messages, and delivers them to the proper &lt;code&gt;pmm-agent&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;As a general rule, all agents are defined in &lt;code&gt;pmm-managed&lt;/code&gt;’s &lt;code&gt;services/agents&lt;/code&gt; directory.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/../assets/blog/2021/10/directory.png" alt="directory" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In our case, we want to modify how we start the &lt;code&gt;node_exporter&lt;/code&gt; so we need to modify the &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/node.go" target="_blank" rel="noopener noreferrer"&gt;services/agents/node.go&lt;/a&gt; file.
The &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/node.go#L31" target="_blank" rel="noopener noreferrer"&gt;nodeExporterConfig&lt;/a&gt; is defined as follows:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;nodeExporterConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Node&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;exporter&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;agentpb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SetStateRequest_AgentProcess&lt;/span&gt; &lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;and the returned structure is defined in the pmm repository, which holds all the definitions for PMM.
The &lt;a href="https://github.com/percona/pmm/blob/PMM-2.0/api/agentpb/agent.proto#L54-L62" target="_blank" rel="noopener noreferrer"&gt;AgentProcess&lt;/a&gt; message has these fields:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="nx"&gt;AgentProcess&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;inventory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AgentType&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nx"&gt;template_left_delim&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nx"&gt;template_right_delim&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;repeated&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;repeated&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;&lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;&gt;&lt;/span&gt; &lt;span class="nx"&gt;text_files&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;repeated&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nx"&gt;redact_words&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Currently, our node_exporter fork receives the user name and password used for the exporter’s basic auth via the &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/node.go#L135-L137" target="_blank" rel="noopener noreferrer"&gt;HTTP_AUTH&lt;/a&gt; environment var:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;fmt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"HTTP_AUTH=pmm:%s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;exporter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AgentID&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;},&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;From the Prometheus exporter-toolkit package, we can see it receives its configuration via a file specified with the &lt;code&gt;--web.config&lt;/code&gt; parameter, and the example config tells us we also need to hash the password with bcrypt:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c"&gt;# Usernames and hashed passwords that have full access to the web&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="c"&gt;# server via basic authentication. If empty, no basic authentication is&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="c"&gt;# required. Passwords are hashed with bcrypt.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;basic_auth_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;[ &lt;string&gt;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;&lt;secret&gt; ... ]&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So, we need to update the &lt;code&gt;nodeExporterConfig&lt;/code&gt; function to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Update the parameters sent to &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/node.go#L130-L138" target="_blank" rel="noopener noreferrer"&gt;pmm-agent&lt;/a&gt; to make the exporter receive the new configuration file.&lt;/li&gt;
&lt;li&gt;Update the node config response to include the new files and remove unused env vars.&lt;/li&gt;
&lt;li&gt;Last but not least, update the tests.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;First, we need to create a new configuration file and make &lt;code&gt;node_exporter&lt;/code&gt; use it. But how? The node exporter runs on the client machine while pmm-managed runs on the PMM server, so at first glance it is not as easy as writing a file and updating the parameters. It is, though: we can look at another exporter’s config definition to see how it receives the TLS certificate files, and do the same for the web.config file. Let’s take a look at the mysql_exporter.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/mysql.go#L33" target="_blank" rel="noopener noreferrer"&gt;mysqlExporterConfig&lt;/a&gt; method returns all the parameters needed to call the exporter. The &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/mysql.go#L131" target="_blank" rel="noopener noreferrer"&gt;TextFiles&lt;/a&gt; parameter is built &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/services/agents/mysql.go#L100-L113" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and for each file there is a matching exporter file parameter. For example, the &lt;code&gt;--mysql.ssl-ca-file=&lt;/code&gt; parameter receives:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tdp.Left+" .TextFiles.tlsCa "+tdp.Right&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This might look complicated, but &lt;strong&gt;tdp&lt;/strong&gt; stands for &lt;strong&gt;T&lt;/strong&gt;emplate &lt;strong&gt;D&lt;/strong&gt;elimiter &lt;strong&gt;P&lt;/strong&gt;air, and it is just a helper that chooses the correct delimiters in case this value is used in a template. The rest is just a parameter, in this case the TLS CA file (the file contents, not just the name).&lt;/p&gt;
&lt;h2 id="implementing-the-changes"&gt;Implementing the changes.&lt;/h2&gt;
&lt;h3 id="1-return-files-for-node_exporter-type"&gt;1. Return files for node_exporter type&lt;/h3&gt;
&lt;p&gt;The current node_exporter in PMM is a fork that doesn’t use the exporter-toolkit package for the HTTP server; it uses Percona’s exporter_shared instead. So, to make the upstream exporter behave like the forked one, we need to create and pass a file via the &lt;code&gt;--web.config&lt;/code&gt; parameter containing the basic authentication parameters we use to protect the metrics endpoint.&lt;/p&gt;
&lt;p&gt;In pmm-managed agent_model’s &lt;a href="https://github.com/percona/pmm-managed/blob/PMM-2.0/models/agent_model.go#L562" target="_blank" rel="noopener noreferrer"&gt;Files()&lt;/a&gt; function, we need to return the list of files for &lt;code&gt;NodeExporterType&lt;/code&gt; and we are going to write a new function to build the config file (&lt;code&gt;buildWebConfigFile&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;webConfigFilePlaceholder&lt;/code&gt; is just a string constant used to identify the different files that can be passed to the agents.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;const (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; // webConfigFile is the Prometheus HTTP Toolkit's web.config file.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; // It the basic auth parameters we need to set for node exporter.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; // All other exporters are using exporter shared but after going back to the upstream
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; // version of node_exporter, we need to pass this file to pmm-agent.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; webConfigFilePlaceholder = "webConfigPlaceholder"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; case NodeExporterType:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; return map[string]string{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; webConfigFilePlaceholder: s.buildWebConfigFile(s.GetAgentPassword()),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;buildWebConfigFile&lt;/code&gt; function returns the file contents needed to specify the user and password for the exporter’s basic auth.&lt;/p&gt;
&lt;p&gt;According to the &lt;a href="https://github.com/prometheus/exporter-toolkit/tree/v0.1.0/https" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, the password must be hashed, so our function receives a plain-text password and returns the configuration file contents with the password hashed with bcrypt.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;func (s *Agent) buildWebConfigFile(password string) string {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; buf, err := bcrypt.GenerateFromPassword([]byte(password), 14)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if err != nil {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; log.Fatal(err, "cannot encrypt basic auth password")
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; content := "basic_auth_users:" + "\n" +
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;        "    pmm: " + string(buf) + "\n"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; return content
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="2-update-the-node-service"&gt;2. Update the node service&lt;/h3&gt;
&lt;p&gt;In &lt;code&gt;services/agents/node.go&lt;/code&gt; there is a &lt;code&gt;nodeExporterConfig&lt;/code&gt; function which is called to get the exporter configuration.&lt;/p&gt;
&lt;p&gt;The new implementation should get the files the exporter is going to use and remove the now unused environment variables.&lt;/p&gt;
&lt;p&gt;The code will look like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; files := exporter.Files()
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; for k := range files {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; switch k {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; case "webConfigPlaceholder":
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; // see https://github.com/prometheus/exporter-toolkit/tree/v0.1.0/https
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; args = append(args, "--web.config="+tdp.Left+" .TextFiles.webConfigPlaceholder "+tdp.Right)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; default:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; continue
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sort.Strings(args)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; return &amp;agentpb.SetStateRequest_AgentProcess{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Type: inventorypb.AgentType_NODE_EXPORTER,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; TemplateLeftDelim: tdp.Left,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; TemplateRightDelim: tdp.Right,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Args: args,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Env: []string{},
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; TextFiles: files,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="3-updating-tests"&gt;3. Updating tests&lt;/h3&gt;
&lt;p&gt;Since we changed the response of the &lt;code&gt;nodeExporterConfig&lt;/code&gt; method (we now return files and have removed the environment variables), the tests will fail.&lt;/p&gt;
&lt;p&gt;We need to update the tests at &lt;code&gt;services/agents/node_test.go&lt;/code&gt; to reflect the changes. I am not going to get into the details because they are trivial, but I do want to mention that, since bcrypt produces a different hash on every run, I am only checking that the &lt;code&gt;Files()&lt;/code&gt; method (&lt;code&gt;agent_model.go&lt;/code&gt;) returns a file.&lt;/p&gt;
&lt;p&gt;If you have never run the tests for pmm-managed, here are the only two steps needed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;make env-up&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;make env TARGET=test&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;With those two commands, you can run the whole test suite to make sure the changes don’t break anything.&lt;/p&gt;
&lt;p&gt;I hope I was able to explain at least the basics of how to make changes in PMM.
Remember, you can contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; and we will be glad to answer your questions.&lt;/p&gt;
      <author>Carlos Salguero</author>
      <category>node_exporter</category>
      <category>exporter</category>
      <category>pmm</category>
      <media:thumbnail url="https://percona.community/assets/blog/2021/10/directory_hu_4c042915442984b0.jpg"/>
      <media:content url="https://percona.community/assets/blog/2021/10/directory_hu_767ada9cff1b08e9.jpg" medium="image"/>
    </item>
    <item>
      <title>2.23.0 Preview Release (Updated!)</title>
      <link>https://percona.community/blog/2021/10/15/preview-release/</link>
      <guid>https://percona.community/blog/2021/10/15/preview-release/</guid>
      <pubDate>Fri, 15 Oct 2021 00:00:00 UTC</pubDate>
      <description>Update Percona Monitoring and Management 2.23.0 is now available as a Public Release!</description>
      <content:encoded>&lt;h2 id="update"&gt;Update&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.23.0 is now available as a Public Release!&lt;/p&gt;
&lt;p&gt;Release notes for Percona Monitoring and Management 2.23.0 Public Release can be found &lt;a href="https://per.co.na/pmm/2.23.0" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="2230-preview-release"&gt;2.23.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.23.0 is released today as a Preview Release.&lt;/p&gt;
&lt;p&gt;PMM team really appreciates your feedback!&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments&lt;/strong&gt; only, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Known issue:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-8983" target="_blank" rel="noopener noreferrer"&gt;PMM-8983&lt;/a&gt; - DBaaS: PXC cluster is displayed as active after suspend&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Release notes:
A preview of the release notes can be found &lt;a href="https://deploy-preview-610--pmm-doc.netlify.app/release-notes/2.23.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker"&gt;PMM server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag: &lt;a href="https://hub.docker.com/layers/percona/pmm-server/2.23.0/images/sha256-ff0bb20cba0dbfcc8929dbbba0558bb01acc933ec593717727707dce083441b4?context=explore" target="_blank" rel="noopener noreferrer"&gt;percona/pmm-server:2.23.0&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.23.0 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-latest-3126.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable original testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3-website-us-east-1.amazonaws.com/PMM2-Server-2021-10-14-2120.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2021-10-14-2120.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact id: &lt;code&gt;ami-047173e7a14c3f287&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in OpenSource! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Taras Kozub</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/blog/2021/10/super_hero_sloth_hu_8a1e547f9c5b81d0.jpg"/>
      <media:content url="https://percona.community/blog/2021/10/super_hero_sloth_hu_c66f52f5f7da3ea3.jpg" medium="image"/>
    </item>
    <item>
      <title>2.22.0 Preview Release</title>
      <link>https://percona.community/blog/2021/09/16/preview-release/</link>
      <guid>https://percona.community/blog/2021/09/16/preview-release/</guid>
      <pubDate>Thu, 16 Sep 2021 00:00:00 UTC</pubDate>
      <description>2.22.0 Preview Release Percona Monitoring and Management 2.22.0 is released today as a Preview Release.</description>
      <content:encoded>&lt;h2 id="2200-preview-release"&gt;2.20.0 Preview Release&lt;/h2&gt;
&lt;p&gt;Percona Monitoring and Management 2.22.0 is released today as a Preview Release.&lt;/p&gt;
&lt;p&gt;PMM team really appreciates your feedback!&lt;/p&gt;
&lt;p&gt;We encourage you to try this PMM Preview Release in &lt;strong&gt;testing environments&lt;/strong&gt; only, as these packages and images are not fully production-ready. The final version is expected to be released through the standard channels in the coming week.&lt;/p&gt;
&lt;p&gt;Known issue:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-8829" target="_blank" rel="noopener noreferrer"&gt;PMM-8829&lt;/a&gt; - “Missing Listen Port” error for external exporters after restart&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Release notes:
A preview of the release notes can be found &lt;a href="https://deploy-preview-588--pmm-doc.netlify.app/release-notes/2.22.0.html" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-server-docker"&gt;PMM server docker&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/docker.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;docker tag: &lt;code&gt;perconalab/pmm-server:2.22.0-rc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://hub.docker.com/layers/perconalab/pmm-server/2.22.0-rc/" target="_blank" rel="noopener noreferrer"&gt;https://hub.docker.com/layers/perconalab/pmm-server/2.22.0-rc/&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="pmm-client-package-installation"&gt;PMM client package installation&lt;/h3&gt;
&lt;p&gt;Download the latest pmm2-client Release Candidate tarball for 2.22.0 from this &lt;a href="https://s3.us-east-2.amazonaws.com/pmm-build-cache/PR-BUILDS/pmm2-client/pmm2-client-PR-2003-7917413.tar.gz" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you want to install the pmm2-client package, please enable the testing repository via percona-release:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;percona-release enable original testing&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Then install the pmm2-client package for your OS via your package manager.&lt;/p&gt;
&lt;h3 id="ova"&gt;OVA&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/virtual-appliance.html" target="_blank" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact: &lt;a href="http://percona-vm.s3-website-us-east-1.amazonaws.com/PMM2-Server-2021-09-14-1514.ova" target="_blank" rel="noopener noreferrer"&gt;PMM2-Server-2021-09-14-1514.ova&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="ami"&gt;AMI&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/doc/percona-monitoring-and-management/2.x/setting-up/server/aws.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Artifact id: &lt;code&gt;ami-0a6b861c9225afbd8&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please also check out our Engineering Monthly Meetings &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt; and join us on our journey in OpenSource! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; .&lt;/p&gt;</content:encoded>
      <author>Denys Kondratenko</author>
      <category>PMM</category>
      <media:thumbnail url="https://percona.community/superhero_hu_252fc2b480c0a197.jpg"/>
      <media:content url="https://percona.community/superhero_hu_17979f11d5d3562e.jpg" medium="image"/>
    </item>
    <item>
      <title>The lost art of Database Server Initialization.</title>
      <link>https://percona.community/blog/2021/09/06/lost-art-of-database-server-initialization/</link>
      <guid>https://percona.community/blog/2021/09/06/lost-art-of-database-server-initialization/</guid>
      <pubDate>Mon, 06 Sep 2021 00:00:00 UTC</pubDate>
      <description>With all the DBaaS, IaaS and PaaS environments, sometimes I think the Art of MySQL initialization is becoming a lost art. Many times we just delete the MySQL Server and order a new one.</description>
      <content:encoded>&lt;p&gt;With all the DBaaS, IaaS and PaaS environments, sometimes I think the Art of MySQL initialization is becoming a lost art. Many times we just delete the MySQL Server and order a new one.&lt;/p&gt;
&lt;p&gt;Just recently I was talking with a colleague, and this subject came up. We both thought about it and decided we have become spoiled by automation. We were both rusty on this process. This gave me the idea for this post.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/09/lostart-01.png" alt="lostart-10" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;You might be wondering: why initialize MySQL again? Let’s say you wanted MySQL Server 8.0 not to use mixed case, yet when the database was initialized the default setting of &lt;code&gt;lower_case_table_names = 0&lt;/code&gt; was used. With 8.0 you can’t simply set &lt;code&gt;lower_case_table_names = 1&lt;/code&gt; in my.cnf and restart MySQL. It won’t work, leaving you with two options: initialize MySQL a second time, or order a new environment.&lt;/p&gt;
&lt;p&gt;Let’s look at the steps we would need to change the MySQL server to support only lower case.&lt;/p&gt;
&lt;p&gt;You may want to take a backup before you begin these steps if you have already loaded data that you wish to keep.&lt;/p&gt;
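&lt;p&gt;For reference, the option you will be adding belongs under the &lt;code&gt;[mysqld]&lt;/code&gt; section of my.cnf:&lt;/p&gt;

```ini
[mysqld]
lower_case_table_names=1
```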
&lt;h2 id="the-steps"&gt;The Steps&lt;/h2&gt;
&lt;p&gt;The steps below assume you are working with a default MySQL
server installation. Modify as needed for a custom installation. One word of caution: please don’t use root to run the commands below. Prefix them with &lt;code&gt;sudo&lt;/code&gt; instead; this adds an extra layer of safety by not working as root.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Stop the MySQL Server. &lt;code&gt;$ systemctl stop mysqld&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You will need to delete everything out of your current data directory.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ rm -fR /var/lib/mysql*&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit your my.cnf file and add: &lt;code&gt;lower_case_table_names=1&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ vi /etc/my.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/09/lostart-02.png" alt="lostart-10" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now initialize MySQL.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ /usr/sbin/mysqld --initialize --user=mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the temporary root password from the &lt;code&gt;mysqld.log&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat /var/log/mysqld.log | grep password&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2021/09/lostart-03_hu_5141778ed22a6fd5.png 480w, https://percona.community/blog/2021/09/lostart-03_hu_1c375d4cb25f1391.png 768w, https://percona.community/blog/2021/09/lostart-03_hu_649f1469cd09bbe4.png 1400w"
src="https://percona.community/blog/2021/09/lostart-03.png" alt="lostart-10" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;If you don’t find the temporary password for the root user, review the steps above to make sure you did not miss something.&lt;/p&gt;
&lt;ol start="6"&gt;
&lt;li&gt;Start MySQL.
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ systemctl start mysqld&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;Verify MySQL is running.
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat /var/log/mysqld.log&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/09/lostart-04.png" alt="lostart-10" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now you should be able to log into MySQL using the temporary password you got in step 5.&lt;/p&gt;
&lt;p&gt;There could be many more reasons to re-initialize a MySQL database; this is just one example.
Automation is great. Just remember to pull out your command-line tools now and then, so they don’t get too rusty.&lt;/p&gt;
      <author>Wayne Leutwyler</author>
      <category>Percona</category>
      <category>MySQL</category>
      <category>Recovery</category>
      <category>Installation</category>
      <media:thumbnail url="https://percona.community/blog/2021/09/lostart-01_hu_62162eb2becac880.jpg"/>
      <media:content url="https://percona.community/blog/2021/09/lostart-01_hu_6c77c8b753b5f4a8.jpg" medium="image"/>
    </item>
    <item>
      <title>Humans need not apply</title>
      <link>https://percona.community/blog/2021/08/19/humans-need-not-apply-tarantool-ansible/</link>
      <guid>https://percona.community/blog/2021/08/19/humans-need-not-apply-tarantool-ansible/</guid>
      <pubDate>Thu, 19 Aug 2021 00:00:00 UTC</pubDate>
      <description>Hi, my name is Roman Proskin. I work at Mail.Ru Group and develop high-performance applications on Tarantool, which is an in-memory computing platform.</description>
      <content:encoded>&lt;p&gt;Hi, my name is Roman Proskin. I work at Mail.Ru Group and develop high-performance applications on Tarantool, which is an in-memory computing platform.&lt;/p&gt;
&lt;p&gt;In this article, I will explain how we built the automated process of deploying Tarantool apps. It allows updating the codebase in production without any downtime or outages. I will describe the problems we faced and the solutions we found in the process. I hope that &lt;em&gt;our&lt;/em&gt; experience will be useful for &lt;em&gt;your&lt;/em&gt; deployments.&lt;/p&gt;
&lt;p&gt;It’s not that hard to deploy an application. Our &lt;strong&gt;cartridge-cli&lt;/strong&gt; tool (&lt;a href="https://github.com/tarantool/cartridge-cli" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;) lets you deploy cluster applications within a couple of minutes – for instance, in Docker. However, it is &lt;em&gt;much&lt;/em&gt; harder to turn a small-scale solution into a full-fledged product, one that would handle hundreds of instances and be used by dozens of teams of various levels.&lt;/p&gt;
&lt;p&gt;Our deployment is based on a simple idea: set up two hardware servers, run an instance on each server, join the instances in a single replica set, and update them one by one. However, when it comes to deploying a production system with terabytes of unique data — palms are sweaty, knees weak, arms are heavy, there’s vomit on the sweater already, code’s spaghetti. The database might be on the verge of collapse.&lt;/p&gt;
&lt;h2 id="initial-conditions"&gt;Initial Conditions&lt;/h2&gt;
&lt;p&gt;There is a strict SLA for the project: 99% uptime is required, with planned downtime counted toward the remaining 1%. This means that there are about 87 hours each year when we are allowed to &lt;em&gt;not respond&lt;/em&gt; to requests. It seems like a big number, &lt;em&gt;but&lt;/em&gt;…&lt;/p&gt;
&lt;p&gt;The project is targeting about 1.8 TB of data, so a mere restart would take as much as 40 minutes! Add the time for the manual update itself on top of that. Three updates a week take 40*3*52/60 = &lt;strong&gt;104 hours&lt;/strong&gt;, &lt;em&gt;which breaks the SLA&lt;/em&gt;. And this is only &lt;em&gt;planned&lt;/em&gt; maintenance work. What about the outages that are surely going to happen?&lt;/p&gt;
&lt;p&gt;As the application is designed with heavy user load in mind, it has to be very stable. We don’t want to lose data if a node dies. So we divided our cluster geographically, using machines in two data centers. This deployment mechanism makes sure that the SLA is not violated. Updates are rolled out on groups of instances in different data centers, not on all of them at once. During the updates, we transfer the load to the other data center, so the cluster remains writable. This classical deployment strategy is a standard disaster recovery practice.&lt;/p&gt;
&lt;p&gt;One of the key elements of downtime-free deployment is the ability to update instances one data center at a time. I will explain more about that process by the end of the article. Now, let’s focus on our automated deployment and the challenges associated with it.&lt;/p&gt;
&lt;h2 id="challenges"&gt;Challenges&lt;/h2&gt;
&lt;h3 id="moving-traffic-across-the-street"&gt;Moving Traffic Across the Street&lt;/h3&gt;
&lt;p&gt;There are several data centers and requests may hit any of them. Retrieving data from another data center increases the response time by 1–100 milliseconds. To avoid sending user traffic back and forth between the two data centers, we tagged them as &lt;em&gt;active&lt;/em&gt; and &lt;em&gt;standby&lt;/em&gt;. We configured the &lt;strong&gt;nginx&lt;/strong&gt; balancer so that all the traffic would always be directed to the active data center. If Tarantool in the active data center failed or became unavailable, the traffic would go to the standby data center instead.&lt;/p&gt;
&lt;p&gt;Every user request matters, so we needed to ensure that every connection would be maintained. For that we wrote a special Ansible playbook that switches the traffic between the data centers. The switch is implemented through the &lt;code&gt;backup&lt;/code&gt; option of every server’s &lt;code&gt;upstream&lt;/code&gt; directive. The servers that have to become active are defined with the ansible-playbook &lt;code&gt;--limit&lt;/code&gt; flag. The other servers are marked as &lt;code&gt;backup&lt;/code&gt;, and nginx will only direct traffic at them if &lt;em&gt;all active servers&lt;/em&gt; are unavailable. If there are open connections during a configuration change, they will not be closed, and &lt;em&gt;new&lt;/em&gt; requests will be redirected to the routers that haven’t been restarted because of the change.&lt;/p&gt;
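&lt;p&gt;As a rough sketch of that setup (server names and ports here are hypothetical), the standby data center’s routers are listed with the &lt;code&gt;backup&lt;/code&gt; option, so nginx only sends traffic to them when every active server is down:&lt;/p&gt;

```nginx
upstream tarantool_routers {
    # Active data center: receives all traffic under normal operation
    server dc1-router1:8081;
    server dc1-router2:8081;
    # Standby data center: used only if all active servers are unavailable
    server dc2-router1:8081 backup;
    server dc2-router2:8081 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://tarantool_routers;
    }
}
```

Switching which data center is active then amounts to regenerating this block with the roles swapped and reloading nginx, which is what the playbook automates.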
&lt;p&gt;What if there is no external balancer in the infrastructure? You can write your own balancer in Java to monitor the availability of Tarantool instances. However, that separate subsystem also requires deployment. Another option is to embed the switch mechanism in the routers. Whatever the case may be, you have to control HTTP traffic.&lt;/p&gt;
&lt;p&gt;OK, we configured nginx, but this is not our only challenge. We also have to rotate masters in replica sets. As I mentioned above, data &lt;em&gt;must&lt;/em&gt; be kept close to the routers to avoid external retrievals whenever possible. Moreover, if the current master (the writable storage instance) dies, the failover mechanism is not launched instantly. First, the cluster has to come to a group decision to declare the instance unavailable. During that time, all requests to the data in question fail. To solve this problem, we developed another playbook that sends GraphQL requests to the cluster API.&lt;/p&gt;
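&lt;p&gt;For illustration, a cluster status query against Cartridge’s GraphQL endpoint could look like the sketch below. The &lt;code&gt;/admin/api&lt;/code&gt; path is Cartridge’s default GraphQL endpoint; the host name is hypothetical, and the exact mutations our playbook sends for master rotation are project-specific:&lt;/p&gt;

```shell
# Build a GraphQL query asking each cluster server for its URI and status.
payload='{"query": "{ servers { uri status } }"}'
# Against a live cluster you would POST it to a router, e.g.:
#   curl -s -X POST -H 'Content-Type: application/json' \
#        -d "$payload" http://dc1-router1:8081/admin/api
echo "$payload"
```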
&lt;p&gt;The mechanisms to rotate masters and switch user traffic are two remaining key elements of downtime-free deployment. A controlled balancer helps avoid connection loss and user request processing errors. Master rotation helps eliminate data access errors. These techniques, along with branch-wise updates, form the three pillars of failsafe deployment, which we later automated.&lt;/p&gt;
&lt;h3 id="legacy-strikes-back"&gt;Legacy Strikes Back&lt;/h3&gt;
&lt;p&gt;Our client had a custom deployment solution – Ansible roles with step-by-step instance deployment and configuration. Then we arrived with the magic &lt;strong&gt;ansible-cartridge&lt;/strong&gt; (&lt;a href="https://github.com/tarantool/ansible-cartridge" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;) that solves all the problems. We just didn’t factor in that ansible-cartridge is a monolith. It is a single huge role with a lot of stages, divided into smaller tasks and marked all over by tags.&lt;/p&gt;
&lt;p&gt;To use ansible-cartridge efficiently, we had to alter the process of artifact delivery, reconsider directory structure on target machines, switch to a different orchestrator, and make other changes. I spent a whole month on improving the deployment solution with ansible-cartridge. But the monolithic role just didn’t fit in with the existing custom playbooks. While I was struggling to make use of the role that way, my colleague asked me a fair question, “Do we even want that?”&lt;/p&gt;
&lt;p&gt;So we found another way – we put cluster configuration into a separate playbook. Specifically, that playbook was responsible for joining storage instances into replica sets, vshard (cluster data sharding mechanism) bootstrapping, and failover configuration (automated master rotation if the current master dies). These are the final stages of deployment that take place when all the instances are already running.&lt;/p&gt;
&lt;p&gt;Unfortunately, we had to keep all the other deployment stages unchanged.&lt;/p&gt;
&lt;h3 id="choosing-an-orchestrator"&gt;Choosing an Orchestrator&lt;/h3&gt;
&lt;p&gt;If the code on the servers can’t be launched, it’s useless. We needed a utility to start and stop Tarantool instances. There are tasks in ansible-cartridge that can create systemctl service files and work with RPM packages. However, our client had a closed network and no sudo privileges, which meant that we could not use systemctl.&lt;/p&gt;
&lt;p&gt;Soon we found &lt;strong&gt;supervisord&lt;/strong&gt;, an orchestrator that didn’t require root privileges all the time. We had to pre-install it on all the servers and solve local problems with socket file access. To set up supervisord, we wrote a separate Ansible role, which created configuration files, updated the configuration, launched and stopped instances. That was enough to roll out into production.&lt;/p&gt;
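&lt;p&gt;A minimal supervisord program section for one instance could look like this (paths and names are hypothetical; the real Ansible role templates such a file per instance):&lt;/p&gt;

```ini
; supervisord itself runs as the unprivileged tarantool user, so no root is needed
[program:myapp-storage-1]
command=/data/tarantool/myapp/current/tarantool init.lua
directory=/data/tarantool/myapp
autostart=true
autorestart=true
stdout_logfile=/data/tarantool/logs/myapp-storage-1.log
redirect_stderr=true
```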
&lt;p&gt;Supervisord application launch was added to ansible-cartridge for the sake of experiment. That method, however, proved less flexible and is currently awaiting improvement in a designated branch.&lt;/p&gt;
&lt;h3 id="reducing-load-time"&gt;Reducing Load Time&lt;/h3&gt;
&lt;p&gt;Whatever orchestrator we use, we cannot wait for an hour every time an instance has to boot. The threshold is 20 minutes. If an instance remains unavailable for a longer time, it will be reported to the incident management system. Frequent failures impact team KPIs and may sabotage system development plans. We didn’t want to lose our bonus because of a scheduled deployment, so we needed to keep the boot time within 20 minutes at all costs.&lt;/p&gt;
&lt;p&gt;Here is a fact: load time directly correlates with the amount of data. The more information has to be restored from the disk into the RAM, the longer it takes for the instance to start after an update. Consider also that storage instances on one machine will compete for resources, as Tarantool builds indexes using all processor cores.&lt;/p&gt;
&lt;p&gt;Our observations show that an instance’s &lt;code&gt;memtx_memory&lt;/code&gt; must not exceed 40 GB. This optimal size makes sure that instance recovery takes less than 20 minutes. The number of instances on a server is calculated separately and is closely linked to the project infrastructure.&lt;/p&gt;
&lt;h3 id="setting-up-monitoring"&gt;Setting Up Monitoring&lt;/h3&gt;
&lt;p&gt;Every system, including Tarantool, has to be monitored. However, we did not set up monitoring right away. &lt;strong&gt;It took us three months to obtain access rights, get approvals, and configure the environment.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While developing our application and writing playbooks, we touched up the &lt;strong&gt;metrics&lt;/strong&gt; module (&lt;a href="https://github.com/tarantool/metrics" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;). Global labels now allow separating out metrics by instance name. We also developed a special &lt;a href="https://github.com/tarantool/metrics#cartridge-role" target="_blank" rel="noopener noreferrer"&gt;role&lt;/a&gt; to integrate Tarantool cluster application metrics with monitoring systems. Besides, we introduced a new useful metric, &lt;a href="https://habr.com/ru/company/mailru/blog/529456/" target="_blank" rel="noopener noreferrer"&gt;&lt;em&gt;quantile&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now we can see the current number of requests to the system, memory usage data, the replication lag, and many other key metrics. Chat notification alerts are set up for all of them. The incident management system records critical issues, and there is a strict SLA for resolving them.&lt;/p&gt;
&lt;p&gt;Let’s talk more about our monitoring tools. &lt;strong&gt;etcd&lt;/strong&gt; specifies the logs to collect and provides a full description of where and how to collect them, and the &lt;strong&gt;telegraf&lt;/strong&gt; agent takes its cues from there. JSON metrics are stored in &lt;strong&gt;InfluxDB&lt;/strong&gt;. We visualized data with &lt;strong&gt;Grafana&lt;/strong&gt; and even created a &lt;a href="https://grafana.com/grafana/dashboards/13054" target="_blank" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; template for it. Finally, alerts are configured with &lt;strong&gt;kapacitor&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Of course, this is not the only monitoring implementation that works. You can use &lt;strong&gt;Prometheus&lt;/strong&gt;, especially since the metrics module can yield values in a compatible format. Alerts can be also configured with &lt;strong&gt;Zabbix&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To learn more about Tarantool monitoring setup, read my colleague’s article &lt;a href="https://habr.com/ru/company/mailru/blog/534826/" target="_blank" rel="noopener noreferrer"&gt;Tarantool Monitoring: Logs, Metrics, and Their Processing&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="enabling-logging"&gt;Enabling Logging&lt;/h3&gt;
&lt;p&gt;Simply monitoring the system is not enough. To see the big picture, you have to collect all diagnostic insights, including logs. Higher logging levels yield more debug information but also produce larger log files.&lt;/p&gt;
&lt;p&gt;However, disk space is finite. At peak load, our application could generate up to 1 TB of logs per day. Of course, we could add more disks, but sooner or later, we would run out of either free space or project budget. Yet we didn’t want to wipe debug information completely. So what did we do?&lt;/p&gt;
&lt;p&gt;One of the stages of our deployment was to configure &lt;strong&gt;logrotate&lt;/strong&gt;. It allowed us to store a couple of 100 MB uncompressed log files and a couple more compressed ones, which is enough to pinpoint a local issue within 24 hours under normal operations. The logs are stored in a designated directory in JSON format. All the servers are running the &lt;strong&gt;filebeat&lt;/strong&gt; daemon, which collects application logs and sends them to &lt;strong&gt;ElasticSearch&lt;/strong&gt; for long-term storage. This approach helps avoid disk overflow errors and allows analyzing system operations in case of persistent problems. It also integrates well with the deployment.&lt;/p&gt;
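&lt;p&gt;The logrotate policy described above boils down to a short stanza along these lines (the log path is hypothetical):&lt;/p&gt;

```
/data/tarantool/logs/*.log {
    size 100M
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
}
```

With &lt;code&gt;delaycompress&lt;/code&gt;, the most recently rotated file stays uncompressed, which keeps a couple of plain-text files on hand for quick grepping while older ones are compressed.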
&lt;h3 id="scaling-the-solution"&gt;Scaling the Solution&lt;/h3&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2021/08/human-2_hu_28ef8337b564d094.jpg 480w, https://percona.community/blog/2021/08/human-2_hu_b49864fbe73a7c9d.jpg 768w, https://percona.community/blog/2021/08/human-2_hu_8ff4713ccdd1669b.jpg 1400w"
src="https://percona.community/blog/2021/08/human-2.jpg" alt="Scaling the Solution" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Our path was a long and rocky one, and we learned a lot by trial and error. To avoid making the same mistakes, we standardized our deployment, relying on the CI/CD formula of Gitlab + Jenkins. Scaling was also challenging – it took us months to debug our solution. Still, we tackled all the problems and are now ready to share our experience with you. Let’s do it step by step.&lt;/p&gt;
&lt;p&gt;How do we make sure that any developer can quickly put together a solution for their problem and deliver it to production? Take the Jenkinsfile away from them! We have to set firm boundaries and disallow deployment if the developer violates them. We created and rolled out to production a full-fledged example application, which serves as the perfect zero app. With our client, we went even further and wrote a utility for template creation that automatically configures the Git repo and Jenkins jobs. As a result, the developer needs less than an hour to get ready and push their project to production.&lt;/p&gt;
&lt;p&gt;The pipeline begins with standard code checkout and environment setup. We add inventories for further deployment to a number of test zones and to production. Then the unit tests begin.&lt;/p&gt;
&lt;p&gt;We use the standard Tarantool &lt;strong&gt;luatest&lt;/strong&gt; framework (&lt;a href="https://github.com/tarantool/luatest" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;), which allows writing both unit and integration tests. It also has modules for launching and configuring &lt;a href="https://www.tarantool.io/en/doc/latest/getting_started/getting_started_cartridge/" target="_blank" rel="noopener noreferrer"&gt;Tarantool Cartridge&lt;/a&gt;. Code coverage checking can be enabled in the most recent versions of luatest. To run it, execute the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.rocks/bin/luatest --coverage&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After the tests are over, the statistical data is sent to &lt;strong&gt;SonarQube&lt;/strong&gt;, a piece of software for code quality assurance and security checking. We have a Quality Gate configured inside it. Any code in the application, regardless of the language (Lua, Python, SQL, etc.), is subject to checking. However, SonarQube lacks a built-in Lua processor. Therefore, to provide coverage in a generic format, we have to install special modules before the tests.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tarantoolctl rocks install luacov 0.13.0-1 # coverage collection utility
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tarantoolctl rocks install luacov-reporters 0.1.0-1 # additional reports&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To get a simple console view, execute:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.rocks/bin/luacov -r summary . &amp;&amp; cat ./luacov.report.out&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To form a SonarQube report, run the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.rocks/bin/luacov -r sonar .&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After that, the linter is launched. We use &lt;strong&gt;luacheck&lt;/strong&gt; (&lt;a href="https://github.com/mpeterv/luacheck" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;), which is also a Tarantool module.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tarantoolctl rocks install luacheck 0.26.0-1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Linter results are also sent to SonarQube.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.rocks/bin/luacheck --config .luacheckrc --formatter sonar *.lua&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Code coverage and linter statistics are both taken into account. To pass the Quality Gate, all of the following must be true:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Code coverage is no less than 80%.&lt;/li&gt;
&lt;li&gt;The changes do not introduce any new code smells.&lt;/li&gt;
&lt;li&gt;There are 0 critical errors in total.&lt;/li&gt;
&lt;li&gt;There are no more than 5 minor errors.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After the code passes the Quality Gate, we have to assemble the artifact. As we decided that all applications would be using Tarantool Cartridge, we build the artifact with &lt;strong&gt;cartridge-cli&lt;/strong&gt; (&lt;a href="https://github.com/tarantool/cartridge-cli" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;). This small utility lets you run (in fact, develop) Tarantool cluster applications locally. It can also create Docker images and archives from application code, both locally and in Docker – for instance, if you have to build an artifact for a different infrastructure. To assemble a &lt;code&gt;tar.gz&lt;/code&gt; archive, run the following command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cartridge pack tgz --name &lt;name&gt; --version &lt;version&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The resulting archive can then be uploaded to any repository, like &lt;strong&gt;Artifactory&lt;/strong&gt; or &lt;a href="https://mcs.mail.ru/storage/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Mail.ru Cloud Storage&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="downtime-free-deployment"&gt;Downtime-free Deployment&lt;/h2&gt;
&lt;p&gt;The final step of the pipeline is deployment itself. The code is deployed to one of several test zones based on branch merge status. One zone is designated for testing small improvements – every push to the repository triggers the whole pipeline. Other functional zones can be used to test compatibility with external systems. This requires a merge request to the repo &lt;em&gt;master&lt;/em&gt; branch. As for production deployment, it can only be launched after all changes are accepted and merged.&lt;/p&gt;
&lt;p&gt;To summarize, here are the key elements of our downtime-free deployment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Roll out updates one data center at a time&lt;/li&gt;
&lt;li&gt;Rotate masters in replica sets&lt;/li&gt;
&lt;li&gt;Configure the load balancer to direct traffic to the active data center&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is important to maintain version and schema compatibility during updates. If there is an error at any stage, the update stops.&lt;/p&gt;
&lt;p&gt;Here is what the update process looks like:&lt;/p&gt;
&lt;pre class="mermaid"&gt;
sequenceDiagram
Jenkins-&gt;&gt;Data center 2: become master
Data center 2--&gt;&gt;Jenkins: OK
Jenkins-&gt;&gt;Data center 2: nginx: switch traffic
Data center 2--&gt;&gt;Jenkins: OK
Jenkins-&gt;&gt;Data center 1: update application version
activate Data center 1
NOTE right of Data center 1: new version &lt;br/&gt;installation:&lt;br/&gt;- port&lt;br/&gt;availability check&lt;br/&gt;- logrotate&lt;br/&gt;- package installation&lt;br/&gt;- orchestrator&lt;br/&gt;configuration&lt;br/&gt;- cluster build&lt;br/&gt;- bootstrap&lt;br/&gt;- failover&lt;br/&gt;configuration
Data center 1--&gt;&gt;Jenkins: OK
deactivate Data center 1
Jenkins-&gt;&gt;Data center 1: become master
Data center 1--&gt;&gt;Jenkins: OK
Jenkins-&gt;&gt;Data center 1: nginx: switch traffic
Data center 1--&gt;&gt;Jenkins: OK
Jenkins-&gt;&gt;Data center 2: update application version
activate Data center 2
NOTE right of Data center 2: new version&lt;br/&gt;installation
Data center 2--&gt;&gt;Jenkins: OK
deactivate Data center 2
Jenkins-&gt;&gt;Data center 1: become master
Data center 1--&gt;&gt;Jenkins: OK
Jenkins-&gt;&gt;Data center 1: nginx: switch traffic
Data center 1--&gt;&gt;Jenkins: OK
&lt;/pre&gt;
&lt;p&gt;Currently, all updates require server restart. To find the right moment to continue our deployment, we have a special playbook that monitors instance states. Tarantool Cartridge has a state machine, and the state we are waiting for is called &lt;em&gt;RolesConfigured&lt;/em&gt;. It signifies that the instance is fully configured and ready to accept requests. If the application is deployed for the first time, the desired state would be &lt;em&gt;Unconfigured&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The diagram above illustrates the general idea of downtime-free deployment, which can be easily scaled up to more data centers. You can update all the standby branches at once right after master rotation – that is, along with Data center 1 – or update them one by one, depending on your project requirements.&lt;/p&gt;
&lt;p&gt;Of course, we made our work open source. You can find it in my ansible-cartridge fork on GitHub (&lt;a href="https://github.com/opomuc/ansible-cartridge" target="_blank" rel="noopener noreferrer"&gt;opomuc/ansible-cartridge&lt;/a&gt;). Most of it has already been transferred to the master branch of the main repo.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/opomuc/ansible-cartridge/tree/master/examples/deploy-by-dc" target="_blank" rel="noopener noreferrer"&gt;Here is our deployment example&lt;/a&gt;. For it to work correctly, configure &lt;code&gt;supervisord&lt;/code&gt; on the server for the user &lt;code&gt;tarantool&lt;/code&gt;. See &lt;a href="https://github.com/opomuc/ansible-cartridge/blob/master/examples/deploy-with-targz/Vagrantfile#L18" target="_blank" rel="noopener noreferrer"&gt;this page&lt;/a&gt; for the configuration commands. The application archive also has to contain the &lt;code&gt;tarantool&lt;/code&gt; binary.&lt;/p&gt;
&lt;p&gt;To launch branch-wise deployment, run the following commands:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Install application (for initial deployment)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ansible-playbook -i hosts.yml playbook.yml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -b --become-user tarantool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'base_dir=/data/tarantool' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'cartridge_package_path=./getting-started-app-1.0.0-0.tar.gz' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'app_version=1.0.0' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tags supervisor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Update version to 1.2.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Transfer master to dc2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ansible-playbook -i hosts.yml master.yml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -b --become-user tarantool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'base_dir=/data/tarantool' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'cartridge_package_path=./getting-started-app-1.2.0-0.tar.gz' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --limit dc2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Update the main data center -- dc1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ansible-playbook -i hosts.yml playbook.yml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -b --become-user tarantool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'base_dir=/data/tarantool' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'cartridge_package_path=./getting-started-app-1.2.0-0.tar.gz' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'app_version=1.2.0' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tags supervisor \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --limit dc1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Transfer master to dc1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ansible-playbook -i hosts.yml master.yml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -b --become-user tarantool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'base_dir=/data/tarantool' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'cartridge_package_path=./getting-started-app-1.2.0-0.tar.gz' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --limit dc1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Update the standby data center -- dc2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ansible-playbook -i hosts.yml playbook.yml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -b --become-user tarantool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'base_dir=/data/tarantool' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'cartridge_package_path=./getting-started-app-1.2.0-0.tar.gz' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'app_version=1.2.0' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --tags supervisor \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --limit dc2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Make sure that the masters are in dc1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ansible-playbook -i hosts.yml master.yml \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -b --become-user tarantool \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'base_dir=/data/tarantool' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --extra-vars 'cartridge_package_path=./getting-started-app-1.2.0-0.tar.gz' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --limit dc1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;base_dir&lt;/code&gt; option specifies the path to your project’s home directory. After the deployment, the following subdirectories will be created:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;&lt;base_dir&gt;/run&lt;/code&gt; – for console sockets and pid files&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&lt;base_dir&gt;/data&lt;/code&gt; – for .snap and .xlog files, as well as Tarantool Cartridge configuration&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&lt;base_dir&gt;/conf&lt;/code&gt; – for application configuration and settings associated with specific instances&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&lt;base_dir&gt;/releases&lt;/code&gt; – for versioning and source code&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&lt;base_dir&gt;/instances&lt;/code&gt; – for links to the current version of every application instance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;cartridge_package_path&lt;/code&gt; option speaks for itself, but there is a trick:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the path starts with &lt;code&gt;http://&lt;/code&gt; or &lt;code&gt;https://&lt;/code&gt;, the artifact is pre-downloaded from the network (for example, from Artifactory).&lt;/li&gt;
&lt;li&gt;In other cases, the file search is performed locally.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;app_version&lt;/code&gt; option is used for versioning in the &lt;code&gt;&lt;base_dir&gt;/releases&lt;/code&gt; folder. Its default value is &lt;code&gt;latest&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;supervisor&lt;/code&gt; tag means that &lt;code&gt;supervisord&lt;/code&gt; is the orchestrator.&lt;/p&gt;
&lt;p&gt;There are many ways to build a deployment, but the most reliable is good old &lt;code&gt;Makefile&lt;/code&gt;. The &lt;code&gt;make &lt;deployment&gt;&lt;/code&gt; command works well for any CI/CD pipeline.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;That’s it! We made a Jenkins pipeline, got rid of mediators, and changes are now delivered at a crazy speed. The number of our users is growing. As many as 500 instances are running in our production environment, all of them deployed with our solution. Still, there is room for growth.&lt;/p&gt;
&lt;p&gt;Branch-wise deployment may not be perfect, but it provides firm support for further development of DevOps. Use our implementation with confidence to quickly deliver your system to production without worrying about pushing frequent changes.&lt;/p&gt;
&lt;p&gt;This was also a valuable lesson for us. You can’t take a monolith and hope that it will be a perfect fit in any situation. You need to divide playbooks into smaller parts, create separate roles for every installation stage, and make your inventory flexible. Someday all our work will be merged into the master branch, which will make everything even better.&lt;/p&gt;
&lt;h2 id="links"&gt;Links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Step-by-step ansible-cartridge tutorial
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://habr.com/ru/company/mailru/blog/478710/" target="_blank" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://habr.com/ru/company/mailru/blog/484192/" target="_blank" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Read more about Tarantool Cartridge &lt;a href="https://habr.com/ru/company/mailru/blog/465503/" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Kubernetes deployment
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://habr.com/ru/company/mailru/blog/533308/" target="_blank" rel="noopener noreferrer"&gt;A guide to using Tarantool Cartridge in Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=8NvE6uooMQY&amp;ab_channel=Tarantool" target="_blank" rel="noopener noreferrer"&gt;Webinar: Deploying your Tarantool Cartridge application in an MCS Kubernetes cluster&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://habr.com/ru/company/mailru/blog/534826/" target="_blank" rel="noopener noreferrer"&gt;Tarantool monitoring: Logs, metrics, and their processing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Get help in our &lt;a href="https://t.me/tarantoolru?utm_source=habr&amp;utm_medium=articles&amp;utm_campaign=2020" target="_blank" rel="noopener noreferrer"&gt;Telegram chat&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Roman Proskin</author>
      <category>tarantool</category>
      <category>ansible</category>
      <category>ops</category>
      <category>tarantool cartridge</category>
      <media:thumbnail url="https://percona.community/blog/2021/08/human-cover_hu_1ee1923249b2fb13.jpg"/>
      <media:content url="https://percona.community/blog/2021/08/human-cover_hu_d154b65806482905.jpg" medium="image"/>
    </item>
    <item>
      <title>Lets be inSync!</title>
      <link>https://percona.community/blog/2021/07/22/lets-be-insync/</link>
      <guid>https://percona.community/blog/2021/07/22/lets-be-insync/</guid>
      <pubDate>Thu, 22 Jul 2021 00:00:00 UTC</pubDate>
      <description>Percona Toolkit + pt-table-checksum + pt-table-sync = Faster Replica Recovery Asynchronous replication with MySQL is a tried and true technology. Add the use of GTID’s and you have a very stable solution.</description>
      <content:encoded>&lt;h2 id="percona-toolkit--pt-table-checksum--pt-table-sync--faster-replica-recovery"&gt;Percona Toolkit + pt-table-checksum + pt-table-sync = Faster Replica Recovery&lt;/h2&gt;
&lt;p&gt;Asynchronous replication with MySQL is a tried and true technology. Add the use of GTIDs and you have a very stable solution.&lt;/p&gt;
&lt;p&gt;The fundamental issue with async replication is that writes sent to the replica are not guaranteed to be written. I have only seen a handful of times when writes did not get applied to the replica. Most of the time, this happens due to network packet drops or a replica crashing before new data is committed.&lt;/p&gt;
&lt;p&gt;I can remember long nights of restoring backups of the primary to the replicas. Not a painful process, but a time-consuming one.&lt;/p&gt;
&lt;p&gt;Please take a few moments to review the full &lt;a href="https://www.percona.com/software/database-tools/percona-toolkit" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; of both tools before trying this example on live data: &lt;strong&gt;pt-table-checksum, pt-table-sync&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;With pt-table-checksum and pt-table-sync, provided by Percona Toolkit, we can recover a replica without needing to do a restore. Keep in mind this approach might not work for all situations. We will go over one example below. We will also use dbdeployer to help us set up a testing sandbox.&lt;/p&gt;
&lt;p&gt;Let’s start off by setting up a VM to play with. For this I will be using VirtualBox and Ubuntu 20.04 LTS.&lt;/p&gt;
&lt;h3 id="prepare-ubuntu-2004lts"&gt;Prepare Ubuntu 20.04LTS&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;sudo apt install gnupg2 curl libaio-dev libncurses-dev mysql-client-core-8.0&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo percona-release enable tools release&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo apt update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo apt install percona-toolkit sysbench&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="install-dbdeployer"&gt;Install dbdeployer&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;mkdir $HOME/bin ; cd $HOME/bin ; source $HOME/.profile&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;curl -s https://raw.githubusercontent.com/datacharmer/dbdeployer/master/scripts/dbdeployer-install.sh | bash&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-2.png" alt="lbis-2" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;&lt;code&gt;ln -s dbdeployer-1.60.0.linux dbdeployer&lt;/code&gt; (symlink for less typing)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dbdeployer init&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-3.png" alt="lbis-3" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="5"&gt;
&lt;li&gt;Download Percona Server: &lt;code&gt;wget https://downloads.percona.com/downloads/Percona-Server-LATEST/Percona-Server-8.0.23-14/binary/tarball/Percona-Server-8.0.23-14-Linux.x86_64.glibc2.17-minimal.tar.gz&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-4.png" alt="lbis-4" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="6"&gt;
&lt;li&gt;Prepare Percona Server: &lt;code&gt;dbdeployer --prefix=ps unpack Percona-Server-8.0.23-14-Linux.x86_64.glibc2.17-minimal.tar.gz&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-6.png" alt="lbis-6" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;ol start="7"&gt;
&lt;li&gt;Deploy your cluster:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; dbdeployer deploy replication ps8.0.23 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --gtid \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --custom-role-name=R_POWERFUL \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --custom-role-privileges='ALL PRIVILEGES' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --custom-role-target='*.*' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --custom-role-extra='WITH GRANT OPTION' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --default-role=R_POWERFUL \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --bind-address=0.0.0.0 \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --remote-access='%' \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --native-auth-plugin \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --db-user=sbtest \
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --db-password=sbtest!&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s verify our cluster: &lt;code&gt;dbdeployer sandboxes --full-info&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-7.png" alt="lbis-7" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Change directories into your cluster directory, &lt;code&gt;$HOME/sandboxes/rsandbox_ps8_0_23&lt;/code&gt;, and run the &lt;code&gt;./check_slaves&lt;/code&gt; script.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-9.png" alt="lbis-9" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This will display information about your new cluster. Take time to make yourself familiar with the scripts in this directory.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; we will stay in &lt;code&gt;$HOME/sandboxes/rsandbox_ps8_0_23&lt;/code&gt; for the remainder of this post. Please note that the location of your cluster might be different.&lt;/p&gt;
&lt;h3 id="preparing-data-for-testing"&gt;Preparing Data for testing&lt;/h3&gt;
&lt;p&gt;Let’s move on and add some data to play with. While in your sandboxes/cluster directory run this command:&lt;/p&gt;
&lt;p&gt;Connect to the master: &lt;code&gt;mysql --socket=/tmp/mysql_sandbox21324.sock --port=21324 -u sbtest -p&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create database synctest;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;use synctest;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create table names (id int not null auto_increment primary key, fname varchar(50), lname varchar(50));
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Moe','Howard');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Larry','Howard');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Curly','Howard');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Shemp','Howard');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Joe','Howard');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('James','Bond');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Doctor','No');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Gold','Finger');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Money','Penny');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Number','One');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Number','Two');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into names (fname,lname) values ('Micky','Mouse');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Make sure you do a quick &lt;code&gt;select * from synctest.names;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You should see 12 rows of data. If you don’t, double-check your inserts.&lt;/p&gt;
&lt;p&gt;Let’s connect to mysql, create a percona database, and add the dsns table.
We will need this database and table to hold our checksums and DSN data.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create database percona;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;use percona;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE `dsns` (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `id` int(11) NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `parent_id` int(11) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `dsn` varchar(255) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PRIMARY KEY (`id`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into dsns (id,parent_id,dsn) values (1,1,"h=percona-lab,u=sbtest,p=sbtest!,P=21325");
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;insert into dsns (id,parent_id,dsn) values (2,2,"h=percona-lab,u=sbtest,p=sbtest!,P=21326");&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Remember to populate this data based on your cluster&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Quit out of your master sandbox.&lt;/p&gt;
&lt;p&gt;Now we are ready to move on to the pt-table-checksum tool.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pt-table-checksum --user=sbtest --socket=/tmp/mysql_sandbox21324.sock --port=21324 --ask-pass --no-check-binlog-format&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-15.png" alt="lbis-15" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Notice we had errors. (I cropped out the rest of the output, since it does not show a good run of pt-table-checksum.)
pt-table-checksum could not find the slaves. Let’s run the command a second time, but this time let’s tell it the --recursion-method:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pt-table-checksum --user=sbtest --socket=/tmp/mysql_sandbox21324.sock --port=21324 --ask-pass --no-check-binlog-format --recursion-method=dsn=D=percona,t=dsns&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Success!!!&lt;/strong&gt; This time pt-table-checksum was able to find the replicas.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-16.png" alt="lbis-16" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note: there are a couple of mysql tables that differ between the master and replicas. This is normal.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="now-lets-remove-data-from-both-replicas"&gt;Now lets remove data from both replicas.&lt;/h2&gt;
&lt;p&gt;Connect to the 1st replica:
&lt;code&gt;mysql --socket=/tmp/mysql_sandbox21325.sock --port=21325 -u sbtest -p&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Change into the synctest database. Do a select on the synctest.names table and you should see 12 rows of data. Remove one row of data.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;delete from names where id = 7;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Quit out of slave1.&lt;/p&gt;
&lt;p&gt;Connect to the 2nd replica.
&lt;code&gt;mysql --socket=/tmp/mysql_sandbox21326.sock --port=21326 -u sbtest -p&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Change into the synctest database. Do a select on the names table and you should see 12 rows of data. Remove one row of data.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;delete from names where id = 8;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Quit out of slave2.&lt;/p&gt;
&lt;p&gt;Now we know that our cluster is out of sync, but let’s use the tool to verify. This time, when running the checksum, we will ignore the mysql and sys databases.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pt-table-checksum --user=sbtest --socket=/tmp/mysql_sandbox21324.sock --port=21324 --ask-pass --no-check-binlog-format --recursion-method=dsn=D=percona,t=dsns --ignore-databases=mysql,sys&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-10.png" alt="lbis-10" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note: pt-table-checksum shows a DIFFS of 1 and a DIFF_ROWS of 1. This reflects that we have 1 row of data missing from one or both of the slaves.&lt;/strong&gt;&lt;/p&gt;
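To see exactly which tables the differences came from, you can query the percona.checksums table that pt-table-checksum maintains, on each replica. This query is adapted from the pt-table-checksum documentation:

```sql
-- Run on each replica; lists tables whose checksums differ from the master.
SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
    master_cnt != this_cnt
    OR master_crc != this_crc
    OR ISNULL(master_crc) != ISNULL(this_crc)
)
GROUP BY db, tbl;
```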
&lt;p&gt;Go back to slave2 and remove another row of data, then run the checksum again.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;delete from names where id = 9;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-11.png" alt="lbis-11" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This time we are seeing a DIFF_ROWS of 2. This reflects that we now have 2 rows missing on at least one of the slaves. Let’s fix this mess we created. Before we do that, let’s look at the data on both slaves. As we can see, they are not in sync with the master.&lt;/p&gt;
&lt;center&gt; &lt;b&gt;Slave1&lt;/b&gt; &lt;/center&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-13.png" alt="lbis-13" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;center&gt; &lt;b&gt;Slave2&lt;/b&gt; &lt;/center&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-12.png" alt="lbis-12" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="now-lets-sync-the-slaves-to-the-master"&gt;Now let’s sync the slaves to the master.&lt;/h2&gt;
&lt;p&gt;Replication safety is very important. Please take a moment to read the replication safety section of the pt-table-sync documentation.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pt-table-sync --execute h=percona-lab,P=21324,u=sbtest,p=sbtest! h=percona-lab,P=21325,u=sbtest,p=sbtest! h=percona-lab,P=21326,u=sbtest,p=sbtest! --no-check-slave --ignore-databases=mysql,sys&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This will run in a couple of seconds. When it’s done, let’s checksum the cluster again.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Your cluster is now repaired.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/lbis-14.png" alt="lbis-14" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is just one example of what the two tools can do; they may not meet your every need.&lt;/p&gt;
&lt;p&gt;If you plan to use this to repair a production database, &lt;strong&gt;please make sure you have good backups on hand to fall back on&lt;/strong&gt; if needed.&lt;/p&gt;
&lt;h2 id="whats-next"&gt;Whats next?&lt;/h2&gt;
&lt;p&gt;I really only scratched the surface of these tools, dbdeployer and Percona Toolkit.&lt;/p&gt;
&lt;p&gt;For more information on both, please check out the links below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.dbdeployer.com/" target="_blank" rel="noopener noreferrer"&gt;dbdeployer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/software/database-tools/percona-toolkit" target="_blank" rel="noopener noreferrer"&gt;Percona-Toolkit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/doc/percona-toolkit/LATEST/pt-table-checksum.html" target="_blank" rel="noopener noreferrer"&gt;pt-table-checksum&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/doc/percona-toolkit/LATEST/pt-table-sync.html" target="_blank" rel="noopener noreferrer"&gt;pt-table-sync&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>Toolkit</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2021/07/lbis-1_hu_e04ff82b3f8b1772.jpg"/>
      <media:content url="https://percona.community/blog/2021/07/lbis-1_hu_1c6e417900bfd5fb.jpg" medium="image"/>
    </item>
    <item>
      <title>Create your own Exporter in Go!</title>
      <link>https://percona.community/blog/2021/07/21/create-your-own-exporter-in-go/</link>
      <guid>https://percona.community/blog/2021/07/21/create-your-own-exporter-in-go/</guid>
      <pubDate>Wed, 21 Jul 2021 00:00:00 UTC</pubDate>
      <description>Overview Hi, it’s a very hot summer in Korea. Today I want to talk about an interesting and exciting topic: making your own exporter in the Go language.</description>
      <content:encoded>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Hi, it’s a very hot summer in Korea. Today I want to talk about an interesting and exciting topic: &lt;strong&gt;making your own exporter in the Go language&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The idea is simple: you register a specific query, and the program exposes the result of this query as exporter metrics. Some of you may still be unfamiliar with what an exporter is.&lt;/p&gt;
&lt;p&gt;I will explain exporters step by step in today’s post.&lt;/p&gt;
&lt;h2 id="exporter"&gt;Exporter?&lt;/h2&gt;
&lt;p&gt;You can think of &lt;strong&gt;an exporter as an HTTP server that a time series database&lt;/strong&gt;, like Prometheus, pulls data from. Prometheus periodically calls a specific URL on the exporter and saves the resulting metrics as a time series.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/prometheus-exporter.png" alt="prometheus exporter" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Many exporters already exist.&lt;/p&gt;
&lt;p&gt;Typically, there is &lt;a href="https://github.com/prometheus/mysqld_exporter" target="_blank" rel="noopener noreferrer"&gt;mysqld_exporter&lt;/a&gt;, one of Prometheus’s official projects, and &lt;a href="https://github.com/percona/mysqld_exporter" target="_blank" rel="noopener noreferrer"&gt;mysqld_exporter&lt;/a&gt;, which Percona forks and distributes with additions. Besides these, there are node_exporter for monitoring Linux nodes, memcached_exporter, and many more.&lt;/p&gt;
&lt;p&gt;For reference, you can see various exporters at &lt;a href="https://exporterhub.io" target="_blank" rel="noopener noreferrer"&gt;Exporterhub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What I am going to present in this blog is the process of adding my own new exporter to this collection. Let’s go!&lt;/p&gt;
&lt;h2 id="creating-a-go-project"&gt;Creating a Go project&lt;/h2&gt;
&lt;p&gt;An exporter can be implemented in various languages, but today I will implement it in Go.&lt;/p&gt;
&lt;p&gt;Personally, I think Go is very convenient in terms of distribution and compatibility. I will omit the Go installation and environment configuration here.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;cd&lt;/span&gt; ~/go/src
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mkdir -p query-exporter-simple
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ &lt;span class="nb"&gt;cd&lt;/span&gt; query-exporter-simple
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ go mod init
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: creating new go.mod: module query-exporter-simple
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ ls -al
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;total &lt;span class="m"&gt;8&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x &lt;span class="m"&gt;3&lt;/span&gt; chan staff &lt;span class="m"&gt;96&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;12&lt;/span&gt; 13:33 .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x &lt;span class="m"&gt;12&lt;/span&gt; chan staff &lt;span class="m"&gt;384&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;12&lt;/span&gt; 13:33 ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; chan staff &lt;span class="m"&gt;38&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;12&lt;/span&gt; 13:33 go.mod
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ cat go.mod
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;module query-exporter-simple
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go 1.16&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Although it is a bare-bones project, everything is now ready to build your own exporter. From here on, packages are managed with &lt;code&gt;go mod&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="try-empty-exporter"&gt;Try Empty Exporter&lt;/h2&gt;
&lt;p&gt;Now, let’s start making the Exporter in earnest.&lt;/p&gt;
&lt;p&gt;First, as a taste, let’s make an empty exporter that has no real function: it simply exposes the exporter version.&lt;/p&gt;
&lt;p&gt;We start by reading OS parameters using the &lt;code&gt;flag&lt;/code&gt; package. The &lt;code&gt;bind&lt;/code&gt; flag is the HTTP address the exporter server binds to on startup.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"flag"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Get OS parameter&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StringVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0.0.0.0:9104"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Parse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Next, register a Collector to gather metrics and run the exporter with an HTTP server. A Collector is conceptually a worker that collects information, and it implements Prometheus’s Collector interface.&lt;/p&gt;
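&lt;p&gt;To make the interface concrete, here is a rough stand-alone sketch of the Collector pattern. Note that &lt;code&gt;Desc&lt;/code&gt;, &lt;code&gt;Metric&lt;/code&gt;, and &lt;code&gt;Collector&lt;/code&gt; below are simplified stand-ins for illustration, not the real client_golang types:&lt;/p&gt;

```go
package main

import "fmt"

// Desc and Metric are simplified stand-ins for the prometheus
// library types, kept here only to show the shape of the pattern.
type Desc struct{ name, help string }

type Metric struct {
	desc  *Desc
	value float64
}

// Collector mirrors the shape of prometheus.Collector:
// Describe sends metric descriptions, Collect sends samples.
type Collector interface {
	Describe(ch chan<- *Desc)
	Collect(ch chan<- Metric)
}

// upCollector always reports a constant value of 1.
type upCollector struct{ desc *Desc }

func (c *upCollector) Describe(ch chan<- *Desc) { ch <- c.desc }
func (c *upCollector) Collect(ch chan<- Metric) { ch <- Metric{c.desc, 1} }

func main() {
	c := &upCollector{&Desc{"up", "always 1"}}
	ch := make(chan Metric, 1)
	c.Collect(ch)
	m := <-ch
	fmt.Printf("%s %g\n", m.desc.name, m.value) // prints: up 1
}
```

&lt;p&gt;In the real code, &lt;code&gt;version.NewCollector&lt;/code&gt; is simply one ready-made implementation of this interface.&lt;/p&gt;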
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"flag"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"net/http"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/client_golang/prometheus"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/client_golang/prometheus/promhttp"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/common/version"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="s"&gt;"github.com/sirupsen/logrus"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Get OS parameter&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StringVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0.0.0.0:9104"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Parse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewCollector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"query_exporter"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist http handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;h&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;promhttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HandlerFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Gatherers&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DefaultGatherer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;promhttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HandlerOpts&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ServeHTTP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// start server&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Starting http server - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to start http server: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Since the packages used by the source code do not exist in the project yet, numerous errors will occur.&lt;/p&gt;
&lt;p&gt;So, as below, fetch the related packages with &lt;code&gt;go mod vendor&lt;/code&gt;. They are placed under the vendor directory.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ go mod vendor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: finding module &lt;span class="k"&gt;for&lt;/span&gt; package github.com/prometheus/common/version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: finding module &lt;span class="k"&gt;for&lt;/span&gt; package github.com/prometheus/client_golang/prometheus
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: finding module &lt;span class="k"&gt;for&lt;/span&gt; package github.com/sirupsen/logrus
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: finding module &lt;span class="k"&gt;for&lt;/span&gt; package github.com/prometheus/client_golang/prometheus/promhttp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: found github.com/prometheus/client_golang/prometheus in github.com/prometheus/client_golang v1.11.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: found github.com/prometheus/client_golang/prometheus/promhttp in github.com/prometheus/client_golang v1.11.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: found github.com/prometheus/common/version in github.com/prometheus/common v0.29.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go: found github.com/sirupsen/logrus in github.com/sirupsen/logrus v1.8.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ ls -al
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;total &lt;span class="m"&gt;112&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x &lt;span class="m"&gt;6&lt;/span&gt; chan staff &lt;span class="m"&gt;192&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt; 10:26 .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x &lt;span class="m"&gt;12&lt;/span&gt; chan staff &lt;span class="m"&gt;384&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;12&lt;/span&gt; 13:33 ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; chan staff &lt;span class="m"&gt;169&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt; 10:26 go.mod
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; chan staff &lt;span class="m"&gt;45722&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt; 10:26 go.sum
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r--r-- &lt;span class="m"&gt;1&lt;/span&gt; chan staff &lt;span class="m"&gt;1163&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt; 10:34 main.go
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drwxr-xr-x &lt;span class="m"&gt;6&lt;/span&gt; chan staff &lt;span class="m"&gt;192&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt; &lt;span class="m"&gt;13&lt;/span&gt; 10:26 vendor&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you start the exporter, the server runs on port 9104 (the default specified in the flag).&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ go run .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; Regist version collector - query_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; HTTP handler path - /metrics
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; Starting http server - 0.0.0.0:9104&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you want to change the port, pass the &lt;code&gt;bind&lt;/code&gt; parameter as below, and the server will run on that port.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ go run . --bind&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0:9105
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; Regist version collector - query_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; HTTP handler path - /metrics
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; Starting http server - 0.0.0.0:9105&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Even though it is an empty exporter, you can see that a lot of information is already exposed. (Most of it is about the Go runtime itself.)&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl 127.0.0.1:9104/metrics
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE go_gc_duration_seconds summary&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go_gc_duration_seconds&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;quantile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go_gc_duration_seconds&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;quantile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0.25"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.. skip ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP go_threads Number of OS threads created.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE go_threads gauge&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go_threads &lt;span class="m"&gt;7&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP query_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which query_exporter was built.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE query_exporter_build_info gauge&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_exporter_build_info&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;,goversion&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"go1.16.5"&lt;/span&gt;,revision&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;,version&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;At the very bottom is the query_exporter_build_info metric, the information collected by the Collector we registered in the previous section. This is the moment we created a new exporter that collects version information!&lt;/p&gt;
&lt;h2 id="creating-an-exporter-in-earnest"&gt;Creating an Exporter in earnest&lt;/h2&gt;
&lt;p&gt;We made an empty exporter that exposes only the exporter version. Easy, right? 🙂&lt;/p&gt;
&lt;p&gt;From now on, I’m going to implement a Collector that collects the information we really need from the database and serves the result over an HTTP GET endpoint.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2021/07/query-exporter.png" alt="query exporter" /&gt;&lt;/figure&gt;
&lt;h3 id="1-configuration-format-yaml"&gt;1. Configuration format (YAML)&lt;/h3&gt;
&lt;p&gt;As I said before, I want to turn the result of a registered query into exporter metrics. To do this, the exporter needs connection information for the target instance as well as the queries to execute.&lt;/p&gt;
&lt;p&gt;Let’s set it up in the format below: MySQL connection information plus the queries to be executed. It will expose two results: &lt;strong&gt;“Connections per host”&lt;/strong&gt; and &lt;strong&gt;“Connections per user”&lt;/strong&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;dsn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;test:test123@tcp(127.0.0.1:3306)/information_schema&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;process_count_by_host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"select user,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; substring_index(host, ':', 1) host,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; count(*) sessions
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; from information_schema.processlist
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; group by 1,2 "&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;gauge&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"process count by host"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;sessions&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;process_count_by_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"select user, count(*) sessions
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; from information_schema.processlist
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s2"&gt; group by 1 "&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;gauge&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"process count by user"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;sessions&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I then defined the above YAML as a Go struct.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Config&lt;/span&gt; &lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;DSN&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="kd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Query&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Type&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Description&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Labels&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Value&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metricDesc&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Desc&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here, metricDesc (&lt;code&gt;*prometheus.Desc&lt;/code&gt;) can be understood as the specification of a Prometheus metric: it holds the metric&amp;rsquo;s name, description, and label names. Together with a metric type such as Counter or Gauge, it is used to build each emitted metric.&lt;/p&gt;
&lt;p&gt;Read the YAML file as shown below, and load the configuration into the structure defined above.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="nx"&gt;Config&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ioutil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"config.yml"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to read config file: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Load yaml&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;yaml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Unmarshal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to load config: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this way, we can load the necessary information into the Config structure and use it in the rest of the implementation.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"flag"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"io/ioutil"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"net/http"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"os"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/ghodss/yaml"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/client_golang/prometheus"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/client_golang/prometheus/promhttp"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/common/version"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="s"&gt;"github.com/sirupsen/logrus"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="nx"&gt;Config&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;configFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Get OS parameter&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StringVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;configFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"config"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"config.yml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"configuration file"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StringVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0.0.0.0:9104"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Parse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Load config &amp; yaml&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ioutil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;configFile&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to read config file: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Load yaml&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;yaml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Unmarshal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to load config: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Regist version collector - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"query_exporter"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewCollector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"query_exporter"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist http handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"HTTP handler path - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"/metrics"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;h&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;promhttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HandlerFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Gatherers&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DefaultGatherer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;promhttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HandlerOpts&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ServeHTTP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// start server&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Starting http server - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to start http server: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// =============================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Config config structure&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// =============================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Config&lt;/span&gt; &lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;DSN&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="kd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Query&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Type&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Description&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Labels&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Value&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metricDesc&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Desc&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="2-implement-collector"&gt;2. Implement Collector&lt;/h3&gt;
&lt;p&gt;The highlight of today&amp;rsquo;s post is implementing a Collector to gather the desired information from the database.&lt;/p&gt;
&lt;p&gt;Everything implemented so far only serves results over HTTP. The Collector is what actually connects to the database, executes the specified queries, and delivers the resulting metrics.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;QueryCollector&lt;/span&gt; &lt;span class="kd"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Describe prometheus describe&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;Describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="kd"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Desc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Collect prometheus collect&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;Collect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="kd"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metric&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As mentioned earlier, a Collector is the worker-like unit that gathers information, and it is a structure that implements Prometheus&amp;rsquo;s Collector interface. In other words, if you want to create another Collector of your own, &lt;strong&gt;you must implement the two methods, Describe and Collect, defined by the prometheus.Collector interface&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Register the Collector defined above as shown below.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;..&lt;/span&gt; &lt;span class="nx"&gt;skip&lt;/span&gt; &lt;span class="p"&gt;..&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Regist version collector - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"query_exporter"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewCollector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"query_exporter"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;..&lt;/span&gt; &lt;span class="nx"&gt;skip&lt;/span&gt; &lt;span class="p"&gt;..&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This registers both the version Collector added to the exporter created earlier and the newly added QueryCollector. When an HTTP request comes in to “/metrics”, each of the two registered Collectors is executed concurrently.&lt;/p&gt;
&lt;h4 id="2-1-create-the-describe-function"&gt;2-1. Create the Describe function&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;This is the part that defines the specification of each metric.&lt;/strong&gt; Strictly speaking, it is not necessary to define the metric specifications here, but doing so is useful when you create and operate multiple Collectors. This method is executed only once, when the Collector is registered with prometheus.Register.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;Describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="kd"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Desc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewDesc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;BuildFQName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"query_exporter"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metrics&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"metric description for \"%s\" registerd"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here, the specification of each metric is built from the query-related entries in the configuration read earlier.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;prometheus.BuildFQName: the fully qualified metric name&lt;/li&gt;
&lt;li&gt;metric.Description: the description (help text) of the metric&lt;/li&gt;
&lt;li&gt;metric.Labels: the array of label names; label values must later be supplied in this same order&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Looking at config.yml, each field maps as follows.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# metricName&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;process_count_by_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;## metric.Description&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"process count by user"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;## metric.Labels&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="2-2-create-the-collect-function"&gt;2-2. Create the Collect function&lt;/h4&gt;
&lt;p&gt;This is the part that connects to the database, executes each registered SQL query, and turns the results into metrics.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2021/07/metric-results_hu_f42480f2db3de72e.png 480w, https://percona.community/blog/2021/07/metric-results_hu_ea7e641939b8be9b.png 768w, https://percona.community/blog/2021/07/metric-results_hu_c95cb9f17d9c0c0f.png 1400w"
src="https://percona.community/blog/2021/07/metric-results.png" alt="metric results" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The result rows of each query are exposed as a metric name, labels, and values, as shown in the figure above.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;Collect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="kd"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metric&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Connect to database&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"mysql"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DSN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Connect to database failed: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Execute each queries in metrics&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Execute query&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to execute query: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Get column info&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Columns&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to get column meta: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;des&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kd"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([][]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;cols&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;des&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// fetch database&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Scan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;des&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Metric labels&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Labels&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Metric value&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;strconv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ParseFloat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Add metric&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="nx"&gt;strings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToLower&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s"&gt;"counter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MustNewConstMetric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CounterValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s"&gt;"gauge"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MustNewConstMetric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GaugeValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Fail to add metric for %s: %s is not valid type"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you can see from labelVals, you need to pass the label values in the same order as the Labels of the specification defined in Describe earlier. Two metric types are supported here: &lt;strong&gt;counter&lt;/strong&gt; and &lt;strong&gt;gauge&lt;/strong&gt;. Each type has the following meaning.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;COUNTER&lt;/strong&gt;: A value that only ever increases. In Prometheus, it is typically viewed through a rate-of-change function such as rate() or irate().&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GAUGE&lt;/strong&gt;: A value that can increase or decrease, like a car&amp;rsquo;s fuel gauge. It is generally used to expose the current value as-is, such as a process count.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// COUNTER&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MustNewConstMetric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CounterValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// GAUGE&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MustNewConstMetric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GaugeValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For the metric value, the column named by the value item in the configuration is looked up in each query-result row and parsed as a number.&lt;/p&gt;
&lt;h2 id="queryexporter-source"&gt;QueryExporter Source&lt;/h2&gt;
&lt;p&gt;Here’s everything that I’ve done so far:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;go&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"database/sql"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"flag"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"io/ioutil"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"net/http"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"os"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"strconv"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"strings"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/ghodss/yaml"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;_&lt;/span&gt; &lt;span class="s"&gt;"github.com/go-sql-driver/mysql"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/client_golang/prometheus"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/client_golang/prometheus/promhttp"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s"&gt;"github.com/prometheus/common/version"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="s"&gt;"github.com/sirupsen/logrus"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="nx"&gt;Config&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;collector&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"query_exporter"&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;configFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Get OS parameter&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StringVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;configFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"config"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"config.yml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"configuration file"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StringVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0.0.0.0:9104"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Parse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Load config &amp; yaml&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// =====================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ioutil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;configFile&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to read config file: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Load yaml&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;yaml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Unmarshal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to load config: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// ========================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Regist version collector - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;collector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewCollector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;collector&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Regist http handler&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"HTTP handler path - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"/metrics"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;h&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;promhttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;HandlerFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Gatherers&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DefaultGatherer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;promhttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HandlerOpts&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ServeHTTP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// start server&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Starting http server - %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to start http server: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// =============================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Config config structure&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// =============================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Config&lt;/span&gt; &lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;DSN&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="kd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Query&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Type&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Description&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Labels&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;Value&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metricDesc&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Desc&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// =============================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// QueryCollector exporter&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// =============================&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;QueryCollector&lt;/span&gt; &lt;span class="kd"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Describe prometheus describe&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;Describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="kd"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Desc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewDesc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;BuildFQName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;collector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metrics&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Infof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"metric description for \"%s\" registerd"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metricName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;// Collect prometheus collect&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nx"&gt;QueryCollector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;Collect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="kd"&gt;chan&lt;/span&gt;&lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metric&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Connect to database&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"mysql"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DSN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Connect to database failed: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Execute each queries in metrics&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Execute query&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to execute query: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Get column info&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Columns&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to get column meta: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;des&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kd"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([][]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;cols&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;des&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// fetch database&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Scan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;des&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Metric labels&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Labels&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Metric value&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nx"&gt;strconv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ParseFloat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;// Add metric&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="nx"&gt;strings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToLower&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s"&gt;"counter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MustNewConstMetric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CounterValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s"&gt;"gauge"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;&lt;-&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MustNewConstMetric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metricDesc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GaugeValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;labelVals&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Fail to add metric for %s: %s is not valid type"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If the package does not exist, run &lt;code&gt;go mod vendor&lt;/code&gt; to download the necessary packages.&lt;/p&gt;
&lt;p&gt;Start the server and check the information collected by the actual exporter.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ go run .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; Regist version collector - query_exporter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; metric description &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"process_count_by_host"&lt;/span&gt; registerd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; metric description &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"process_count_by_user"&lt;/span&gt; registerd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; HTTP handler path - /metrics
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INFO&lt;span class="o"&gt;[&lt;/span&gt;0000&lt;span class="o"&gt;]&lt;/span&gt; Starting http server - 0.0.0.0:9104&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you query it with curl, you can see that the session counts per user/host defined in the configuration are displayed.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;bash&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ curl 127.0.0.1:9104/metrics
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE go_gc_duration_seconds summary&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go_gc_duration_seconds&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;quantile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;go_gc_duration_seconds&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;quantile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0.25"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.. skip ..
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP query_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which query_exporter was built.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE query_exporter_build_info gauge&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_exporter_build_info&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;,goversion&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"go1.16.5"&lt;/span&gt;,revision&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;,version&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP query_exporter_process_count_by_host process count by host&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE query_exporter_process_count_by_host gauge&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_exporter_process_count_by_host&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;,user&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"event_scheduler"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_exporter_process_count_by_host&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;,user&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# HELP query_exporter_process_count_by_user process count by user&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# TYPE query_exporter_process_count_by_user gauge&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_exporter_process_count_by_user&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"event_scheduler"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_exporter_process_count_by_user&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
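&lt;p&gt;To have Prometheus actually collect these metrics, a minimal scrape configuration could look like the sketch below (assuming the exporter listens on the default port 9104 shown in the startup log; the job name is arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# prometheus.yml (sketch) - scrape the query exporter started above
scrape_configs:
  - job_name: 'query_exporter'
    static_configs:
      - targets: ['127.0.0.1:9104']
&lt;/code&gt;&lt;/pre&gt;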
&lt;p&gt;This is the moment when your own Exporter is finally complete! 🙂&lt;/p&gt;
&lt;h2 id="concluding"&gt;Concluding..&lt;/h2&gt;
&lt;p&gt;This post turned out very long; including the source code in the body several times added to its length.&lt;/p&gt;
&lt;p&gt;Anyway, I’ve created my own Exporter! &lt;strong&gt;I implemented a simple mechanism that registers a query and exposes its result as a metric&lt;/strong&gt;, and I think you can add more interesting elements to it as needed.&lt;/p&gt;
&lt;p&gt;For reference, the source code above is available in the following Git repository.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/go-gywn/query-exporter-simple" target="_blank" rel="noopener noreferrer"&gt;https://github.com/go-gywn/query-exporter-simple&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;When you need to monitor hundreds or thousands of servers from a single monitoring server, it is useful to manage metric collection centrally. I maintain a separate Query Exporter project, which currently supports only MySQL; it builds on the project above and adds more parallel processing and timeouts.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/go-gywn/query-exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/go-gywn/query-exporter&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It’s always been like that… If something doesn’t exist, just create it; if it does, use it well!&lt;/p&gt;
&lt;p&gt;I hope you all have a nice summer.&lt;/p&gt;</content:encoded>
      <author>Dongchan Sung</author>
      <category>Exporter</category>
      <category>Go</category>
      <category>Query</category>
      <category>MySQL</category>
      <category>Prometheus</category>
      <category>Programming</category>
      <media:thumbnail url="https://percona.community/blog/2021/07/query-exporter_hu_5c9539ca66ebe142.jpg"/>
      <media:content url="https://percona.community/blog/2021/07/query-exporter_hu_e31fad24a19a09b2.jpg" medium="image"/>
    </item>
    <item>
      <title>Exporters Roadmap</title>
      <link>https://percona.community/blog/2021/06/11/exporters-roadmap/</link>
      <guid>https://percona.community/blog/2021/06/11/exporters-roadmap/</guid>
      <pubDate>Fri, 11 Jun 2021 00:00:00 UTC</pubDate>
      <description>Exporters Roadmap Goals Prometheus exports as a part of PMM are a big and valuable component.</description>
      <content:encoded>&lt;h2 id="exporters-roadmap"&gt;Exporters Roadmap&lt;/h2&gt;
&lt;h3 id="goals"&gt;Goals&lt;/h3&gt;
&lt;p&gt;Prometheus exporters, as a part of PMM, are a big and valuable component.&lt;/p&gt;
&lt;p&gt;In line with the goal of involving open source contributors in PMM and having Percona contribute back to open source, it was decided to start with the exporters.&lt;/p&gt;
&lt;p&gt;PMM currently uses the following exporters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/node_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/node_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/mysqld_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/mysqld_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/mongodb_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/mongodb_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/postgres_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Percona-Lab/clickhouse_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/Percona-Lab/clickhouse_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/proxysql_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/proxysql_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/rds_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/rds_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/azure_metrics_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/azure_metrics_exporter&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="groups"&gt;Groups&lt;/h3&gt;
&lt;p&gt;We can split them into three groups.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The first group&lt;/strong&gt; is exporters that were created by Percona, or whose Percona fork has diverged so much that the changes cannot be pushed back upstream.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/mongodb_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/mongodb_exporter&lt;/a&gt; - built by Percona from scratch.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/proxysql_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/proxysql_exporter&lt;/a&gt; - built by Percona from scratch.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/rds_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/rds_exporter&lt;/a&gt; - too far from upstream - Percona made a big contribution to fit it for PMM needs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For those three exporters we are going to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;encourage contribution from the community;&lt;/li&gt;
&lt;li&gt;create an easy setup dev environment to speed up development and testing;&lt;/li&gt;
&lt;li&gt;consider users’ issues and requests, and try to solve them at “community priority” level;&lt;/li&gt;
&lt;li&gt;create regular releases with needed binaries for community consumption.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;The second group&lt;/strong&gt; is exporters that are not that far from upstream, to which Percona would like to contribute back as much as possible.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/node_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/node_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/mysqld_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/mysqld_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/postgres_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/postgres_exporter&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For this group, we will try to push all fixes made by Percona upstream and will take part in development and bug fixing as open-source contributors, trying to bring value to the community as well as to PMM.&lt;/p&gt;
&lt;p&gt;And &lt;strong&gt;the third group&lt;/strong&gt; is exporters that currently fit PMM’s needs and whose forks Percona has not contributed to much.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/Percona-Lab/clickhouse_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/Percona-Lab/clickhouse_exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/percona/azure_metrics_exporter" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/azure_metrics_exporter&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For those exporters, we are going to start using upstream and, if needed, make changes in other PMM components. The downstream repos will be kept only as forks synced with upstream, solely to support the PMM build.&lt;/p&gt;
&lt;h3 id="action-plan"&gt;Action plan&lt;/h3&gt;
&lt;p&gt;Here is a short-term plan of tasks to implement part of the plan above:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Remove the fork of clickhouse_exporter and remove it as a component from PMM; this looks like the easiest task. Newer versions of the ClickHouse server expose metrics in Prometheus format, so we can collect them without any exporter. (We can use the built-in metrics endpoint &lt;a href="https://clickhouse.tech/docs/en/operations/server-configuration-parameters/settings/#server_configuration_parameters-prometheus" target="_blank" rel="noopener noreferrer"&gt;https://clickhouse.tech/docs/en/operations/server-configuration-parameters/settings/#server_configuration_parameters-prometheus&lt;/a&gt;, available starting from &lt;a href="https://clickhouse.tech/docs/en/whats-new/changelog/2020/#clickhouse-release-v20-1-2-4-2020-01-22" target="_blank" rel="noopener noreferrer"&gt;https://clickhouse.tech/docs/en/whats-new/changelog/2020/#clickhouse-release-v20-1-2-4-2020-01-22&lt;/a&gt;.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Discard our changes in azure_metrics_exporter and keep the fork in sync with upstream. We use common formulas in Grafana to visualize metrics from different exporters. The current azure_exporter has a few slightly different metric names; we can achieve the same result by renaming them with Prometheus recording rules &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/recording_rules" target="_blank" rel="noopener noreferrer"&gt;https://prometheus.io/docs/prometheus/latest/configuration/recording_rules&lt;/a&gt;. Discarding our changes is needed to keep the fork up to date with upstream.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The node exporter looks like the best candidate for contributing back to the community: &lt;a href="https://github.com/prometheus/node_exporter/compare/master...percona:main" target="_blank" rel="noopener noreferrer"&gt;https://github.com/prometheus/node_exporter/compare/master...percona:main&lt;/a&gt;. Its source code has not diverged far, so we can take what we can from upstream and create minimal upstream PRs with only the features we require.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The MySQL exporter may be the heaviest to push back upstream; we made a lot of changes. So the tactic could be to split the difference into logical parts and push it back step by step: &lt;a href="https://github.com/percona/mysqld_exporter/pull/61/files" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/mysqld_exporter/pull/61/files&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The PostgreSQL exporter is also quite far from upstream, and it requires a few fundamental improvements, such as DB connection handling. For this exporter, we also need to split the difference into logical parts and contribute it back upstream in small PRs: &lt;a href="https://github.com/percona/postgres_exporter/pull/28/files" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/postgres_exporter/pull/28/files&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Maintain mongodb_exporter, add the needed packaging and Docker container, and update the Helm chart.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The ProxySQL exporter looks good for now, but we need to take into consideration that ProxySQL has started exporting metrics natively: &lt;a href="https://proxysql.com/documentation/prometheus-exporter" target="_blank" rel="noopener noreferrer"&gt;https://proxysql.com/documentation/prometheus-exporter&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And the RDS exporter will remain separate from upstream; it now contains a large amount of code that serves mostly PMM needs.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
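&lt;p&gt;As an illustration of item 2, renaming a metric with a Prometheus recording rule looks like the sketch below (the metric names are hypothetical, not the actual azure_metrics_exporter names):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# recording rules file (sketch) - expose an upstream metric under the
# name the existing Grafana formulas expect
groups:
  - name: azure_metric_renames
    rules:
      - record: azure_example_metric_expected_name
        expr: azure_example_metric_upstream_name
&lt;/code&gt;&lt;/pre&gt;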
&lt;p&gt;For all the above, we will use the GitHub Project board &lt;a href="https://github.com/orgs/percona/projects/2" target="_blank" rel="noopener noreferrer"&gt;https://github.com/orgs/percona/projects/2&lt;/a&gt; to track progress across the repositories for the tasks mentioned above.&lt;/p&gt;
&lt;p&gt;We will sync with the community during the Engineering Monthly Meeting &lt;a href="https://percona.community/contribute/engineeringmeetings/" target="_blank" rel="noopener noreferrer"&gt;https://percona.community/contribute/engineeringmeetings/&lt;/a&gt;, as well as by participating in upstream meetings.&lt;/p&gt;
&lt;p&gt;Come and join us on our open source journey! Contact us at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Andrii Skomorokhov</author>
      <category>Exporter</category>
      <category>Prometheus</category>
      <category>PMM</category>
      <category>Monitoring</category>
      <media:thumbnail url="https://percona.community/blog/2021/06/pmm-exporters_hu_d93268051f105860.jpg"/>
      <media:content url="https://percona.community/blog/2021/06/pmm-exporters_hu_5dd9a213d87f4edc.jpg" medium="image"/>
    </item>
    <item>
      <title>How to Speed Up Re Sync of Dropped Percona Xtradb Cluster Node</title>
      <link>https://percona.community/blog/2021/02/24/how-to-speed-up-re-sync-of-dropped-percona-xtradb-cluster-node/</link>
      <guid>https://percona.community/blog/2021/02/24/how-to-speed-up-re-sync-of-dropped-percona-xtradb-cluster-node/</guid>
      <pubDate>Wed, 24 Feb 2021 00:00:00 UTC</pubDate>
      <description>The Problem HELP, HELP! My Percona XtraDB Cluster version: 5.7.31-31. Single Node is stuck in a joined state.</description>
      <content:encoded>&lt;h2 id="the-problem"&gt;The Problem&lt;/h2&gt;
&lt;p&gt;HELP, HELP! My Percona XtraDB Cluster version: 5.7.31-31. Single Node is stuck in a joined state.&lt;/p&gt;
&lt;p&gt;I recently had the privilege to help a client with a fascinating issue.&lt;/p&gt;
&lt;p&gt;NODE-B had dropped out of the 3-node PXC cluster. Disk I/O appeared to be what caused NODE-B to fall far behind and eventually be removed from the cluster. A restart of NODE-B allowed it to rejoin the cluster. NODE-B looked to have been down for about 4 hours. Once NODE-B was back as part of the cluster, it required a full SST.&lt;/p&gt;
&lt;p&gt;When NODE-B stayed in the Joined state for more than 12 hours, the client gave me a call. They were concerned that there was another issue with this cluster.&lt;/p&gt;
&lt;p&gt;Before going forward, let’s make sure we know the CPU, RAM and Database Size.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;8 CPU&lt;/li&gt;
&lt;li&gt;32 GB RAM&lt;/li&gt;
&lt;li&gt;Database Size approx. 2.75TB&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s gather some base information.&lt;/p&gt;
&lt;p&gt;I pulled the below data once I understood what was going on.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SHOW STATUS LIKE ‘wsrep_last%';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_applied | 9802457 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_committed | 10103670 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_local_state_comment | Joined |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------------+--------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SHOW STATUS LIKE 'wsrep_cert_deps_distance';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_cert_deps_distance | 148.96 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------+---------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I pulled the below stats about one hour later.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NODE-B
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_applied | 11901100 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_committed | 12801100 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NODE-A
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_applied | 32900981 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_committed | 32901100 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As we can see above, NODE-B is processing write sets, but very slowly. The gcache files were being consumed very quickly, but at only 128MB in size, it would be slow going to get back in sync. At this time, the NODE-A and NODE-B seqnos were separated by 20,100,000.&lt;/p&gt;
&lt;p&gt;Now we know NODE-B is working as it should. At this rate, it could be a day or more to catch up.&lt;/p&gt;
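&lt;p&gt;As a rough back-of-the-envelope check (using the seqnos above; the apply rate comes from a single one-hour sample, so treat this as an estimate only):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__content"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;committed seqno gap one hour after the restart:
  32,901,100 (NODE-A) - 12,801,100 (NODE-B) = 20,100,000
NODE-B applied 11,901,100 - 9,802,457 = 2,098,643 write sets in that hour,
so even if no new writes arrived, 20,100,000 / ~2.1M per hour is about 10 hours.
With the cluster still committing new write sets, a day or more is realistic.&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;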
&lt;h2 id="gathering-data-and-coming-up-with-a-solution"&gt;Gathering Data and Coming up with a solution&lt;/h2&gt;
&lt;p&gt;I did a quick review of the PXC settings and found:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;wsrep_slave_threads was set to 2.&lt;/li&gt;
&lt;li&gt;Many tables had no primary key. The mysql.log file was approx. 500MB in size. The Galera cache (gcache) size was set at the default 128MB (now I saw why NODE-B needed a full SST).&lt;/li&gt;
&lt;li&gt;The client had set wsrep_sst_donor to use NODE-C. NODE-C had a higher latency to NODE-B than NODE-B had to NODE-A. I would prefer to have PXC choose the donor rather than have it set up to use NODE-C.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A scheduled 500 million row data extract started right about the time NODE-B rejoined the cluster. Now we have a large data load taking place plus a full SST to NODE-B.&lt;/p&gt;
&lt;p&gt;Let’s now talk about how we helped to speed up NODE-B going from Joined to Synced.&lt;/p&gt;
&lt;h2 id="recommendations"&gt;Recommendations&lt;/h2&gt;
&lt;p&gt;We upped the slave threads from 2 to 8. This is equal to the number of CPUs on the system. Exceeding 8 threads could cause a performance impact.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;mysql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-mysql" data-lang="mysql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kt"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;wsrep_slave_threads&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Changed pxc_strict_mode from PERMISSIVE to DISABLED. This was done to stop all the PXC warnings being written to mysqld.log.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;mysql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-mysql" data-lang="mysql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kt"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;pxc_strick_mode&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;disabled&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Relaxed ACID compliance. I made these changes to help NODE-B get back to a Synced state more quickly. I don’t recommend relaxing ACID compliance; this change should only be made if the client fully understands the risk.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;mysql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-mysql" data-lang="mysql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kt"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;innodb_flush_log_at_trx_commit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="kt"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sync_binlog&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
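&lt;p&gt;Once NODE-B reached Synced, these relaxed settings should be reverted. Assuming the pre-incident values were the durable defaults (and PERMISSIVE for pxc_strict_mode, as the cluster had it before), that would be:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__content"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;set global innodb_flush_log_at_trx_commit = 1;
set global sync_binlog = 1;
set global pxc_strict_mode = PERMISSIVE;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;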
&lt;p&gt;We let these changes bake in for about two hours. The client did not want to stop the data extract just yet; they were open to the idea, but did not want to lose the work that had already been completed. This did not bother me, because I knew NODE-B was working as it should be.&lt;/p&gt;
&lt;h2 id="improvement"&gt;Improvement&lt;/h2&gt;
&lt;p&gt;NODE-B&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SHOW STATUS LIKE ‘wsrep_last%';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_applied | 32902200 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| wsrep_last_committed | 40902100 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------+----------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SHOW STATUS LIKE 'wsrep_cert_deps_distance';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +--------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +--------------------------+---------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | wsrep_cert_deps_distance | 86.81 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +--------------------------+---------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +---------------------------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +---------------------------+--------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | wsrep_local_state_comment | Joined |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +---------------------------+--------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now let’s look at our primary read/write NODE-A:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SHOW STATUS LIKE ‘wsrep_last%';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | Variable_name | Value |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | wsrep_last_applied | 43900992 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +----------------------+----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; | wsrep_last_committed | 43902200 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; +----------------------+----------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As we can now see, NODE-B is catching up much faster than before. The committed seqnos are only 3,000,100 apart now, where the gap had previously been 20,100,000.&lt;/p&gt;
&lt;p&gt;Clearly, we made some significant progress. The client was still concerned about only having 2 of the 3 nodes up. We had a couple of choices: stop the data extract, or be patient for a bit longer. The client chose patience. After another 2.5 hours, NODE-B had caught up to its peers and switched to Synced.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;NODE-B was stuck in a Joined state due to a very undersized gcache; the default size had never been changed.&lt;/p&gt;
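&lt;p&gt;For reference, the gcache size is set through the Galera provider options, for example in my.cnf (the 2G value below is purely illustrative; size it for your own write volume):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__content"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;[mysqld]
wsrep_provider_options="gcache.size=2G"&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;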
&lt;ul&gt;
&lt;li&gt;Review your Percona XtraDB Cluster settings; if you have an extensive data set, the default gcache size won’t be enough. Not sure how to best size the cache? Miguel Angel Nieto wrote a great blog post to help size the Galera cache.&lt;/li&gt;
&lt;li&gt;Give the cluster a regular health check. This is critical as your database grows.&lt;/li&gt;
&lt;li&gt;Make sure all your tables have primary keys. Without primary keys on all tables, your performance will suffer.&lt;/li&gt;
&lt;li&gt;Make sure you are getting all the performance you can. Useful link: Tips for MySQL 5.7 Database Tuning and Performance.&lt;/li&gt;
&lt;li&gt;As you can see, adjusting the number of threads applying transactions can make a big difference. Just don’t go overboard.&lt;/li&gt;
&lt;li&gt;If possible, large data loads should be done in off-hours.&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <media:thumbnail url="https://percona.community/blog/2021/02/Blog-Community-Pic_hu_1e650b39cb6c10b6.jpg"/>
      <media:content url="https://percona.community/blog/2021/02/Blog-Community-Pic_hu_7d2b6a1704ddd161.jpg" medium="image"/>
    </item>
    <item>
      <title>Embracing the Stream</title>
      <link>https://percona.community/blog/2020/12/10/embracing-the-stream/</link>
      <guid>https://percona.community/blog/2020/12/10/embracing-the-stream/</guid>
      <pubDate>Thu, 10 Dec 2020 10:33:29 UTC</pubDate>
      <description>So this happened: CentOS Project shifts focus to CentOS Stream</description>
      <content:encoded>&lt;p&gt;So this happened: &lt;a href="https://lists.centos.org/pipermail/centos-announce/2020-December/048208.html" target="_blank" rel="noopener noreferrer"&gt;CentOS Project shifts focus to CentOS Stream&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The future of the CentOS Project is CentOS Stream, and over the next year we’ll be shifting focus from CentOS Linux, the rebuild of Red Hat Enterprise Linux (RHEL), to CentOS Stream, which tracks just ahead of a current RHEL release. CentOS Linux 8, as a rebuild of RHEL 8, will end at the end of 2021. CentOS Stream continues after that date, serving as the upstream (development) branch of Red Hat Enterprise Linux.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;And a lot of people react like this: &lt;em&gt;Oracle buys Sun: Solaris Unix, Sun servers/workstation, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to &gt;/dev/null. Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP. (&lt;a href="https://twitter.com/nixcraft/status/1336348208184741888" target="_blank" rel="noopener noreferrer"&gt;Tweet&lt;/a&gt;)&lt;/em&gt; So it seems my opinion is the unpopular one: CentOS switching to Stream is not bad at all. When you wanted to run Openstack on CentOS in 2015, you needed to enable &lt;a href="https://fedoraproject.org/wiki/EPEL" target="_blank" rel="noopener noreferrer"&gt;EPEL&lt;/a&gt; to even begin an install. The first thing this did was literally replace every single package in the install. That was because CentOS at that time was literally making Debian Stable look young. And we see similar problems with Ubuntu LTS, for what it’s worth. Ubuntu LTS comes out every 2 years, and that’s kind of ok-ish, but it lasts 5 years, which is nonsensical. It was not, in the past.&lt;/p&gt;
&lt;h2 id="so-what-changed"&gt;So what changed?&lt;/h2&gt;
&lt;p&gt;Software Development. We have been moving to a platform-based development approach, leveraging the wins from DevOps. “Kris, that’s corporate bullshit.” It’s not, though. Let me spell it out plainly for you.&lt;/p&gt;
&lt;h3 id="programming-languages-are-platforms-powered-by-tools"&gt;Programming languages are platforms powered by tools&lt;/h3&gt;
&lt;p&gt;People these days do not program in an editor, with a compiler. They use Github or Gitlab, with many integrations, and a local IDE. They commit to a VCS (git, actually, the world converged on one single VCS), and trigger a bunch of things. Typechecks, Reformatters, Tests, but also Code Quality Metrics, and Security Scanners. Even starting a new programming language in 2020 is not as easy as it was in the past. Having a language is not enough, because you do not only need a language and maybe a standard library, but also a JetBrains Product supporting it, SonarQube support, XRay integration, gitlab-ci.yml examples and so on. Basically, there is a huge infrastructure system designed to support development, and whatever you start needs to fit into it, right from the start. That is because we have come to rely on an entire ecosystem of tooling to make our developers faster, and to enforce uniform standards across the group. And that is a good thing, which can help us to become better programmers.&lt;/p&gt;
&lt;h3 id="github-and-gitlab-are-tools-for-conversations-about-code-among-developers"&gt;Github and Gitlab are tools for conversations about code among developers&lt;/h3&gt;
&lt;p&gt;We also have come to rely on tooling to enable collaboration, and structured discussion about code, since we as programmers no longer work alone. A good part of the value of Gitlab, Github and similar is enabling useful cooperation between developers, in ways that Developers value. Another good part of the value is extracted at the production end of these platforms: We produce artifacts of builds, automatically and in reproducible ways. Which includes also knowing things about these artifacts - for example, what went into producing them and being able to report on these things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dependencies&lt;/li&gt;
&lt;li&gt;Licenses&lt;/li&gt;
&lt;li&gt;Versions&lt;/li&gt;
&lt;li&gt;Vulnerabilities&lt;/li&gt;
&lt;li&gt;Commit frequency and time to fix for each dependency, abandonware alert&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and many more things. With these processes, and repositories, and with one other ingredient, we have made rollouts and rollbacks an automated and uniform procedure, provided we find a way to manage and evolve state properly. Compared to the hand crafted bespoke rollout and rollback procedures of the 2010s, this is tremendous progress.&lt;/p&gt;
&lt;h3 id="immutable-infrastructure-and-reproducible-builds"&gt;Immutable infrastructure, and reproducible builds&lt;/h3&gt;
&lt;p&gt;This other ingredient is immutable infrastructure. It is the basic idea that we do no longer manipulate the state of the base image we run our code on, ever, after it is deployed. It’s basically death to Puppet and its likes. Instead we change the build process, producing immutable images, and quickly rebuild and redeploy. We deploy the base image, and then supply secrets, runtime config and control config in other, more appropriate ways. Things like Vault, a consensus system such as Zookeeper, or similar mechanisms come to mind. It allows us to orchestrate change across a fleet of instances, all alike, in a way that guarantees consistency across our fleet, in an age where all computing has become distributed computing. The same thinking can be applied to the actual base operating system of the host, where we remove application installs completely from the base operating system. Instead we provide a mechanism to mount and unmount application installs, including their dependencies, in the form of virtual machine images, container images or serverless function deployments (also containers, but with fewer buttons). As a consequence, everything becomes single-user, single-tenant - one image contains only Postgres, another one only your static images webserver (images supplied from an external mountable volume), and a third one only your production Python application plus runtime environment. With only one thing in the container, Linux UIDs no longer have a useful separation function, and other isolation and separation mechanisms take their place:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;virtualization,&lt;/li&gt;
&lt;li&gt;CGroups,&lt;/li&gt;
&lt;li&gt;Namespaces,&lt;/li&gt;
&lt;li&gt;Seccomp,&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and similar. They are arguably more powerful, anyway. This also forms a kind of argument in the great “Is curlbash or even sudo curlbash still a bad thing?” debate of our times, but I am unsure which (I’m not: in a single-user single-tenant environment curlbashing into that environment should not be a security problem, but you get problems proving the provenance of your code. Which you would not have, had you used another, less casual method of acquiring that dependency).&lt;/p&gt;
&lt;h3 id="images-as-building-blocks-for-applications"&gt;Images as building blocks for applications&lt;/h3&gt;
&lt;p&gt;So now we can use entire applications, with configuration provided and injected at runtime, to construct services, and we can add relatively tiny bits of our own code to build our own services on top of existing services, provided by the environment. We get Helm Charts for Kubernetes, we get &lt;a href="https://www.infoq.com/articles/serverless-sea-change/" target="_blank" rel="noopener noreferrer"&gt;The Serverless Sea Change&lt;/a&gt;, and Step Functions. We also get Nocode, Codeless or similar attempts at building certain things only from services without actual coding. But it is more pervasive than this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Unifi Control Plane uses multiple Java processes and one Mongodb. It can be dockered into one container, or can be provided as helm chart or as a docker-compose with multiple containers, for better scalability and maintenance.&lt;/li&gt;
&lt;li&gt;The gitlab Omnibus uses a single container, again, with Postgres, Redis and a lot of internal state plus Chef to deploy about a dozen components, but differentiated deploys for the individual components in a K8s context also exist.&lt;/li&gt;
&lt;li&gt;Things like a Jitsi setup can be packaged into a single, relatively simple docker-compose.yml, and will assemble themselves from images mostly automatically. The result will run on almost any operating system substrate, as long as it provides a Linux kernel syscall interface.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="fighting-conways-law"&gt;Fighting Conway’s law&lt;/h3&gt;
&lt;p&gt;And that is kind of the point: By packing all dependencies into the container or VM image itself, the base operating system hardly matters any more. It allows us to move on, each at their own speed, on a per-project basis. The project will bring its own database, cache, runtime and libraries with itself, without version conflicts, and without waiting for the distro to upgrade them, or to provide them at all. Conversely it allows the Distro to move to Stream: They are finally free from slow moving OSS projects preventing them from upgrading local components, because one of them is not yet ready to move. Even teams in the Enterprise are now free to move at their own speed, because they no longer have to wait for half a dozen stakeholders to get to the Technical Debt Section of their backlog. The main point is, in my opinion, that it is okay and normal for the application to use a different “No longer a full OS” than what the host uses. In acknowledging that, both can reduce scope and size, and optimize. This is a good thing, and will speed up development. So in a world where components and their dependencies are being packaged as single-user single-tenancy units of execution (virtual machines, containers and the like), CentOS moving to Streams is not only acknowledging that change, it also forced the slower half of the world to acknowledge this, and to embrace it. I say: This is a good thing. And if you rant “Stability goes out of the window!” - check your calendar and your processes. It’s 2020. Act like it. One of the major innovations in how we do computers in the last decade has been establishing the beginnings of a certifiable process for building the things we run.
Or, as &lt;a href="https://isotopp.github.io/Christoph%20Petrausch" target="_blank" rel="noopener noreferrer"&gt;Christoph Petrausch&lt;/a&gt; puts it in &lt;a href="https://twitter.com/hikhvar/status/1336608880013488130" target="_blank" rel="noopener noreferrer"&gt;this tweet&lt;/a&gt;: “If your compliance is based on certifying the running end product instead of the process that built it, your organisation will not be able to keep up with the development speed of others.”  &lt;/p&gt;
&lt;p&gt;&lt;em&gt;First published on &lt;a href="https://blog.koehntopp.info/" target="_blank" rel="noopener noreferrer"&gt;https://blog.koehntopp.info/&lt;/a&gt; and syndicated here with permission of the author.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Kristian Köhntopp</author>
      <category>centos</category>
      <category>DevOps</category>
      <category>GitHub</category>
      <category>Gitlab</category>
      <category>koehntopp</category>
      <category>Linux</category>
      <category>Open Source Databases</category>
      <category>RedHat</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/12/stream-migrate-now_hu_50c1668258a508b7.jpg"/>
      <media:content url="https://percona.community/blog/2020/12/stream-migrate-now_hu_45909977bd60fdc1.jpg" medium="image"/>
    </item>
    <item>
      <title>Not JOINing on PERFORMANCE_SCHEMA</title>
      <link>https://percona.community/blog/2020/12/01/not-joining-on-performance_schema/</link>
      <guid>https://percona.community/blog/2020/12/01/not-joining-on-performance_schema/</guid>
      <pubDate>Tue, 01 Dec 2020 19:22:46 UTC</pubDate>
      <description>The tables in PERFORMANCE_SCHEMA (P_S) are not actually tables. You should not think of them as tables, even if your SQL works on them. You should not JOIN them, and you should not GROUP or ORDER BY them.</description>
      <content:encoded>&lt;p&gt;The tables in &lt;code&gt;PERFORMANCE_SCHEMA&lt;/code&gt; (&lt;code&gt;P_S&lt;/code&gt;) are not actually tables. You should not think of them as tables, even if your SQL works on them. You should not JOIN them, and you should not GROUP or ORDER BY them.&lt;/p&gt;
&lt;h2 id="unlocked-memory-buffers-without-indexes"&gt;Unlocked memory buffers without indexes&lt;/h2&gt;
&lt;p&gt;The stuff in &lt;code&gt;P_S&lt;/code&gt; has been created with “keep the impact on production small” in mind. That is, from a user’s point of view, you can think of them as unlocked memory buffers - the values in there change as you look at them, and there are precisely zero stability guarantees. There are also no indexes.&lt;/p&gt;
&lt;h3 id="unstable-comparisons"&gt;Unstable comparisons&lt;/h3&gt;
&lt;p&gt;When sorting a table for a GROUP BY or ORDER BY, it may be necessary to compare the value of one row to other rows multiple times in order to determine where the row goes. The value being compared can change while this happens, and will change more often the more load the server has. The end result is unstable. Also, as the table you sort may be larger on a server under load, a row may need more comparisons, making this even more likely to happen. The query may produce correct results on your stable, underutilized test systems, but the monitoring you base on it will fail on a loaded production system. Do not use GROUP BY or ORDER BY on &lt;code&gt;P_S&lt;/code&gt; tables.&lt;/p&gt;
&lt;h3 id="no-indexes-meaning-slow-joins-on-loaded-systems"&gt;No indexes, meaning slow joins on loaded systems&lt;/h3&gt;
&lt;p&gt;When JOINing a &lt;code&gt;P_S&lt;/code&gt; table against other tables, the join is done without indexes. There are no indexes defined in &lt;code&gt;P_S&lt;/code&gt;, and if there were, they would make updates to values in &lt;code&gt;P_S&lt;/code&gt; more expensive, which is against the initial design tenet - “keep the impact on production small”. In practice that means your join against the processlist or session variables tables in &lt;code&gt;P_S&lt;/code&gt; does little harm in test, but will fail in production environments with many connections. You will be losing monitoring the moment you need it most - under load, in critical situations. Do not JOIN &lt;code&gt;P_S&lt;/code&gt; tables to anything.&lt;/p&gt;
&lt;h2 id="how-to-monitor"&gt;How to monitor&lt;/h2&gt;
&lt;p&gt;About the only type of query you can successfully run on &lt;code&gt;P_S&lt;/code&gt; is a single table &lt;code&gt;SELECT * FROM P_S.table&lt;/code&gt;, maybe with a simple &lt;code&gt;WHERE&lt;/code&gt; clause. That is, you can download and materialize data from a single &lt;code&gt;P_S&lt;/code&gt; table at a time, unsorted, unaggregated. Joining to other tables, aggregating, and sorting have to be done on tables that are not &lt;code&gt;P_S&lt;/code&gt; tables. There are multiple ways to do this.&lt;/p&gt;
&lt;h3 id="subqueries-without-optimization"&gt;Subqueries, without optimization&lt;/h3&gt;
&lt;p&gt;It used to be that the MySQL optimizer did not resolve simple subqueries properly. So&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; select &lt;complicated stuff&gt; from
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -&gt; ( select * from performance_schema.sometable ) as t
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -&gt; order by &lt;something&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;used to work. The subquery &lt;code&gt;t&lt;/code&gt; would materialize the &lt;code&gt;P_S&lt;/code&gt; table as whatever your version of MySQL used for implicit temporary tables, and the rest of the query resolution would happen on the materialized temptable. This is a snapshot, and would be stable. It still would not have indexes. And it still would not add up to 100%, of course. That is, queries like Dennis Kaarsemaker’s “How loaded is the SQL_THREAD” replication load analysis never came out at 100%, because the various values changed while the temporary table was being materialized, so you do not get a consistent snapshot (and by construction, this kind of consistency is impossible in &lt;code&gt;P_S&lt;/code&gt;). Anyway, with older versions of MySQL, this results in the query plan we want. Since MySQL 5.7, this no longer works:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; select version();
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| version() |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 8.0.22 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; explain select * from ( select * from processlist ) as t;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------------+------------+------+---------------+------+---------+------+------+----------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------------+------------+------+---------------+------+---------+------+------+----------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 | SIMPLE | processlist | NULL | ALL | NULL | NULL | NULL | NULL | 256 | 100.00 | NULL |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------------+------------+------+---------------+------+---------+------+------+----------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set, 1 warning (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Newer MySQL (5.7 and above) will apply the &lt;code&gt;derived_merge&lt;/code&gt; optimization and fold the subquery into the outer query, resulting in a rewritten single query that again is executed on &lt;code&gt;P_S&lt;/code&gt; directly. You either need to &lt;code&gt;SET SESSION optimizer_switch = "derived_merge=off";&lt;/code&gt; or provide an advanced &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html#optimizer-hints-table-level" target="_blank" rel="noopener noreferrer"&gt;MySQL 8 optimizer hint&lt;/a&gt; to prevent the optimizer from ruining your cunning plan:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; explain select /*+ NO_MERGE(t) */ * from ( select * from processlist ) as t;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------------+------------+------+---------------+------+---------+------+------+----------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------------+------------+------+---------------+------+---------+------+------+----------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 | PRIMARY | &lt;derived2&gt; | NULL | ALL | NULL | NULL | NULL | NULL | 256 | 100.00 | NULL |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2 | DERIVED | processlist | NULL | ALL | NULL | NULL | NULL | NULL | 256 | 100.00 | NULL |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+-------------+-------------+------------+------+---------------+------+---------+------+------+----------+-------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here we get the &lt;code&gt;DERIVED&lt;/code&gt; table as a non-&lt;code&gt;P_S&lt;/code&gt; temptable, and then run our “advanced” SQL on it as the &lt;code&gt;PRIMARY&lt;/code&gt; query.&lt;/p&gt;
&lt;h3 id="in-the-client"&gt;In the client&lt;/h3&gt;
&lt;p&gt;The alternative is, of course, to completely download the tables in question into client-side hashes, and then perform the required operations on them on the client side, in memory. The important thing here is to limit the amount of memory spent - do not download unconstrained result sets into your client monitoring program. Then use a linearly scaling join method to construct the connections between the tables. Effectively, load the data into hashes, and then program a client-side hash join. This scales additively (n + m) instead of multiplicatively (n * m), so you can survive this. This is the recommended method.&lt;/p&gt;
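&lt;p&gt;As a minimal sketch of such a client-side hash join: assume the two &lt;code&gt;P_S&lt;/code&gt; tables have already been downloaded, one single-table SELECT each, into lists of dicts (the sample rows below are made up to keep the sketch runnable without a server):&lt;/p&gt;

```python
# Client-side hash join over two downloaded P_S tables.
# In a real monitoring client, each list would come from one separate
# "SELECT * FROM performance_schema.<table>" download; the rows here
# are hypothetical stand-ins.

def hash_join(left_rows, right_rows, key):
    """Join two lists of dict rows on `key` in O(n + m) time."""
    # Build phase: index one side by the join key.
    index = {}
    for row in right_rows:
        index.setdefault(row[key], []).append(row)
    # Probe phase: one pass over the other side, constant-time lookups.
    joined = []
    for row in left_rows:
        for match in index.get(row[key], []):
            joined.append({**match, **row})
    return joined

# Stand-ins for materialized P_S downloads (hypothetical values).
threads = [
    {"thread_id": 1, "processlist_user": "app"},
    {"thread_id": 2, "processlist_user": "batch"},
]
statements = [
    {"thread_id": 1, "sql_text": "SELECT 1"},
    {"thread_id": 1, "sql_text": "SELECT 2"},
    {"thread_id": 3, "sql_text": "SELECT 3"},  # no matching thread: dropped
]

result = hash_join(statements, threads, "thread_id")
```

&lt;p&gt;Each table is scanned exactly once, so the cost stays n + m even when the server is under load - which is the whole point of moving the join out of the server.&lt;/p&gt;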
&lt;h2 id="who-is-doing-it-wrong"&gt;Who is doing it wrong?&lt;/h2&gt;
&lt;p&gt;Writing monitoring queries that use &lt;code&gt;P_S&lt;/code&gt; wrongly is common - it speaks SQL, it handles &lt;code&gt;SHOW CREATE TABLE&lt;/code&gt;, so it is treated as a table and exposed to full SQL all the time. And on idle test boxen, it even looks like it works. At work, we see this in our own code (still using a deprecated Diamond collector) and in SolarWinds, née Vividcortex. SolarWinds kindly highlights itself:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;sql&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sql" data-lang="sql"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;-- Most time consuming query - Coming from solar winds monitoring itself ¯_(ツ)_/¯
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ifnull&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;sql_text&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ifnull&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_user&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;ifnull&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span 
class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_host&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;events_statements_history&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;left&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;using&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="o"&gt;`=?&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="o"&gt;`=?&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="c1"&gt;-- Coming from the "table ownership write identifier".
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;cnt&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;digest_text&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;current_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_user&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;system_user&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;events_statements_history&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;esh&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;inner&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="o"&gt;`=`&lt;/span&gt;&lt;span class="n"&gt;esh&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;event_name&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;current_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;group&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;digest_text&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;current_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_user&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="c1"&gt;-- Coming from diamond collector
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;&lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_user&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;sbt&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;variable_value&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;status_by_thread&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;sbt&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;performance_schema&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;using&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;sbt&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;variable_name&lt;/span&gt;&lt;span class="o"&gt;`=?&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_user&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="k"&gt;group&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;processlist_user&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;span class="n"&gt;variable_value&lt;/span&gt;&lt;span class="o"&gt;`&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Many of the above examples fail in multiple ways: using JOINs that scale badly (this is how we spotted them), and also using unstable sorting. We also see ORDER BY statements in the &lt;a href="https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mysql/mysql.go#L376" target="_blank" rel="noopener noreferrer"&gt;Telegraf MySQL plugin&lt;/a&gt; in one place. It uses LIMIT, but if the ORDER BY does not work (i.e. does not actually sort), you cut off randomly.&lt;/p&gt;
&lt;h2 id="is-performance_schema-broken"&gt;Is PERFORMANCE_SCHEMA broken?&lt;/h2&gt;
&lt;p&gt;Clearly, it is not. Just badly misunderstood. The alternative is &lt;code&gt;INFORMATION_SCHEMA&lt;/code&gt;, which often locks, and that can actually be deadly: just &lt;code&gt;select * from INFORMATION_SCHEMA.INNODB_BUFFER_PAGE&lt;/code&gt; on a server with a few hundred GB of buffer pool, humming at 10k QPS. The query will freeze the server completely for its runtime – which with a large buffer pool can be substantial. I’d rather have this in &lt;code&gt;P_S&lt;/code&gt; and then deal with the vagaries of the data changing while I read it than lose an important production server.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;First published on &lt;a href="https://blog.koehntopp.info/" target="_blank" rel="noopener noreferrer"&gt;https://blog.koehntopp.info/&lt;/a&gt; and syndicated here with permission of the author.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Kristian Köhntopp</author>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/12/Screenshot-2020-12-01-at-23.21.51_hu_88a6bdf0c99058c4.jpg"/>
      <media:content url="https://percona.community/blog/2020/12/Screenshot-2020-12-01-at-23.21.51_hu_146e54f3884427f0.jpg" medium="image"/>
    </item>
    <item>
      <title>On the Observability of Outliers</title>
      <link>https://percona.community/blog/2020/11/23/on-the-observability-of-outliers/</link>
      <guid>https://percona.community/blog/2020/11/23/on-the-observability-of-outliers/</guid>
      <pubDate>Mon, 23 Nov 2020 17:41:16 UTC</pubDate>
      <description>At work, I am in an ongoing discussion with a number of people on the Observability of Outliers. It started with the age-old question “How do I find slow queries in my application?” aka “What would I want from tooling to get that data and where should that tooling sit?”</description>
      <content:encoded>&lt;p&gt;At work, I am in an ongoing discussion with a number of people on the Observability of Outliers. It started with the age-old question “How do I find slow queries in my application?” aka “What would I want from tooling to get that data and where should that tooling sit?”&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As a developer, I just want to automatically identify and isolate slow queries!&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Where I work, we do have &lt;a href="https://www.solarwinds.com/database-performance-monitor" target="_blank" rel="noopener noreferrer"&gt;SolarWinds Database Performance Monitor&lt;/a&gt; aka Vividcortex to find slow queries, so that helps. But that collects data at the database, which means you get to see slow queries, but maybe not application context. There is also work done by a few developers which instead collects query strings, query execution times and query counts at the application. This has access to the call stack, so it can tell you which code generated the query that was slow. It also channels this data into events (what we have instead of &lt;a href="https://www.honeycomb.io/" target="_blank" rel="noopener noreferrer"&gt;Honeycomb&lt;/a&gt;), and that is particularly useful, because now you can generate aggregates and keep the link from the aggregates to the constituting events.&lt;/p&gt;
&lt;h2 id="how-do-you-find-outliers"&gt;How do you find outliers?&lt;/h2&gt;
&lt;p&gt;“That’s easy,” people will usually say, and then start with the average plus or minus one standard deviation: “We’ll construct an n-stddev-wide corridor around the average and then look at everything outside it.”&lt;/p&gt;
&lt;p&gt;&lt;a href="https://isotopp.github.io/uploads/2020/11/obs-no.png" target="_blank" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;No.&lt;/em&gt; That is descriptive statistics for normal distributions and for them to work we need to actually have a normal distribution. Averages and Standard Deviations work on normal distributions. So the first thing we need to do is to look at the data and ensure that we actually have a normal distribution.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://isotopp.github.io/uploads/2020/11/obs-anscombe.png" target="_blank" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Anscombe’s Quartet is a set of graphs with an identical number of points and identical descriptive statistics, but clearly extremely different distributions.&lt;/em&gt;
When you apply the descriptive statistics of averages and standard deviations to things that are not a normal distribution (see &lt;a href="https://en.wikipedia.org/wiki/Anscombe%27s_quartet" target="_blank" rel="noopener noreferrer"&gt;Anscombe’s Quartet&lt;/a&gt;), they do not tell you much about the data: all the graphs in the infamous Quartet have the same descriptive stats (more than just average and stddev, even), but are clearly completely different. So what we want is a graph of the data. For a time series – which is what we usually get when dealing with metrics – a good way to plot the data is a heatmap. For the given problem, the heatmap more often than not looks like this:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://isotopp.github.io/uploads/2020/11/obs-heatmap.png" target="_blank" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;We partition the time axis into buckets of - say - 10s each, and then bucket execution times linearly or logarithmically. For each query we run, we determine the bucket it goes into and increment by one. The resulting numbers are plotted as pixels - darker, redder means more queries in that bucket. A flat 2D plot of three dimensional data.&lt;/em&gt; What you see here is a bi- or multipartite distribution. It is a common case when benchmarking: We have a (often larger) number of normally executed queries, and a second set (often smaller) of queries that need our attention because they are executed slower. The slow set is also often run with unstable execution times – an important secondary observation.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://isotopp.github.io/uploads/2020/11/obs-mixture.png" target="_blank" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is not a normal distribution, but a thing composed of two other things (hence bipartite), each of which hopefully can in itself be adequately modelled as a normal distribution: a &lt;a href="https://en.wikipedia.org/wiki/Mixture_model#Gaussian_mixture_model" target="_blank" rel="noopener noreferrer"&gt;Gaussian mixture&lt;/a&gt;. Luckily, we do not actually have to deal with the math of these mixtures (I hope you did not follow the Wikipedia link :-) ) when we want to find slow queries. We just want to be able to separate them, which could even be done manually, and then want the back pointer to the events that constitute the cluster of outliers we identified.&lt;/p&gt;
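&lt;p&gt;For well-separated bipartite data, the separation really does not need mixture-model math. One naive approach (a sketch, not any particular tool’s algorithm) is to split the sorted execution times at their largest gap, keeping the original indices so the slow cluster can be traced back to its events:&lt;/p&gt;

```python
def split_fast_slow(durations_ms):
    """Split samples into a 'fast' and a 'slow' cluster at the largest
    gap in the sorted values; return the original event indices of each
    side, so the outlier cluster can be traced back to its events."""
    order = sorted(range(len(durations_ms)), key=durations_ms.__getitem__)
    vals = [durations_ms[i] for i in order]
    # index of the largest jump between consecutive sorted values
    gaps = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
    split = gaps.index(max(gaps)) + 1
    return sorted(order[:split]), sorted(order[split:])
```

&lt;p&gt;With a run of ~20ms queries and a second population in the seconds range, the largest gap sits between the two clusters, and the returned indices are the back pointers into the event store.&lt;/p&gt;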
&lt;h2 id="unstable-execution-times"&gt;Unstable execution times&lt;/h2&gt;
&lt;p&gt;I mentioned above: “They are also often run with unstable execution times – an important secondary observation.” Slow queries are often slow because they cannot use indexes. When a tree index can be used, the number of comparisons needed to find the elements we are searching for is some kind of log of the table size. The end result is usually 4 – there are 3-5 lookups&lt;strong&gt;¹&lt;/strong&gt; needed in about any tree index to do a point lookup of the first element of a result. That means that the execution time for any query using proper indexes is usually extremely stable. When indexes cannot be used, the lookup times are scan times – linear functions of the result position or size. This varies a lot more, so we get much more variable execution times for slow queries, and the jitter only makes it worse: your “this query takes 20s instead of 20ms to run” degrades to the even more annoying “well, sometimes it’s 5s, and sometimes 40s”.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;¹&lt;/strong&gt; In MySQL, we work with a 16KB block size, and in indexes we usually have a fan out of a few hundred to one thousand per block or tree level. The depth of the index tree is the number of comparisons, and it is the log to the base of (fan out) of the table length in records. This becomes ln(table length)/ln(fan out), because that is how you get arbitrary-base logs from ln(). For a fan out of 100, we get a depth of 3 for 1 million records, and 4.5 for 1 billion. For a fan out of 1000, it’s 2 for the million, and 3 for the billion. Plus one for the actual record, so the magical database number is 4: it’s always 4 media accesses to get any record through a tree index – stable execution times for indexed queries, because math works.&lt;/p&gt;
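&lt;p&gt;The footnote’s arithmetic is easy to verify directly:&lt;/p&gt;

```python
import math

def btree_lookups(rows, fan_out):
    """Media accesses for a point lookup through a tree index:
    tree depth ln(rows)/ln(fan_out), plus one for the record itself."""
    return math.log(rows) / math.log(fan_out) + 1

# fan out 100:  depth 3 at 1 million rows, 4.5 at 1 billion (plus 1 each)
# fan out 1000: depth 2 at 1 million rows, 3 at 1 billion (plus 1 each)
```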
&lt;h2 id="where-monitoring-ends-and-observability-begins"&gt;Where Monitoring ends and Observability begins&lt;/h2&gt;
&lt;p&gt;With measurements, aggregations, and the visualisation as a heatmap, I can identify my outliers – that is, I learn that I have them and where they are in time and maybe space (a group of hosts). But with common monitoring agents such as Diamond or Telegraf, what is being recorded are numbers or even aggregates of numbers – the quantisation into time and value buckets happens in the client, and all that is recorded in monitoring is “there have been 4 queries of 4-8ms run time at 17:10:20 on host randomdb-2029”. We don’t know what queries they were, where they came from, or what other context may be helpful. With events, we optionally get rich records for each query – query text, stack trace context, runtime, hostname, database pool name, and many other pieces of information. They are aggregated as they come in, or can be aggregated along other, exotic dimensions after the fact. And best of all, once we find an outlier, we can go back from the outlier and find all the events that are within the boundary conditions of the section of the heatmap that we have marked up as an outlier. This is also the fundamental difference between monitoring (“We know we had an abnormal condition in this section of time and space”) and observability (“… and these are the events that make up the abnormality, and from them we can see why and how things went wrong.”). (Written after a longer call with a colleague on this subject.)&lt;/p&gt;
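&lt;p&gt;The distinction is easy to see in code: a monitoring agent keeps only the counts, while an event store keeps the counts plus back pointers to the events behind each count, so a marked-up outlier bucket can be expanded back into its raw events. A toy illustration (the field names are made up):&lt;/p&gt;

```python
from collections import defaultdict

events = [
    {"id": 1, "query": "SELECT 1", "host": "randomdb-2029", "ms": 5},
    {"id": 2, "query": "SELECT 2", "host": "randomdb-2029", "ms": 6},
    {"id": 3, "query": "UPDATE t", "host": "randomdb-2029", "ms": 20000},
]

# Monitoring: aggregate to plain numbers; the context is gone.
counts = defaultdict(int)
for e in events:
    counts[e["ms"] // 1000] += 1          # 1-second latency buckets

# Observability: the same aggregate, but each bucket keeps back pointers.
buckets = defaultdict(list)
for e in events:
    buckets[e["ms"] // 1000].append(e["id"])

slow_event_ids = buckets[20]              # expand the outlier bucket
```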
&lt;p&gt;&lt;em&gt;First published on &lt;a href="https://blog.koehntopp.info/" target="_blank" rel="noopener noreferrer"&gt;https://blog.koehntopp.info/&lt;/a&gt; and syndicated here with permission of the author.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Kristian Köhntopp</author>
      <category>Honeycomb</category>
      <category>Monitoring</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>SolarWinds</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/11/obs-no_hu_14a32059177eeed0.jpg"/>
      <media:content url="https://percona.community/blog/2020/11/obs-no_hu_414f3a22a9e300c7.jpg" medium="image"/>
    </item>
    <item>
      <title>Zero downtime schema change with Liquibase &amp; Percona</title>
      <link>https://percona.community/blog/2020/10/26/zero-downtime-schema-change-with-liquibase-percona/</link>
      <guid>https://percona.community/blog/2020/10/26/zero-downtime-schema-change-with-liquibase-percona/</guid>
      <pubDate>Mon, 26 Oct 2020 14:14:50 UTC</pubDate>
      <description>I am always surprised to learn something new whenever I talk to a member of the open-source community. No matter how much I think I have heard of every use case there is for Liquibase (and database change management in general), I always hear something that makes this space still feel new. There’s always something left to discover.</description>
      <content:encoded>&lt;p&gt;I am always surprised to learn something new whenever I talk to a member of the open-source community. No matter how much I think I have heard of every use case there is for &lt;a href="https://www.liquibase.org" target="_blank" rel="noopener noreferrer"&gt;Liquibase&lt;/a&gt; (and database change management in general), I always hear something that makes this space still feel new. There’s always something left to discover.&lt;/p&gt;
&lt;p&gt;Today, that new something is the problem of how to perform large batches of changes with SQL ALTER TABLE statements. No problem, you say? Okay, but this ALTER needs to happen in production. Still not worried? Well, let’s say you have millions of rows, and because you’re so successful, you have many transactions happening per minute (maybe even per second). Yeah… now we are talking. You can’t alter the table because you can’t afford to &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/alter-table.html" target="_blank" rel="noopener noreferrer"&gt;lock that table&lt;/a&gt; for the 30 minutes (or more) it may take to execute the ALTER command.&lt;/p&gt;
&lt;p&gt;Well, what do you do? A Liquibase user just spoke to me about this very use case, and told me they use &lt;a href="https://www.percona.com/doc/percona-toolkit/LATEST/index.html" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt; with MySQL to solve this problem. (Thanks, Erin Kolp!) In particular, they use &lt;a href="https://www.percona.com/doc/percona-toolkit/LATEST/pt-online-schema-change.html" target="_blank" rel="noopener noreferrer"&gt;pt-online-schema-change&lt;/a&gt; (which is part of the &lt;a href="https://www.percona.com/software/database-tools/percona-toolkit" target="_blank" rel="noopener noreferrer"&gt;Percona Toolkit&lt;/a&gt;), which allows you to perform the ALTER without interrupting table access. Under the covers it makes a temporary table from the actual table being altered, makes the DDL change, then copies the data over, and swaps out the tables.&lt;/p&gt;
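&lt;p&gt;Glossing over the triggers pt-online-schema-change also installs to keep the copy in sync while it runs, and its chunked row copying, the shadow-table dance corresponds roughly to a statement sequence like the following sketch (table and column names are illustrative, not the tool’s exact output):&lt;/p&gt;

```python
def online_alter_statements(table, alter_clause):
    """Sketch of the shadow-table approach: create a copy of the table,
    alter the copy, backfill it, then atomically swap the names.
    The real tool also installs triggers and copies rows in chunks."""
    new, old = f"_{table}_new", f"_{table}_old"
    return [
        f"CREATE TABLE {new} LIKE {table}",
        f"ALTER TABLE {new} {alter_clause}",
        f"INSERT INTO {new} SELECT * FROM {table}",
        f"RENAME TABLE {table} TO {old}, {new} TO {table}",
        f"DROP TABLE {old}",
    ]
```

&lt;p&gt;The atomic RENAME at the end is what keeps the table available to readers and writers for the whole duration of the copy.&lt;/p&gt;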
&lt;p&gt;Great! No more writing one-off scripts as a DBA to manage this problem! The advantage of using Percona may be obvious, but I think Percona said it best:&lt;/p&gt;
&lt;p&gt;“These tools are ideal alternatives to private or ‘one-off’ scripts, because they are professionally developed, formally tested, and fully documented. They are also fully self-contained, so installation is quick and easy, and no libraries are installed.”&lt;/p&gt;
&lt;p&gt;Percona and Liquibase are kindred spirits. I’ve seen folks rip out their old-school CI/CD setup for the database and replace it with Liquibase for the same reason. It was made and tested by a community, so you can have confidence it works and you can concentrate on delivery.&lt;/p&gt;
&lt;p&gt;So now that I have solved production interruptions due to changes like ALTERs that can cause tables to become unavailable, how do I automate this? By combining Liquibase with a &lt;a href="https://github.com/adangel/liquibase-percona" target="_blank" rel="noopener noreferrer"&gt;Liquibase/Percona extension&lt;/a&gt; written by &lt;a href="https://github.com/adangel" target="_blank" rel="noopener noreferrer"&gt;Andreas Dangel&lt;/a&gt;.
Here are the basic steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.liquibase.org/download" target="_blank" rel="noopener noreferrer"&gt;Download and install Liquibase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://www.percona.com/doc/percona-toolkit/LATEST/installation.html" target="_blank" rel="noopener noreferrer"&gt;Percona Toolkit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/adangel/liquibase-percona" target="_blank" rel="noopener noreferrer"&gt;Download the Percona Liquibase extension&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Place the jar file in your “lib” directory in your Liquibase install directory.&lt;/li&gt;
&lt;li&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/10/image1-1.png" alt="Zero downtime schema change with Liquibase &amp; Percona" /&gt;&lt;/figure&gt;&lt;/li&gt;
&lt;li&gt;Update any changeset that needs to use Percona to include &lt;code&gt;usePercona:true&lt;/code&gt; (see example below)&lt;/li&gt;
&lt;li&gt;Profit&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="example"&gt;Example&lt;/h2&gt;
&lt;p&gt;Here, we want to add a column:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;changeSet id="2" author="Alice"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    &lt;addColumn tableName="person"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;        &lt;column name="address" type="varchar(255)"/&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    &lt;/addColumn&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;/changeSet&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Corresponding command that Liquibase would run: &lt;code&gt;pt-online-schema-change --alter="ADD COLUMN address VARCHAR(255)" …&lt;/code&gt; Enjoy all the PTO you get because your deployments happen super fast with no downtime. Hey, in the meantime, why don’t you smack talk and shit post on social media? I’m available, I’ve got thick skin, and I’m online a bunch:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Twitter: &lt;a href="https://twitter.com/RonakRahman" target="_blank" rel="noopener noreferrer"&gt;@ronakrahman&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/ronak/" target="_blank" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/ronak/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/9yBwMtj" target="_blank" rel="noopener noreferrer"&gt;https://discord.gg/9yBwMtj&lt;/a&gt; (ronak#8065)&lt;/li&gt;
&lt;li&gt;Github: &lt;a href="https://github.com/ro-rah" target="_blank" rel="noopener noreferrer"&gt;https://github.com/ro-rah&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Ronak Rahman</author>
      <category>ronak.rahman</category>
      <category>Liquibase</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Tools</category>
      <category>Toolkit</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/image1-1_hu_a727cac9ef2c40d6.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/image1-1_hu_750e1a1d9d939c3e.jpg" medium="image"/>
    </item>
    <item>
      <title>Mayastor: Lightning Fast Storage for Kubernetes</title>
      <link>https://percona.community/blog/2020/10/23/mayastor-lightning-fast-storage-for-kubernetes/</link>
      <guid>https://percona.community/blog/2020/10/23/mayastor-lightning-fast-storage-for-kubernetes/</guid>
      <pubDate>Fri, 23 Oct 2020 14:03:08 UTC</pubDate>
      <description>At MayaData we like new tech. Tech that makes our databases perform better. Tech like lockless ring buffers, NVMe-oF, and Kubernetes. In this blog post we’re going to see those technologies at work to give us awesome block storage performance with flexibility and simple operations.</description>
      <content:encoded>&lt;p&gt;At MayaData we like new tech. Tech that makes our databases perform better. Tech like &lt;a href="https://www.kernel.org/doc/Documentation/trace/ring-buffer-design.txt" target="_blank" rel="noopener noreferrer"&gt;lockless ring buffers&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/NVM_Express" target="_blank" rel="noopener noreferrer"&gt;NVMe-oF&lt;/a&gt;, and &lt;a href="https://kubernetes.io/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. In this blog post we’re going to see those technologies at work to give us awesome block storage performance with flexibility and simple operations.&lt;/p&gt;
&lt;h2 id="mayastor--spdk--nvme--fast-databases"&gt;Mayastor + SPDK + NVMe = fast databases&lt;/h2&gt;
&lt;p&gt;Mayastor is new tech, it’s fast, and it’s based on &lt;a href="https://spdk.io/" target="_blank" rel="noopener noreferrer"&gt;SPDK&lt;/a&gt;. Why is SPDK exciting? It’s a new generation of storage software, designed for super-high-speed, low-latency &lt;a href="https://en.wikipedia.org/wiki/NVM_Express" target="_blank" rel="noopener noreferrer"&gt;NVMe&lt;/a&gt; devices. I’ll save you the scrolling and just tell you I believe Mayastor was able to max out the practical throughput of the NVMe device I used for my benchmark, allowing for multiple high-performance (20kqps+) database instances on a single node. Perfect for a database farm in Kubernetes.&lt;/p&gt;
&lt;h2 id="why-test-with-a-relational-db"&gt;Why Test With a Relational DB?&lt;/h2&gt;
&lt;p&gt;Open source relational databases are a staple component for app developers. People use them all the time for all kinds of software projects. It’s easy to build relationships between different groups of data, the syntax is well known, and they’ve been around for as long as modern computing. When a dev wants a relational database to hack on, odds are good that it’s going to be &lt;a href="https://www.postgresql.org/" target="_blank" rel="noopener noreferrer"&gt;Postgres&lt;/a&gt; or &lt;a href="https://www.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt;. They’re free. They’re open source. They’ve both been quite stable for a long time, and they both run in Kubernetes just great. The good folks at Percona make containerized, production-ready versions of these databases, and we’re going to use their &lt;a href="https://www.percona.com/software/mysql-database" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MySQL&lt;/a&gt; for the following tests.&lt;/p&gt;
&lt;h2 id="kubernetes-and-the-learning-curve"&gt;Kubernetes and the Learning Curve&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/10/image1.png" alt="Mayastor 1" /&gt;&lt;/figure&gt;
So what is the difficulty with running relational databases, or databases in general, inside of Kubernetes? Given all the features Kubernetes offers for managing highly available application deployments – automation with control, a common declarative configuration interface, and built-in observability – one would think databases are the perfect application to deploy on it. The main difficulty is storage. Until now.&lt;/p&gt;
&lt;h2 id="dbs-are-often-io-bound"&gt;DBs are Often IO Bound&lt;/h2&gt;
&lt;p&gt;The trick is, databases are notoriously disk-intensive and latency-sensitive. This matters for your Kubernetes deployments because storage support in stock, untuned K8s clusters is rudimentary at best. That has given rise to a number of projects out to provide storage for K8s, including, of course, the popular OpenEBS project.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;apps/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Deployment&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;nodeSelector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;db&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;securityContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;fsGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1001&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;limits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;cpu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"20"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;8Gi&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"--ignore-db-dir"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;"lost+found"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;foobarbaz&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;3306&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumeMounts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;mountPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;/var/lib/mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;percona-mysql&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;claimName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;vol2&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this post I’m going to investigate the newest of the storage engines that comprise the data plane for OpenEBS. As a challenge, I’d like to be able to achieve 20,000 queries per second out of a MySQL database using this storage engine for block storage underneath.&lt;/p&gt;
&lt;p&gt;Now, getting to 20kqps could be easy with the right dataset. But I want to achieve this with data that’s significantly larger than available RAM. In that scenario, 20kqps is pretty fast (as you can see below by the disk traffic and cpu load it generates).&lt;/p&gt;
&lt;p&gt;There are a number of great options available for deploying MySQL in Kubernetes, but for this test we really just want a good, high-performance database to start with. I won’t need fancy DBaaS functionality, an operator to take care of backups, or anything of the sort. We’ll start from scratch with Percona’s MySQL container and build a little deployment manifest for it. Now, maybe you’re thinking: “don’t you mean a StatefulSet?” But no, we’re going to use a Deployment for this. Simple and easy to configure alongside Container Attached Storage.&lt;/p&gt;
&lt;p&gt;The deployment pictured references an external volume, vol2. Now we could create a PV for this on the local system, but then if our MySQL instance gets scheduled on a different machine, the storage won’t be present.  &lt;/p&gt;
&lt;h2 id="enter-mayastor"&gt;Enter Mayastor&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;span class="code-block__lang"&gt;yaml&lt;/span&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nn"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;PersistentVolumeClaim&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;vol2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;&lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;storageClassName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;mayastor-nvmf-fast&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;accessModes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;ReadWriteOnce&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;20Gi&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Mayastor is the latest storage engine for OpenEBS and MayaData’s Kubera offering. Mayastor represents the state of the art in feature-rich storage for Linux systems. Mayastor creates virtual volumes that are backed by fast NVMe disks, and exports those volumes over the super-fast NVMf protocol. It’s a fresh implementation of the Container Attached Storage model. By &lt;a href="https://www.cncf.io/blog/2018/04/19/container-attached-storage-a-primer/" target="_blank" rel="noopener noreferrer"&gt;CAS&lt;/a&gt;, I mean it’s purpose-built for the multi-tenant, distributed world of the cloud. CAS means each workload gets its own storage system, with knobs for tuning and everything. The beauty of the CAS architecture is that it decouples your apps from their storage: you can attach to a disk locally, or via NVMf or iSCSI.&lt;/p&gt;
&lt;p&gt;Mayastor is CAS, purpose-built to support cloud-native workloads at speed with very little overhead. At MayaData we wrote it in Rust; we worked with Intel to build on their Storage Performance Development Kit (SPDK); made it easy to use with Kubernetes and possible to use with anything; and open-sourced it because, well, it improves the state of the art of storage in k8s, and community always wins (eventually).&lt;/p&gt;
&lt;p&gt;If you’d like to set up Mayastor on a new or existing cluster, have a look at: &lt;a href="https://mayastor.gitbook.io/introduction/" target="_blank" rel="noopener noreferrer"&gt;https://mayastor.gitbook.io/introduction/&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="the-speed-hypothesis"&gt;The Speed Hypothesis&lt;/h2&gt;
&lt;p&gt;The first thing I want to do is get an idea of how many queries per second (QPS) at which the DB maxes out. My suspicion at the outset is that the limiter for QPS is typically storage latency. We can deploy our Mayastor pool and storage class manifests in a small test cluster just to make sure they’re working as expected, and then tune our test to drive the DB as hard as we can. Performance characteristics of databases are very much tied to the specifics of the workload and table structure. So the first challenge here is to sort out what kind of workload is going to exercise the disk effectively.&lt;/p&gt;
&lt;p&gt;Sysbench is a great tool for exercising various aspects of Linux systems, and it includes some database tests we can use to get some baselines. &lt;a href="https://github.com/akopytov/sysbench" target="_blank" rel="noopener noreferrer"&gt;https://github.com/akopytov/sysbench&lt;/a&gt; is where you can find it. We can put it in a container and point that mysql OLTP test right at our database service.&lt;/p&gt;
&lt;p&gt;After a little experimentation with sysbench options to set different values for the table size, number of tables, and so on, I arrived at very stable results on a small cluster in AWS using m5ad.xlarge nodes. I settled on 10 threads and 10 tables, with 10M rows in each table. With no additional tuning on MySQL, sysbench settles into about 4300 queries per second with an average latency of 46ms. Pretty good for a small cloud setup.&lt;/p&gt;
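&lt;p&gt;For reference, a sysbench run with those settings looks roughly like the sketch below. The host, user, password, and database name are placeholders, not values from this test:&lt;/p&gt;

```shell
# Load 10 tables of 10M rows each, then run the OLTP read/write test
# with 10 threads. Host, credentials, and db name are placeholders.
sysbench oltp_read_write \
  --mysql-host=percona-mysql --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=10 --table-size=10000000 --threads=10 \
  prepare

sysbench oltp_read_write \
  --mysql-host=percona-mysql --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=10 --table-size=10000000 --threads=10 \
  --time=300 --report-interval=10 \
  run
```

&lt;p&gt;The &lt;code&gt;prepare&lt;/code&gt; step populates the tables; &lt;code&gt;run&lt;/code&gt; drives the mixed read/write workload and reports QPS and latency at each interval.&lt;/p&gt;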
&lt;p&gt;With that as a baseline, let’s see how much we can get out of it on a larger system. Intel makes high core-count CPUs and very fast Optane NVMe devices, and they’ve generously allowed us to use their benchmarking labs for a little while for some database testing. Without going into too much hardware geekery, we have three 96-core boxes running at 2.2GHz, with more RAM than I need and 100Gb networking to string them together. Each box has a small Optane NVMe device, and this single little drive is capable of at least 400k IOPS and 1.7GB/s through an ext4 filesystem. That’s fast. The published specs for this device are a little bit higher (about 500k IOPS and 2GB/s), but we’ll take this to be peak perf for our purposes.&lt;/p&gt;
&lt;h2 id="results-of-the-first-test"&gt;Results of the First Test&lt;/h2&gt;
&lt;p&gt;For the first test, just to characterize the setup, I threw 80 or so cores at the database, and ran sysbench against it with a whole lot of threads. Like 300.&lt;/p&gt;
&lt;p&gt;I started with a smaller table size just to save a little time on the load phase. It took a few iterations to get the test to run - mostly adjustments to &lt;code&gt;max_connections&lt;/code&gt;. The smaller table size means the data might fit entirely in memory, but it lets us validate the test framework quickly. Sure enough, running our OLTP test gets us close to 100k queries per second. But there’s no real disk activity; we need more data in order to exercise the underlying disks.&lt;/p&gt;
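&lt;p&gt;The &lt;code&gt;max_connections&lt;/code&gt; adjustment can be made at runtime or in the server configuration; a minimal sketch (the value 1000 is illustrative, not the value used in this test):&lt;/p&gt;

```shell
# Raise the connection limit on the running server (lost on restart):
mysql -uroot -p -e "SET GLOBAL max_connections = 1000;"

# To make it persistent, add this under the [mysqld] section of my.cnf:
#   max_connections = 1000
```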
&lt;p&gt;I cranked up the table size to 20,000,000 rows per table, tuned Mayastor to use three of the cores on each box, and started tuning the test to get the maximum queries per second out of it. Three tables seem to be enough to overflow the 8GB of RAM we have allocated to the container. Now when I check the disk stats on the node, there’s plenty of storage traffic - still less than a gigabyte per second, though. The system settles down into a comfortably speedy 30kqps or thereabouts, with disk throughput right around 700MB/s and latency right around 50ms per query. Curiously, the database is using only about 8 cores; clearly we don’t need to allocate all 80.&lt;/p&gt;
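&lt;p&gt;Checking the disk stats on the node can be done with iostat from the sysstat package, which reports per-device throughput and IOPS:&lt;/p&gt;

```shell
# Extended stats in megabytes, refreshed every 2 seconds.
# Watch r/s and w/s (IOPS) and rMB/s / wMB/s (throughput)
# for the NVMe device backing the Mayastor pool.
iostat -xm 2
```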
&lt;p&gt;We’ve already seen well over 700MB/s out of this storage in our synthetic tests, so the database run is still pretty far off the peak measured performance of 1.7GB/s.&lt;/p&gt;
&lt;h2 id="i-wonder-if-we-can-get-another-mysql-on-here"&gt;I wonder if we can get another MySQL on here…&lt;/h2&gt;
&lt;p&gt;Sure enough, this system is fast enough to host two high-performance relational database instances on the same NVMe drive, with CPU to spare. If only I had another one of those NVMe drives in this box…&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/10/image3_hu_87e0210307088601.png 480w, https://percona.community/blog/2020/10/image3_hu_8de5514575f4b675.png 768w, https://percona.community/blog/2020/10/image3_hu_4fca178fb10e261.png 1400w"
src="https://percona.community/blog/2020/10/image3.png" alt="A screenshot showing Mayastor in action" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;That’s about 1.1GB/s, with 52k IOPS. Not bad. We might even be able to fit a third instance in if we’re willing to sacrifice a little bit of speed across all the instances.&lt;/p&gt;
&lt;p&gt;There’s more work to be done to characterize database workloads like this one. There’s also an opportunity to investigate why the database scales up to 20-30k IOPS but leaves some storage and system resources available.&lt;/p&gt;
&lt;p&gt;Perhaps most importantly, Mayastor provides a complete abstraction for Kubernetes volumes, and allows for replicating to multiple nodes, snapshotting volumes, encrypting traffic, and generally everything you’ve come to expect from enterprise storage. Mayastor is showing the promise of LocalPV-like performance - at least maxing out the capabilities of our DB as configured - while also providing ease of use and the ability to add resilience.&lt;/p&gt;
&lt;p&gt;Lastly, if you are interested in Percona and OpenEBS, there are a lot of blogs from the OpenEBS community, including a recent one by the CTO of Percona on the use of OpenEBS LocalPV as their preferred local storage solution: &lt;a href="https://www.percona.com/blog/2020/10/01/deploying-percona-kubernetes-operators-with-openebs-local-storage/" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/blog/2020/10/01/deploying-percona-kubernetes-operators-with-openebs-local-storage/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://forums.percona.com/categories/percona-distribution-for-mysql" target="_blank" rel="noopener noreferrer"&gt;Percona Community Forum&lt;/a&gt;, &lt;a href="https://openebs.io/community/" target="_blank" rel="noopener noreferrer"&gt;OpenEBS&lt;/a&gt;, and &lt;a href="https://dok.community/" target="_blank" rel="noopener noreferrer"&gt;Data on Kubernetes&lt;/a&gt;communities are increasingly overlapping and I hope and expect this write up will result in yet more collaboration. Come check out Check out &lt;a href="https://mayastor.gitbook.io/introduction/" target="_blank" rel="noopener noreferrer"&gt;Mayastor&lt;/a&gt; on your own and let us know how Mayastor works for your use case in the comments below!&lt;/p&gt;
&lt;p&gt;Brian Matheson has spent twenty years doing things like supporting developers, tuning networks, and writing tools. A serial entrepreneur with an intense customer focus, Brian has helped a number of startups in a technical capacity. You can read more of Brian’s blog posts at &lt;a href="https://blog.mayadata.io/author/brian-matheson" target="_blank" rel="noopener noreferrer"&gt;https://blog.mayadata.io/author/brian-matheson&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Brian Matheson</author>
      <category>Kubernetes</category>
      <category>MayaData</category>
      <category>Mayastor</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/image1_hu_73722df26ba683eb.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/image1_hu_ef94108f054a471a.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL 8.0 Document Store, Discovery of a New World – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/19/mysql-8-0-document-store-discovery-of-a-new-world-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/19/mysql-8-0-document-store-discovery-of-a-new-world-percona-live-online-talk-preview/</guid>
      <pubDate>Mon, 19 Oct 2020 14:01:12 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 5:00 a.m. • London 10:00 a.m. • New Delhi 2:30 p.m. • Singapore 5:00 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Wed 21 Oct • New York 5:00 a.m. • London 10:00 a.m. • New Delhi 2:30 p.m. • Singapore 5:00 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;MySQL Document Store enables us to work with SQL relational tables and schema-less JSON collections. So instead of having a mixed bag of databases, you can just rely on MySQL, where the JSON documents can be stored in collections and managed with CRUD operations. All you need to do is install the X plugin. In this session, you will learn what a document store is, how to install and use it, and all the reasons for considering it. We will also see several specific features helping developers and illustrate how the usual MySQL DBA can manage this new world.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;This talk is very exciting because it focuses on new capabilities that are available only in MySQL and that many people are not aware of. Every time I talk about this topic, the audience is really surprised and enthusiastic about MySQL Document Store. It’s not common to have a JSON document store with all the capabilities of MySQL, fully transactional but at the same time using CRUD operations, where you can mix your relational data and your schemaless documents in the same query.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;This particular talk is more focused on developers, but I tried to also include content for DBAs. However, it’s not a talk oriented toward operations like the ones I usually give during Percona Live shows.&lt;/p&gt;
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What other talks are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I’m looking forward to hearing again from René about ProxySQL, a project that I really appreciate. I’m also curious to see which recommendations Øystein will provide for MySQL analytics queries.&lt;/p&gt;
&lt;h3 id="is-there-any-other-question-you-would-like-to-answer"&gt;Is there any other question you would like to answer?&lt;/h3&gt;
&lt;p&gt;I will also have the honor of presenting the State of the Dolphin during the show - don’t miss it if you want to learn about MySQL 8.0 and our community. Of course, I won’t deliver a full list of features, as it would take almost the full conference time ;)&lt;/p&gt;
      <author>Frédéric Descamps</author>
      <category>frederic.descamps</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>The State of ProxySQL, 2020 Edition – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/16/the-state-of-proxysql-2020-edition-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/16/the-state-of-proxysql-2020-edition-percona-live-online-talk-preview/</guid>
      <pubDate>Fri, 16 Oct 2020 20:08:01 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 7:30 a.m. • London 12:30 p.m. • New Delhi 5:00 p.m. • Singapore 7:30 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Wed 21 Oct • New York 7:30 a.m. • London 12:30 p.m. • New Delhi 5:00 p.m. • Singapore 7:30 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;ProxySQL is a high-performance, highly available, protocol-aware proxy for MySQL. 2.0 has been GA for some time now, and a lot of changes have come in point releases as well, so you can benefit from them. Listen to René, the founder of ProxySQL, take you through some of the new features in 2.0, and how you can effectively utilize them. Some topics that are covered include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;LDAP authentication&lt;/li&gt;
&lt;li&gt;SSL for client connections&lt;/li&gt;
&lt;li&gt;AWS Aurora usage&lt;/li&gt;
&lt;li&gt;Native clustering support for Percona XtraDB Cluster (PXC) / Galera Cluster / group replication&lt;/li&gt;
&lt;li&gt;Kubernetes deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is the talk to take you from intermediate ProxySQL user to expert in the 2.0 feature set. There will also be talk about the roadmap for what is coming next.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;This talk is exciting because it will bring to the community all the latest features in ProxySQL, and short term plans for even more exciting features.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Devs, DBAs, sysadmins: all users interested in optimizing and managing traffic against MySQL backends can benefit from this talk, no matter their level of experience.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;All the talks on the Percona Live agenda are exciting, but the two talks that rank very high in my list are “&lt;a href="https://sched.co/eouq" target="_blank" rel="noopener noreferrer"&gt;Why Public Database as a Service is Prime for Open Source Disruption&lt;/a&gt;” by Peter Zaitsev, and “&lt;a href="https://sched.co/ePpr" target="_blank" rel="noopener noreferrer"&gt;Sharding: DIY or Out of the Box Solution?&lt;/a&gt;” by Art van Scheppingen.&lt;/p&gt;</content:encoded>
      <author>René Cannaò</author>
      <category>rene.cannao</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Engineering Data Reliably Using SLO Theory – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/15/engineering-data-reliably-using-slo-theory-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/15/engineering-data-reliably-using-slo-theory-percona-live-online-talk-preview/</guid>
      <pubDate>Thu, 15 Oct 2020 02:43:47 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 20 Oct • New York 12:30 p.m. • London 5:30 p.m. • New Delhi 10:00 p.m. • Singapore 12:30 a.m. (next day)</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Tue 20 Oct • New York 12:30 p.m. • London 5:30 p.m. • New Delhi 10:00 p.m. • Singapore 12:30 a.m. (next day)&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Not so long ago, operations specialists worked much like today’s data engineers do: with specialized skills, they were the people who kept sites running, who responded to emergencies, and who—unfortunately—spent much of their time dealing with incidents and other “fires.” When the DevOps revolution came, this began to change. Better tools, better practices, and better culture shaped how Ops folks worked. A subset of that DevOps culture soon emerged: Site Reliability Engineers. These were people whose focus was not just on the day-to-day deployment of applications, but running platforms, products, and services with very high performance, very large scale, and with very high demand for reliability. Data Engineering was left out of this revolution.&lt;/p&gt;
&lt;p&gt;But it is not too late! By taking concepts from SRE culture, in particular, the theory of Service Level Objectives, we look at how teams operating and developing data platforms and data products can be built more reliably through the use of quantitative measures and product thinking. This talk will discuss concrete examples of the benefits of this approach for data teams and how organizations can benefit from this mindset.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;I see a lot of the same pains over and over in the data engineering world: data engineers spending too much time firefighting or dealing with ad hoc requests to innovate, data scientists pained by long lead times for pipeline engineering, and poor data quality eroding trust and leading organizations to make “gut” decisions instead of data-driven ones. This doesn’t have to be our world. The reality that many data engineers face today is similar to the one ops folks faced years ago, before Site Reliability Engineering (SRE) practices began to solidify. However, most of the learnings in the SRE space, particularly Service Level Objective (SLO) theory, don’t translate directly to the data space unless we adapt them to our unique reality. But if we can build solid, data-driven best practices, we can achieve so much—less firefighting, more creation; less guesswork, more trust.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Certainly, data engineers will benefit. But also managers, executives, and product owners will all benefit from learning how we can deliberately craft data engineering practices to optimize for reliability. Data should be a business driver, but too often I see it as a cost center. We need to change that calculus.&lt;/p&gt;
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What other talks are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I’m excited to see &lt;a href="https://sched.co/eouw" target="_blank" rel="noopener noreferrer"&gt;Karen Ambrose’s talk&lt;/a&gt;. I think that building technology to address rapidly-evolving crises is an enormous challenge, and frankly, I think that maybe a lot of organizations have been too complacent and risk averse to manage rapid pivots. I’m really curious to hear the story about how people came together to change the status quo in an effort to literally save the world.&lt;/p&gt;
&lt;h3 id="is-there-any-other-question-you-would-like-to-answer"&gt;Is there any other question you would like to answer?&lt;/h3&gt;
&lt;p&gt;There’s a Millennium Prize Problem or two still unsolved, and I’d love to answer one of those.&lt;/p&gt;
      <author>Emily Gorcenski</author>
      <category>emily.gorcenski</category>
      <category>DevOps</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>DBdeployer, the Community Edition – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/14/dbdeployer-the-community-edition-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/14/dbdeployer-the-community-edition-percona-live-online-talk-preview/</guid>
      <pubDate>Wed, 14 Oct 2020 18:44:09 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 3:30 a.m. • London 8:30 a.m. • New Delhi 1:00 p.m. • Singapore 3:30 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Wed 21 Oct • New York 3:30 a.m. • London 8:30 a.m. • New Delhi 1:00 p.m. • Singapore 3:30 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://github.com/datacharmer/dbdeployer" target="_blank" rel="noopener noreferrer"&gt;DBdeployer&lt;/a&gt;, an open source tool that allows easy deployment of many MySQL/Percona servers in the same host, has passed two years of development. Its latest additions have aimed at improving ease of use for both beginners and experts. This talk will show how to start with dbdeployer with an empty box, and quickly populate it with recent and less recent server versions, all at the command line.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;This talk is a celebration of collaboration in the community. I will present recent features that were requested, or suggested, by the community. I will also show how those suggestions came to fruition, to encourage more of the same from attendees.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Any current or future user of dbdeployer. They will see the process of refining the usability of the tool through interaction with the community.&lt;/p&gt;
&lt;h3 id="is-there-any-other-question-you-would-like-to-answer"&gt;Is there any other question you would like to answer?&lt;/h3&gt;
&lt;p&gt;There is a recurring question that I get from people who are about to use dbdeployer but haven’t gotten to know it well: “is it cloud friendly?” It pains me to answer that it isn’t, not because of a deficiency of the tool, but because it was designed to stay out of the cloud. Using dbdeployer, the cloud is your laptop, or the tiny Linux server in your workroom. The main purpose of dbdeployer is to enable developers, support engineers, QA engineers, and database administrators to have on-demand deployments of MySQL at all times, even when there is no connection to the cloud or when you want something faster than the cloud.&lt;/p&gt;
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What other talks are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;Most of the talks and keynotes are promising. I look forward in particular to watching “&lt;a href="https://sched.co/ePp6" target="_blank" rel="noopener noreferrer"&gt;Vitess Online Schema Migration Automation&lt;/a&gt;” by Shlomi Noach and “&lt;a href="https://sched.co/ePpc" target="_blank" rel="noopener noreferrer"&gt;MySQL 8.0 Document Store - Discovery of a New World&lt;/a&gt;” by Frédéric Descamps.&lt;/p&gt;
      <author>Giuseppe Maxia</author>
      <category>Events</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Sharding: DIY or Out of the Box Solution? – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/13/sharding-diy-or-out-of-the-box-solution-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/13/sharding-diy-or-out-of-the-box-solution-percona-live-online-talk-preview/</guid>
      <pubDate>Tue, 13 Oct 2020 03:04:37 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 7:00 a.m. • London 12:00 noon • New Delhi 4:30 p.m. • Singapore 7:00 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Wed 21 Oct • New York 7:00 a.m. • London 12:00 noon • New Delhi 4:30 p.m. • Singapore 7:00 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;I’m not sure if my talk is exciting, but I’m quite positive the subject is! Vitess has been gaining a lot of traction over the past few years, and I must admit that I’ve been keen to get hands-on experience with it for years. As we at MessageBird encountered rapid growth, standard (read) scaling was no longer applicable and we needed a solution. In late 2019 we implemented our (quick) DIY sharding solution based upon existing components. A few months later we encountered the next scaling issue, and we found our home-built solution wasn’t suitable in this case. That’s when we considered investing our time instead in a Vitess proof of concept (community edition), and this talk will compare the two paths chosen and show some of the choices and compromises you have to make.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;People who need to shard their (write) workloads and are considering using Vitess for this purpose. Our intention is to do a fair comparison between the two to help others make a well-informed decision.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;The agenda is full of presentations I’m looking forward to. However, as my current world is dominated by productionalising Vitess, I am particularly looking forward to &lt;a href="https://perconaliveonline2020.sched.com/event/ePp6/vitess-online-schema-migration-automation" target="_blank" rel="noopener noreferrer"&gt;Shlomi Noach’s talk&lt;/a&gt; about online schema migration automation in Vitess.&lt;/p&gt;
      <author>Art van Scheppingen</author>
      <category>art.vanscheppingen</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>What If We Could Use Machine Learning Models as Tables – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/12/what-if-we-could-use-machine-learning-models-as-tables-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/12/what-if-we-could-use-machine-learning-models-as-tables-percona-live-online-talk-preview/</guid>
      <pubDate>Mon, 12 Oct 2020 17:56:51 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 20 Oct • New York 1:30 p.m. • London 6:30 p.m. • New Delhi 11:00 p.m. • Singapore 1:30 a.m. (next day)</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Tue 20 Oct • New York 1:30 p.m. • London 6:30 p.m. • New Delhi 11:00 p.m. • Singapore 1:30 a.m. (next day)&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;In most machine learning tasks, one has to first organize data in some form and then turn it into information about the problem that needs to be solved. One could say that the requirement to train many machine learning algorithms is information, not just data. Given that most of the world’s structured and semi-structured data (information) lives in databases, it makes sense to bring ML capabilities straight to the databases themselves. In this talk we want to present to the Percona community what we have learned in the effort of enabling existing databases like MariaDB and Postgres with frictionless ML powers.&lt;/p&gt;
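As a rough illustration of the idea in the title (hedged MindsDB-style syntax; the model, table, and column names here are hypothetical, not taken from the talk), a trained model can be exposed as a virtual table that you query with plain SQL:

```sql
-- Hypothetical example: 'rentals_model' is a trained predictor exposed as a table.
-- Making a prediction becomes an ordinary SELECT, with the model's input
-- features supplied in the WHERE clause and the predicted column in the SELECT list.
SELECT rental_price
FROM   mindsdb.rentals_model
WHERE  sqft = 900
  AND  neighborhood = 'downtown';
```

The appeal is that no new client, driver, or skill set is needed: anyone who can write a SELECT can ask the model for a prediction.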
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;ML straight in databases is exciting because it enables the hundreds of thousands of people who already know SQL to solve problems using machine learning without any extra skills.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Anyone who knows how to query a SQL database.&lt;/p&gt;
&lt;h3 id="is-there-any-other-question-you-would-like-to-answer"&gt;Is there any other question you would like to answer?&lt;/h3&gt;
&lt;p&gt;What databases can we do machine learning in now, and which ones are coming?&lt;/p&gt;
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What other talks are you most looking forward to?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://perconaliveonline2020.sched.com/#" target="_blank" rel="noopener noreferrer"&gt;The Cloud is Inevitable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://perconaliveonline2020.sched.com/#" target="_blank" rel="noopener noreferrer"&gt;Serverless Databases: The Good, the Bad, and the Ugly &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://perconaliveonline2020.sched.com/#" target="_blank" rel="noopener noreferrer"&gt;The State of MongoDB, Its Open Source Community, and Where Percona Is Going With It&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Jorge Torres</author>
      <category>jorge.torres</category>
      <category>MariaDB</category>
      <category>PLO-2020-10</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Vitess Online Schema Migration Automation – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/09/vitess-online-schema-migration-automation-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/09/vitess-online-schema-migration-automation-percona-live-online-talk-preview/</guid>
      <pubDate>Fri, 09 Oct 2020 17:28:19 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 2:30 a.m. • London 7:30 a.m. • New Delhi 12:00 noon • Singapore 2:30 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Wed 21 Oct • New York 2:30 a.m. • London 7:30 a.m. • New Delhi 12:00 noon • Singapore 2:30 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;For many, running an online schema migration operation is still a manual job: from building the correct command, through identifying where the migration should run and which servers are to be affected, to auditing progress and completing the migration. A sharded environment poses an additional burden, as any logical migration must be applied multiple times, once for each shard.&lt;/p&gt;
&lt;p&gt;What if you could just issue an ALTER TABLE … statement, and have all that complexity automated away? Vitess, an open source sharding framework for MySQL, is in a unique position to do just that. This session shows how Vitess’s proxy/agent/topology architecture, together with gh-ost, are used to hide schema change complexity, and carefully schedule and apply schema migrations.&lt;/p&gt;
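As a sketch of what that automation looks like from the client side (syntax is hedged and version-dependent; it follows Vitess’s experimental online DDL feature, so check the documentation for your release):

```sql
-- Pick an online DDL strategy for the session (gh-ost here, as in the talk).
SET @@ddl_strategy = 'gh-ost';

-- Issue a normal ALTER; Vitess schedules and applies it on every shard,
-- tracking progress instead of requiring a manual gh-ost invocation per shard.
ALTER TABLE customers ADD COLUMN status VARCHAR(16) NOT NULL DEFAULT '';

-- Audit running and completed migrations (illustrative command).
SHOW VITESS_MIGRATIONS;
```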
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;My work unifies multiple open source solutions (gh-ost, freno, and others) in a single, managed place. Vitess becomes an infrastructure solution which can automate away the complexities of schema migrations: running, tracking, handling errors, cleaning up. It offers a completely automated cycle for most users, yet still gives them control.&lt;/p&gt;
&lt;p&gt;Whether with gh-ost or pt-online-schema-change, Vitess is able to abstract away the migration process such that the user can normally just run and forget. Having worked as an operational engineer, and having developed schema migration automation in past roles, I’m excited to think about the users who will save hours of manual labor a week with this new offering.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Operational DBAs and engineers who perform manual schema migrations, or are looking to automate their database infrastructure.&lt;/p&gt;
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What other talks are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I’m in particular curious to hear about what’s new in distributed databases and geo replication. Otherwise, as always, I’m keen to hear about open source tools in the MySQL ecosystem.&lt;/p&gt;
&lt;h3 id="is-there-any-other-question-you-would-like-to-answer"&gt;Is there any other question you would like to answer?&lt;/h3&gt;
&lt;p&gt;Q: Is this work public? A: Yes, it is. This work is expected to be released as an experimental feature as part of Vitess 8.0, end of October 2020. It is public, free and open source.&lt;/p&gt;</content:encoded>
      <author>Shlomi Noach</author>
      <category>shlomi.noach</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Analytical Queries in MySQL – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/09/analytical-queries-in-mysql-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/09/analytical-queries-in-mysql-percona-live-online-talk-preview/</guid>
      <pubDate>Fri, 09 Oct 2020 17:05:14 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 20 Oct • New York 6:00 p.m. • London 11:00 p.m. • New Delhi 3:30 a.m. • Singapore 6:00 a.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Tue 20 Oct • New York 6:00 p.m. • London 11:00 p.m. • New Delhi 3:30 a.m. • Singapore 6:00 a.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;MySQL’s sweet spot is known to be online transaction processing (OLTP), and it can support a very high load of short transactions. Many users will also want to run analytical queries (OLAP) on their MySQL data. Often they achieve this by exporting their data to another database system that is tailored for analytical queries. However, this introduces overhead and delay that can be avoided by running your analytical queries directly in your MySQL database.&lt;/p&gt;
&lt;p&gt;This presentation will discuss how you can tune your complex analytical queries to achieve better performance with MySQL. We will look at some of the queries from the well-known TPC-H/DBT-3 benchmark, and show how we can improve the performance of these queries through query rewrites, optimizer hints, and improved configuration settings.&lt;/p&gt;
&lt;p&gt;While this presentation will mainly focus on MySQL, we will also compare the performance of these queries with other database systems like MariaDB and PostgreSQL, discuss what causes the differences in performance between the systems, and consider how MySQL could be improved to better support these queries.&lt;/p&gt;
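To make the tuning levers mentioned above concrete, here is a small hedged sketch (not taken from the talk; the query is a hypothetical TPC-H-style example) combining a MySQL 8.0 optimizer hint with a session setting that often matters for large analytical joins:

```sql
-- Give joins more working memory for this analytical session.
SET SESSION join_buffer_size = 256 * 1024 * 1024;

-- /*+ ... */ optimizer hints are a MySQL 8.0 feature; JOIN_ORDER pins the
-- join order when the optimizer's cardinality estimates lead it astray.
SELECT /*+ JOIN_ORDER(o, l) */
       o_orderpriority, COUNT(*) AS order_count
FROM   orders AS o
JOIN   lineitem AS l ON l.l_orderkey = o.o_orderkey
WHERE  o_orderdate >= '1995-01-01'
GROUP  BY o_orderpriority;
```

Rewrites and hints like these are per-query and reversible, which makes them safer to experiment with than global configuration changes.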
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;This talk is exciting because we will show several ways you can improve the performance of complex queries in MySQL. We will also compare the performance of MySQL to other database systems, and discuss what MySQL could learn from those systems.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Developers who use MySQL will learn how to write more efficient queries, and DBAs will learn how to tune their systems for better performance of complex queries. People who are interested in the implementation aspects of database systems should find the discussion of what can be learned from other database systems interesting.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I look forward to the other presentations on analytical queries: “&lt;a href="https://sched.co/ePo2" target="_blank" rel="noopener noreferrer"&gt;SQL Row Store vs Data Warehouse: Which Is Right for Your Application?&lt;/a&gt;” by Robert Hodges, and “&lt;a href="https://sched.co/ePr2" target="_blank" rel="noopener noreferrer"&gt;Building Data Lake with MariaDB ColumnStore&lt;/a&gt;” by Sasha Vaniachine. (However, I will probably not get up at 5:30am to watch the latter live :-)).&lt;/p&gt;
&lt;p&gt;I also look forward to the presentations on “&lt;a href="https://sched.co/eN9q" target="_blank" rel="noopener noreferrer"&gt;How Can Databases Capitalize on Computational Storage?&lt;/a&gt;” by Tong Zhang and JB Baker, and “&lt;a href="https://sched.co/ePo7" target="_blank" rel="noopener noreferrer"&gt;How to Protect the SQL Engine From Running Out of Memory&lt;/a&gt;” by Huaiyu Xu and Song Gao.&lt;/p&gt;</content:encoded>
      <author>Øystein Grøvlen</author>
      <category>oystein.grovlen</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Serverless Databases: The Good, the Bad, and the Ugly – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/08/serverless-databases-the-good-the-bad-and-the-ugly-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/08/serverless-databases-the-good-the-bad-and-the-ugly-percona-live-online-talk-preview/</guid>
      <pubDate>Thu, 08 Oct 2020 20:58:38 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 4:30 a.m. • London 9:30 a.m. • New Delhi 2:00 p.m. • Singapore 4:30 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Wed 21 Oct • New York 4:30 a.m. • London 9:30 a.m. • New Delhi 2:00 p.m. • Singapore 4:30 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="abstract"&gt;Abstract&lt;/h2&gt;
&lt;p&gt;Starting with AWS, the major cloud providers offer different options to run a MySQL or a MySQL-compatible database on the cloud. A new approach is to rely on so-called serverless (relational) databases like Aurora Serverless that offer both traditional TCP connections and HTTP API access. Can serverless really be the future? Can data API really replace a MySQL connector? What are the major limitations of a serverless database cluster and do they really protect from inefficient use of database resources?&lt;/p&gt;
&lt;h2 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h2&gt;
&lt;p&gt;The database is the most challenging layer in which to optimize resources in the cloud and achieve elasticity. Serverless relational databases can help with that, but they also introduce new limitations and challenges, including cloud vendor lock-in. We will discuss the good, the bad, and the ugly of running a MySQL database serverless.&lt;/p&gt;
&lt;h2 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h2&gt;
&lt;p&gt;DevOps engineers and cloud architects, especially the lazy ones: those who would like to hide the complexity of managing a relational database in the cloud and optimise price-performance on their deployments with the click of a button.&lt;/p&gt;
&lt;h2 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h2&gt;
&lt;p&gt;Many exciting topics in the agenda, but I am really looking forward to “&lt;a href="https://sched.co/eN9q" target="_blank" rel="noopener noreferrer"&gt;How Can Databases Capitalize on Computational Storage?&lt;/a&gt;” and “&lt;a href="https://sched.co/ePnR" target="_blank" rel="noopener noreferrer"&gt;MySQL Ecosystem on ARM&lt;/a&gt;” among many others. For the keynotes, I am very interested in “&lt;a href="https://sched.co/eov2" target="_blank" rel="noopener noreferrer"&gt;The Cloud is Inevitable&lt;/a&gt;” and I am looking forward to Peter’s keynote as well (“&lt;a href="https://perconaliveonline2020.sched.com/#" target="_blank" rel="noopener noreferrer"&gt;Why Public Database as a Service is Prime for Open Source Disruption&lt;/a&gt;”).&lt;/p&gt;
      <author>Renato Losio</author>
      <category>renato-losio</category>
      <category>AWS</category>
      <category>Events</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL Ecosystem on ARM – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/07/mysql-ecosystem-on-arm-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/07/mysql-ecosystem-on-arm-percona-live-online-talk-preview/</guid>
      <pubDate>Wed, 07 Oct 2020 02:28:58 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 20 Oct • New York 8:00 p.m. • London 1:00 a.m. (next day) • New Delhi 5:30 a.m. • Singapore 8:00 a.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Tue 20 Oct • New York 8:00 p.m. • London 1:00 a.m. (next day) • New Delhi 5:30 a.m. • Singapore 8:00 a.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract?&lt;/h3&gt;
&lt;p&gt;The ARM ecosystem is quickly evolving as a cost-effective alternative for running High-Performance Computing (HPC) software. It continues to grow, with some major cloud players hosting ARM-based cloud servers. MySQL joined the ecosystem too, starting with 8.x, and MariaDB has already made its presence felt. But besides the mainline server, a lot of tools are yet to be ported to ARM.&lt;/p&gt;
&lt;p&gt;In this talk, we will explore which parts of the MySQL ecosystem already run on ARM and which are still work in progress, the optimizations being done for ARM, the challenges involved, whether it is safe to run MySQL (or its variants) on ARM, community and industry support, performance compared with x86_64, and more.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;MySQL recently added support for ARM (starting 8.x). ARM on the other hand is gaining popularity as a cost-effective solution for running High-Performance Computing Software with multiple cloud providers (Huawei, Amazon, Oracle cloud) providing ARM instances. The community is excited to learn how the MySQL ecosystem is evolving on ARM and what kind of advantage users could get by running it on ARM.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;The talk is mainly meant for end users, DBAs, and DevOps engineers: all those who need to decide how to deploy MySQL optimally while still ensuring maximum throughput. The talk will explore the pros and cons of running MySQL on ARM and its supporting ecosystem, which should give the audience a fair idea of whether it is time to consider that route.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;Percona Live, as always, has lined up a number of good new talks. Personally, I am interested in MySQL deployments that scale geographically. Users are not only moving to the cloud but also considering whether their setup could now be globalized through geo-distribution while keeping a tight check on cost, especially since the current pandemic has forced all businesses to re-examine their spending. Managing Database @ Scale, Best Practices in Design, and Implementing MySQL Geographic Distributed HA solutions are some of the talks on my shortlist to attend.&lt;/p&gt;
&lt;h3 id="is-there-any-other-question-you-would-like-to-answer"&gt;Is there any other question you would like to answer?&lt;/h3&gt;
&lt;p&gt;I think there is a plethora of options available for users in the DB ecosystem space, and the ecosystem is evolving at a pretty good pace. My only message to users is to keep all options open and be flexible, because you never know which option may work wonders for you. With an open-source ecosystem, trying and experimenting with new things is the key.&lt;/p&gt;
      <author>Krunal Bauskar</author>
      <category>krunal.bauskar</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>NoSQL Endgame – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/05/nosql-endgame-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/05/nosql-endgame-percona-live-online-talk-preview/</guid>
      <pubDate>Mon, 05 Oct 2020 01:08:04 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 20 Oct • New York 3:00 p.m. • London 8:00 p.m. • New Delhi 12:30 a.m. (next day) • Singapore 3:00 a.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online Agenda&lt;/a&gt; Slot: Tue 20 Oct • New York 3:00 p.m. • London 8:00 p.m. • New Delhi 12:30 a.m. (next day) • Singapore 3:00 a.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;The amount of data collected by applications nowadays is growing at a scary pace. Many of them need to handle billions of users generating and consuming data at an incredible speed. Maybe you are wondering how to create an application like this. What is required? What works best for your project?&lt;/p&gt;
&lt;p&gt;In this session we’ll compare popular Java and JVM persistence frameworks for NoSQL databases: Spring Data, Micronaut Data, Hibernate OGM, Jakarta NoSQL, and GORM. How do they compare, what are the strengths, weaknesses, differences, and similarities? We’ll show each of them with a selection of different NoSQL database systems (Key-Value, Document, Column, Graph).&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;The data load on applications has increased exponentially in recent years. We know the JVM (Java Virtual Machine) can cope with heavy loads very well, yet we often come across the big dilemma: there are tons of persistence frameworks out there, but which one performs best for my case? It would normally take ages to evaluate and choose the best fit for your use case. We’ve done those comparisons for you.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Those who make technical roadmap decisions such as software architects, engineering managers, and developers involved in new technology decisions.&lt;/p&gt;
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What other talks are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;The conference agenda is simply amazing. It’s difficult to choose which sessions to attend, but we’re pretty sure we’ll attend “&lt;a href="https://sched.co/ePlw" target="_blank" rel="noopener noreferrer"&gt;The 411 PMM&lt;/a&gt;” by Brandon Fleisher and Steve Hoffman, and also “&lt;a href="https://sched.co/eN9q" target="_blank" rel="noopener noreferrer"&gt;How Can Databases Capitalize on Computational Storage&lt;/a&gt;” by Tong Zhang and JB Baker.&lt;/p&gt;</content:encoded>
      <author>Thodoris Bais</author>
      <author>Werner Keil</author>
      <category>thodoris.bais</category>
      <category>werner.keil</category>
      <category>Entry Level</category>
      <category>Events</category>
      <category>Open Source Databases</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Kunlun Distributed DB Cluster Intro – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/02/kunlun-distributed-db-cluster-intro-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/02/kunlun-distributed-db-cluster-intro-percona-live-online-talk-preview/</guid>
      <pubDate>Fri, 02 Oct 2020 22:25:34 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 20 Oct • New York 9:30 p.m. • London 2:30 a.m. (next day) • New Delhi 7:00 a.m. • Singapore 9:30 a.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Tue 20 Oct • New York 9:30 p.m. • London 2:30 a.m. (next day) • New Delhi 7:00 a.m. • Singapore 9:30 a.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Kunlun Distributed Database Cluster is a distributed DBMS that aims to combine the best of both MySQL and PostgreSQL into a highly performant, highly available, highly scalable, and fault-tolerant database system that is easy to use and manage and requires minimal human maintenance. It lets users define table sharding rules so that tables are automatically distributed across available storage shards; it implements the two-phase commit protocol for distributed transaction commit; it uses MySQL group replication for high availability in storage shards; and it fixes a series of MySQL XA bugs to make distributed transactions highly reliable, among many other enhancements.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;The audience will get to know Kunlun, a brand-new distributed database cluster built from the two most popular open source database systems, MySQL and PostgreSQL, and learn how Kunlun can make developers’ and DBAs’ lives much easier. They will also learn why it is troublesome and error-prone to use MySQL group replication as is, and how Kunlun makes it easy by hiding all the complexity.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Application developers and DBAs who use MySQL and/or PostgreSQL clusters, especially those having to deal with multiple terabytes of relational data that no single database instance can manage.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;Those about alternative ways to deal with ever-growing, multi-terabyte volumes of relational data that far exceed the capacity of a single database instance.&lt;/p&gt;
      <author>David Zhao</author>
      <category>david.zhao</category>
      <category>Events</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>PLO-2020-10</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>MariaDB 10.5 New Features for Troubleshooting – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/10/01/mariadb-10-5-new-features-for-troubleshooting-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/10/01/mariadb-10-5-new-features-for-troubleshooting-percona-live-online-talk-preview/</guid>
      <pubDate>Thu, 01 Oct 2020 23:42:03 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Wed 21 Oct • New York 12:00 midnight • London 5:00 a.m. • New Delhi 9:30 a.m. • Singapore 12:00 noon</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Wed 21 Oct • New York 12:00 midnight • London 5:00 a.m. • New Delhi 9:30 a.m. • Singapore 12:00 noon&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;I want to help DBAs and Support engineers find out what’s really going on when some problem strikes. My goal is to show new ways to diagnose problems now available in MariaDB 10.5. See &lt;a href="https://perconaliveonline2020.sched.com/event/ePoK/mariadb-105-new-features-for-troubleshooting" target="_blank" rel="noopener noreferrer"&gt;the full abstract&lt;/a&gt; for more.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why Is Your Talk Exciting?&lt;/h3&gt;
&lt;p&gt;It provides a lot of detail and practical examples of how MariaDB 10.5’s new troubleshooting features may help DBAs and developers understand the production load and tune the MariaDB server for it. The process of documenting these new features is not complete yet, so you may not be able to easily find the information presented elsewhere.&lt;/p&gt;
&lt;h3 id="who-would-benefit-from-your-talk"&gt;Who Would Benefit From Your Talk?&lt;/h3&gt;
&lt;p&gt;DBAs and consultants who use or plan to use MariaDB server 10.5 in production.&lt;/p&gt;
&lt;h3 id="what-is-the-most-useful-new-feature-in-mariadb-105"&gt;What Is the Most Useful New Feature in MariaDB 10.5?&lt;/h3&gt;
&lt;p&gt;For me it’s memory instrumentation. There are alternative ways to find memory leaks or trace memory allocations in detail, but they either have a notable performance impact or are hard to implement in production. This feature can potentially bring DBAs many insights and help to resolve nasty problems. I’ve missed it for years.&lt;/p&gt;
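As an illustration of the kind of question this lets a DBA answer (assuming the memory instruments are enabled; the summary table below follows the standard Performance Schema layout that MariaDB 10.5 adopts):

```sql
-- Top instrumented memory consumers, globally aggregated, so you can see
-- which internal allocation sites currently hold the most bytes.
SELECT event_name,
       current_count_used,
       current_number_of_bytes_used
FROM   performance_schema.memory_summary_global_by_event_name
ORDER  BY current_number_of_bytes_used DESC
LIMIT  10;
```

Running a query like this periodically and watching which event names keep growing is a lightweight way to spot leaks without external tooling.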
&lt;h3 id="what-other-talks-are-you-most-looking-forward-to"&gt;What Other Talks Are You Most Looking Forward To?&lt;/h3&gt;
&lt;p&gt;For me these presentations look really interesting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://perconaliveonline2020.sched.com/event/ePnR/mysql-ecosystem-on-arm?iframe=yes&amp;w=100%25&amp;sidebar=no&amp;bg=no" target="_blank" rel="noopener noreferrer"&gt;MySQL Ecosystem on ARM By Krunal Bauskar &amp; Mike Grayson&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;I think ARM is the future for servers, and historically I have always been interested in MySQL implementations on non-x86 hardware.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://perconaliveonline2020.sched.com/event/ePp6/vitess-online-schema-migration-automation?iframe=yes&amp;w=100%25&amp;sidebar=no&amp;bg=no" target="_blank" rel="noopener noreferrer"&gt;Vitess Online Schema Migration Automation By Shlomi Noach &amp; Evgeniy Patlan&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Whatever Shlomi speaks about, it’s always interesting and useful!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See the full conference agenda &lt;a href="https://www.percona.com/live/agenda" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Valeriy Kravchuk</author>
      <category>valeriy.kravchuk</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>perconalive</category>
      <category>PLO-2020-10</category>
      <media:thumbnail url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_ee2e8da0448380e1.jpg"/>
      <media:content url="https://percona.community/blog/2020/10/DB-PLO-Blog-Image-2020-10-05_hu_e5cd6fdb7763e3c5.jpg" medium="image"/>
    </item>
    <item>
      <title>Google Summer of Code Refactor PMM Framework Project with Percona</title>
      <link>https://percona.community/blog/2020/09/07/google-summer-of-code-refactor-pmm-framework-project-with-percona/</link>
      <guid>https://percona.community/blog/2020/09/07/google-summer-of-code-refactor-pmm-framework-project-with-percona/</guid>
      <pubDate>Mon, 07 Sep 2020 11:08:19 UTC</pubDate>
      <description>I am Meet Patel, 2nd year undergraduate at DAIICT, Gandhinagar, India; pursuing a bachelor’s degree in Information and Communication Technology with a minor in Computational Science.</description>
      <content:encoded>&lt;p&gt;I am &lt;strong&gt;Meet Patel&lt;/strong&gt;, 2nd year undergraduate at DAIICT, Gandhinagar, India; pursuing a bachelor’s degree in Information and Communication Technology with a minor in Computational Science.&lt;/p&gt;
&lt;p&gt;I am proud to be selected for the &lt;strong&gt;Google Summer of Code&lt;/strong&gt; program under an open source organization as big and impactful as &lt;strong&gt;Percona&lt;/strong&gt;. As we head towards the end of this amazing program, I’ll try to share a general overview of what and how all of it has been implemented.&lt;/p&gt;
&lt;h2 id="about-the-project"&gt;About the project&lt;/h2&gt;
&lt;p&gt;PMM-Framework is a shell-based tool to quickly deploy Percona Monitoring and Management (PMM), add different database clients to it, and load test them, all fully automated. It can automatically download and install the specific version requested via tarball installers or Docker images. It incorporates tools like DB Deployer to deploy MySQL databases. Other databases supported by PMM-Framework include Percona Server, MongoDB, Percona Server for MongoDB, PostgreSQL, MariaDB, and PXC. It can also be used to wipe all the PMM configuration after tests are done.&lt;/p&gt;
&lt;p&gt;The main objective of the project was to fix bugs, refactor the framework, and make it more stable, robust, and useful. In the first half of the project timeline, I worked on these tasks and tested PMM using the PMM-Framework. Being a shell-based tool, PMM-Framework had a slightly steep learning curve for newcomers. So, at my mentors’ suggestion, I built a user-friendly CLI tool from scratch, PMM-Framework-CLI, that queries the user and executes PMM-Framework on the machine or inside a Vagrant box.&lt;/p&gt;
&lt;p&gt;You can check out the quick demo here: &lt;a href="https://youtu.be/qPXlTMrsBcU" target="_blank" rel="noopener noreferrer"&gt;https://youtu.be/qPXlTMrsBcU&lt;/a&gt;. You can check out my contributions to PMM-Framework in the &lt;a href="https://github.com/percona/pmm-qa/tree/GSOC-2020" target="_blank" rel="noopener noreferrer"&gt;GSoC Project Branch&lt;/a&gt;, and the source code of the PMM-Framework-CLI tool can be found &lt;a href="https://github.com/Percona-Lab/pmm-framework-cli" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The tool will soon be published to NPM so that everyone can quickly start using it from the NPM registry.&lt;/p&gt;
&lt;h2 id="challenges-faced"&gt;Challenges faced&lt;/h2&gt;
&lt;p&gt;Every project brings unforeseen challenges, and overcoming them teaches you a lot. The first challenge I faced was understanding how everything works in PMM; I went through every piece of documentation I could find to understand the PMM architecture. Working with shell scripts of this size, and debugging them, was also a challenge. Due to COVID-19, my university exam timelines were uncertain, and my mentors helped me manage that as well. I didn’t have much prior knowledge of many Linux, database, and networking concepts, and learning them has added to my skills.&lt;/p&gt;
&lt;h2 id="experiences"&gt;Experiences&lt;/h2&gt;
&lt;p&gt;It has also been a great learning experience. I got a really great opportunity to experiment and work hands-on with numerous tools and technologies in such a short timespan. To list some of them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Bash Scripting&lt;/li&gt;
&lt;li&gt;NodeJS (and publishing package to NPM)&lt;/li&gt;
&lt;li&gt;Linux, Networking, Databases&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;SSH&lt;/li&gt;
&lt;li&gt;Jenkins Pipelines&lt;/li&gt;
&lt;li&gt;Percona Monitoring and Management (of course!)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Apart from these, the biggest advantage common to any Google Summer of Code project is that you get to understand a huge codebase that you otherwise wouldn’t. You also get exposed to best coding practices, development workflows, issue management, and time management, to name a few. Separately, although not part of GSoC itself, I wrote an article related to this work about encryption in SSH/HTTPS that has been trending in the cybersecurity section of Medium. The article can be read here: &lt;a href="https://medium.com/code-dementia/demystifying-secure-in-ssh-tls-https-ad7473106c6a" target="_blank" rel="noopener noreferrer"&gt;https://medium.com/code-dementia/demystifying-secure-in-ssh-tls-https-ad7473106c6a&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Beyond the learning itself, the part I loved most was the exposure. The mentors have been extremely friendly and supportive about everything, and my regular interaction with them also improved my communication skills. This project has certainly been a great addition to my résumé; I’m happy to announce that it helped me secure a summer internship at Goldman Sachs for next summer! As a student, learning directly from people with 10x your experience not only teaches you well but also prepares you for how teamwork really happens.&lt;/p&gt;
&lt;p&gt;Overall, it has been an absolutely amazing experience working with Percona and a special thanks to the mentors &lt;strong&gt;Puneet Kala, Nailya Kutlubaeva, Vasyl Yurkovych&lt;/strong&gt; of Percona for guiding me throughout.&lt;/p&gt;</content:encoded>
      <author>Meet Patel</author>
      <category>Entry Level</category>
      <category>Google Summer of Code</category>
      <category>GSoC</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>PMM</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/09/Screenshot-2020-09-07-at-14.46.59_hu_ad54675b8ef67ddf.jpg"/>
      <media:content url="https://percona.community/blog/2020/09/Screenshot-2020-09-07-at-14.46.59_hu_ca47f506bcbe7cac.jpg" medium="image"/>
    </item>
    <item>
      <title>Two weeks to MariaDB Server Fest</title>
      <link>https://percona.community/blog/2020/09/04/two-weeks-to-mariadb-server-fest/</link>
      <guid>https://percona.community/blog/2020/09/04/two-weeks-to-mariadb-server-fest/</guid>
      <pubDate>Fri, 04 Sep 2020 09:34:53 UTC</pubDate>
      <description>There is still time to register for the MariaDB Server Fest 2020!</description>
      <content:encoded>&lt;p&gt;There is still time to &lt;a href="https://mariadb.org/fest-registration/" target="_blank" rel="noopener noreferrer"&gt;register&lt;/a&gt; for the MariaDB Server Fest 2020!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/09/mariadb_fest_video.jpeg" alt="MariaDB Fest 2020" /&gt;&lt;/figure&gt; MariaDB Fest 2020[/caption]&lt;/p&gt;
&lt;p&gt;Our Fest is the opportunity to have live interactions with the key players on MariaDB Server: the developers of MariaDB Server, the service providers, the experts, the system integrators, and – perhaps most importantly – your fellow users!&lt;/p&gt;
&lt;p&gt;Interactivity happens all the time, with the presenters being cloned and available for answering questions throughout the presentation. This is because the presentations (including voice, a talking head, and the slide decks) are pre-recorded, freeing up the presenter’s attention to be fully devoted to the audience. Multithreading!&lt;/p&gt;
&lt;p&gt;Sessions are listed in full on the &lt;a href="https://mariadb.org/fest2020-sessions" target="_blank" rel="noopener noreferrer"&gt;web&lt;/a&gt;, with the exact timing for the three virtual locations still being fine-tuned. Tune in to listen to 30 presenters from, for example, Supermetrics, MariaDB Corporation, Percona, Microsoft, Galera, Tencent, Bilibili, and the MariaDB Foundation.&lt;/p&gt;
&lt;p&gt;Timing is during your day-time, and spread out across three days, five hours a day, so you can still get most of your normal job done.&lt;/p&gt;
&lt;p&gt;On Monday-Wednesday 14-16 Sep 2020 we have the Paris conference, on Tuesday-Thursday 15-17 Sep 2020 we have the New York conference, and on Friday-Sunday 18-20 Sep 2020 the Beijing conference. Exact agendas vary slightly between the locations, to cater to the sleeping patterns of the presenters from other time zones.&lt;/p&gt;
&lt;p&gt;Talk to you in less than two weeks!&lt;/p&gt;
&lt;p&gt;Links:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Registration: &lt;a href="https://mariadb.org/fest-registration/" target="_blank" rel="noopener noreferrer"&gt;https://mariadb.org/fest-registration/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Session list: &lt;a href="https://mariadb.org/fest2020-sessions/" target="_blank" rel="noopener noreferrer"&gt;https://mariadb.org/fest2020-sessions/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Kaj Arnö</author>
      <category>kaj.arno</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <media:thumbnail url="https://percona.community/blog/2020/09/mariadb_fest_video_hu_c1795516dda166aa.jpeg"/>
      <media:content url="https://percona.community/blog/2020/09/mariadb_fest_video_hu_49e15c79741cdaae.jpeg" medium="image"/>
    </item>
    <item>
      <title>IoT Performance Bottlenecks &amp; Open Source Databases</title>
      <link>https://percona.community/blog/2020/09/03/iot-performance-bottlenecks-and-open-source-databases/</link>
      <guid>https://percona.community/blog/2020/09/03/iot-performance-bottlenecks-and-open-source-databases/</guid>
      <pubDate>Thu, 03 Sep 2020 17:35:59 UTC</pubDate>
      <description>The Internet of Things (IoT), in essence, is all about everyday devices that are readable, recognizable, trackable, and/or controllable via the Internet, regardless of the communication means — RFID, wireless LAN, and so on. The total installed base of IoT connected devices is projected to amount to 21.5 billion units worldwide by 2025. Thanks to IoT, the proliferation of data can be quite daunting. Hence, businesses should effectively organize and work with this enormous amount of valuable data.</description>
      <content:encoded>&lt;p&gt;The Internet of Things (IoT), in essence, is all about everyday devices that are readable, recognizable, trackable, and/or controllable via the Internet, regardless of the communication means — RFID, wireless LAN, and so on. The total installed base of IoT connected devices is projected to amount to &lt;a href="https://www.statista.com/statistics/1101442/iot-number-of-connected-devices-worldwide/" target="_blank" rel="noopener noreferrer"&gt;21.5 billion units worldwide by 2025&lt;/a&gt;. Thanks to IoT, the proliferation of data can be quite daunting. Hence, businesses should effectively organize and work with this enormous amount of valuable data.&lt;/p&gt;
&lt;p&gt;Databases play a pivotal role in enabling enterprises to make the most of IoT by facilitating proper organization, storage, and manipulation of data. IoT applications typically make use of both relational and non-relational (aka NoSQL) types of databases. While the selection of the type of database is made based on the type of application, in most cases, a mix of both types is used.  However, picking the most efficient database for a particular IoT application can be tricky. There are so many parameters to consider, such as scalability, availability, data handling ability, processing speed, schema flexibility, integration with required analytical tools, security, and cost.&lt;/p&gt;
&lt;h2 id="key-business-drivers-of-iot"&gt;&lt;strong&gt;Key Business Drivers of IoT&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/08/iot1-scaled_hu_268480ab1f6ed250.jpg 480w, https://percona.community/blog/2020/08/iot1-scaled_hu_8c06d89066a05198.jpg 768w, https://percona.community/blog/2020/08/iot1-scaled_hu_8bd5e8192c40d276.jpg 1400w"
src="https://percona.community/blog/2020/08/iot1-scaled.jpg" alt=" " /&gt;&lt;/figure&gt; &lt;a href="https://www.freepik.com/vectors/coffee" target="_blank" rel="noopener noreferrer"&gt;Coffee vector created by macrovector - www.freepik.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In the implementation of IoT applications for enterprises, there’s a need for flexibility: processing data at the edge and synchronizing it between edge servers and the cloud. No single commercial database can fulfill all such needs of an organization. IoT development builds on DevOps, agile, and other modern development methodologies, and thousands of developers are coming up with innovative IoT products, exponentially increasing the number of new devices and sources of data. Hence, the faster they can take an idea and develop it, the better. An &lt;a href="https://www.percona.com/blog/2020/04/30/the-state-of-the-open-source-database-industry-in-2020-part-four/" target="_blank" rel="noopener noreferrer"&gt;open-source database&lt;/a&gt; is a cost-effective and versatile option for business IoT applications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The database can bring together data from all the devices and sensors, allowing developers to be creative and develop internal tools, standalone products, or components of bigger systems.&lt;/li&gt;
&lt;li&gt;It offers several tool kits and libraries for the faster development of IoT devices while keeping the risk and costs under control. Further, open-source hardware like Arduino and Raspberry Pi can help turn up several IoT devices, from home security to health monitors.&lt;/li&gt;
&lt;li&gt;An open source database lowers the cost of the device, because the ecosystem offers a variety of accessible databases such as MongoDB, Cassandra, and MySQL/MariaDB that help manage data at a lower cost. This allows enterprises to experiment with solutions that would otherwise be dismissed because of the high cost of licenses for development tools and software components.&lt;/li&gt;
&lt;li&gt;It makes it easy for developers to prototype IoT devices and turn them into full-fledged products such as smart aquariums and thermostats. As open source is accessible to all, developers just need to tap a few pre-existing open source libraries, customize them to their needs, and contribute the changes back to the community.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, several startups are building wearables that can sense environmental factors, such as air composition and microbial content, and match the readings against public databases to warn the wearer about traces of a specific pathogen in real time. This is feasible because they leverage existing open source libraries and tools.&lt;/p&gt;
&lt;h2 id="iot-database-architecture"&gt;&lt;strong&gt;IoT Database Architecture&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In a typical IoT architecture, hundreds to thousands of sensors and actuators are connected with the edge server, and the enterprise IoT solution collects data from all these devices continuously.  Cloud MQTT, Apache Kafka, and Rest Service components are used to &lt;a href="https://dzone.com/articles/iot-and-event-streaming-at-scale-with-kafka-and-mq" target="_blank" rel="noopener noreferrer"&gt;ingest the IoT data streams&lt;/a&gt; from the devices to the database. Next, edge analytics performs the translation, aggregation, and filtering of the incoming data, which allows real-time decision making at the edge. &lt;/p&gt;
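&lt;p&gt;As a rough illustration of that edge-side aggregation and filtering step, here is a minimal pure-Python sketch; the function name and the fixed-size windowing scheme are illustrative assumptions, not part of any particular IoT stack:&lt;/p&gt;

```python
from statistics import mean

# Hypothetical edge-analytics step: condense raw sensor readings into
# fixed-size windows before forwarding them to the central database,
# so the cloud receives one aggregated value per window instead of
# every raw sample.
def aggregate_window(readings, window=3):
    """Average consecutive readings in groups of `window` samples."""
    out = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        out.append(round(mean(chunk), 2))
    return out

print(aggregate_window([21.0, 21.2, 21.1, 35.0, 21.3, 21.2]))
```

&lt;p&gt;Real deployments would do this per device and per metric, but the principle is the same: reduce the stream at the edge so the database ingests less data.&lt;/p&gt;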
&lt;p&gt;The database must support high-speed read and write operations with sub-millisecond latency. It helps in performing complex analytical operations on the data from the edge server. The database then communicates commands to the IoT devices and stores the data for as long as required.  Simply put, the whole IoT implementation is centered around the idea of data collection/insertion through sensors and sending instructions back to those devices. And so, open-source software like databases and even VPNs (check out &lt;a href="https://vpn-review.com/" target="_blank" rel="noopener noreferrer"&gt;VPN reviews&lt;/a&gt; before deciding on one), which helps boost device security by protecting against IoT attacks such as botnets and MITM, is vital to enterprise-grade IoT applications.&lt;/p&gt;
&lt;p&gt;IoT applications generate enormous volumes of data: RFID data, streaming data, sensory data, and more. Moreover, IoT solutions are distributed across geographical regions. The dynamic nature of IoT data thus demands a database that allows you to manage the data efficiently. IoT solutions also operate across diverse environments, which makes it tough to choose an adequate database. Here are a few points to bear in mind when choosing a fitting database for your IoT system:&lt;/p&gt;
&lt;h3 id="scalability"&gt;Scalability&lt;/h3&gt;
&lt;p&gt;An IoT solution should scale out automatically to serve a growing load and prevent outages due to a lack of resources. Therefore, the database you choose for IoT applications must be scalable. Ideally, IoT databases should scale linearly, such that adding a server to the cluster increases throughput proportionally. Distributed databases work best for IoT solutions, as they can run on commodity hardware and scale by adding and removing servers from the database cluster as needed. On the other hand, if the application collects only a small amount of data, a centralized database works.&lt;/p&gt;
&lt;h3 id="ability-to-manage-voluminous-data"&gt;Ability to Manage Voluminous Data&lt;/h3&gt;
&lt;p&gt;As mentioned earlier, IoT generates vast amounts of data in real-time. The success of an open source database lies in the efficient management of data while processing events as they stream and dealing with data security. &lt;/p&gt;
&lt;h3 id="fault-tolerant--high-availability"&gt;Fault-Tolerant &amp; High Availability&lt;/h3&gt;
&lt;p&gt;An ideal IoT database should be fault-tolerant and highly available. For instance, hardware and software updates are often known to interrupt normal data operations; this should not be the case. Similarly, if a node in the database cluster is down for some reason, the cluster should still be able to serve read and write requests. Open source distributed SQL database management systems like CrateDB provide automated replication of data across the cluster to ensure high availability, and can heal failed nodes automatically.&lt;/p&gt;
&lt;h3 id="improved-flexibility"&gt;Improved Flexibility&lt;/h3&gt;
&lt;p&gt;An increasing number of IoT solutions are adopting a &lt;a href="https://www.digiteum.com/cloud-fog-edge-computing-iot" target="_blank" rel="noopener noreferrer"&gt;combination of cloud and fog computing at the edge&lt;/a&gt;. Therefore, the open source database you choose should be flexible enough to process data at the edge servers and then synchronize it between these servers and the cloud.&lt;/p&gt;
&lt;h3 id="advanced-capabilities"&gt;Advanced Capabilities&lt;/h3&gt;
&lt;p&gt;Depending on the IoT solution, you would require a database that is capable of real-time data streaming, data filtering, data aggregation, real-time analytics, near-zero latency read operations, geo distribution, and schema flexibility among others. Use these questions to determine your data needs for the IoT solution and select a database that’s most suitable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What kind of data processing and decision making is being delegated to the edge servers?&lt;/li&gt;
&lt;li&gt;Is the cloud solution deployed in one geographical region, or distributed across various regions?&lt;/li&gt;
&lt;li&gt;What’s the volume of data transferred from the IoT device to the edge server to the central server? (peak volume)&lt;/li&gt;
&lt;li&gt;Does your IoT solution control any devices or actuators? Do they need a real-time response?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="top-open-source-dbs-for-iot-apps"&gt;&lt;strong&gt;Top Open Source DBs for IoT Apps&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;It’s clear that open-source databases serve as catalysts for IoT applications, but every business has unique requirements, which means that choosing the right database for the various stages of IoT implementation is important. Further, IoT applications are mostly heterogeneous and domain-centric, which makes it tough to choose an appropriate database against the parameters discussed above. So, let’s end this piece with three of the best open source databases for enterprise-level IoT applications:&lt;/p&gt;
&lt;h3 id="mongodb"&gt;MongoDB&lt;/h3&gt;
&lt;p&gt;A flexible and powerful open-source database that supports features like indexes, range queries, sorting, aggregations, and JSON. It also supports a rich query language for CRUD (create, read, update, delete) operations as well as data aggregation, text search, and geospatial queries. In fact, Bosch has built its IoT suite on &lt;a href="https://www.percona.com/software/mongodb" target="_blank" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt;.  MongoDB has a few clear benefits for IoT data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It’s a powerful database that’s easily scalable and can effectively manage huge volumes of data.&lt;/li&gt;
&lt;li&gt;It is document-oriented.&lt;/li&gt;
&lt;li&gt;It can be used for general purposes.&lt;/li&gt;
&lt;li&gt;Being a NoSQL database, MongoDB uses JSON-like documents with schemas.&lt;/li&gt;
&lt;/ul&gt;
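&lt;p&gt;A minimal sketch of that document model, in plain Python rather than the actual pymongo API: heterogeneous devices write self-describing documents into one collection, and an equality filter mimics a simple MongoDB-style query. All names here are hypothetical illustrations:&lt;/p&gt;

```python
# A "collection" of JSON-like documents: devices with different
# fields (temperature vs. humidity) can share one collection because
# each document carries its own schema.
readings = [
    {"device": "thermo-1", "temp_c": 21.4, "ts": 1},
    {"device": "hygro-7", "humidity": 55, "ts": 2},
    {"device": "thermo-1", "temp_c": 22.0, "ts": 3},
]

def find(collection, **criteria):
    """Return documents whose fields equal all the given criteria."""
    return [d for d in collection
            if all(d.get(k) == v for k, v in criteria.items())]

print(find(readings, device="thermo-1"))
```

&lt;p&gt;This flexibility is what makes the document model a natural fit for IoT fleets where sensor payloads differ from device to device.&lt;/p&gt;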
&lt;h3 id="cassandra"&gt;Cassandra&lt;/h3&gt;
&lt;p&gt;A highly scalable, distributed open-source database for managing enormous amounts of structured data across numerous commodity servers. Apache Cassandra provides linear scaling, simplicity, and easy distribution of data across multiple database servers, which is ideal for many large-scale IoT applications. The advantages of &lt;a href="http://cassandra.apache.org/" target="_blank" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt; include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It’s a free and open source distributed NoSQL database management system that can handle voluminous data across multiple commodity servers, ensuring high availability with no single point of failure.&lt;/li&gt;
&lt;li&gt;It’s decentralized. Each node in the cluster is identical.&lt;/li&gt;
&lt;li&gt;It demonstrates high performance.&lt;/li&gt;
&lt;li&gt;It handles the immense scale of time-series data coming from devices, users, sensors, and similar sources across locations.&lt;/li&gt;
&lt;li&gt;Each update gives you a choice of synchronous or asynchronous replication, putting you in complete control.&lt;/li&gt;
&lt;li&gt;It avoids downtime, as both reads and writes execute in real time.&lt;/li&gt;
&lt;/ul&gt;
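&lt;p&gt;The time-series workload described above is commonly modeled in Cassandra with a compound partition key, so that one device’s day of readings lives together on one set of replicas. Here is a rough pure-Python stand-in for that layout; the key shape &lt;code&gt;(device_id, day)&lt;/code&gt; is an assumed example, and this is not the real driver API:&lt;/p&gt;

```python
from collections import defaultdict

# Model of a wide-partition time-series table: the dict key plays the
# role of Cassandra's partition key, and each partition holds the
# (timestamp, value) rows for one device on one day.
table = defaultdict(list)

def insert(device_id, day, ts, value):
    table[(device_id, day)].append((ts, value))

insert("sensor-9", "2020-09-03", 100, 0.42)
insert("sensor-9", "2020-09-03", 160, 0.44)
insert("sensor-9", "2020-09-04", 220, 0.40)

# A query for one device-day touches exactly one partition,
# which is what makes reads predictable at scale.
print(table[("sensor-9", "2020-09-03")])
```

&lt;p&gt;Bounding each partition by day (or hour) also keeps partitions from growing without limit as devices report for years.&lt;/p&gt;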
&lt;h3 id="rethinkdb"&gt;RethinkDB&lt;/h3&gt;
&lt;p&gt;RethinkDB is a highly scalable JSON database built for the real-time web, and one of the most popular open source databases available today. Its real-time push architecture dramatically reduces the time and effort required to build scalable IoT apps. Plus, it has a flexible query language that is easy to set up and learn. Here are a few reasons &lt;a href="https://rethinkdb.com/" target="_blank" rel="noopener noreferrer"&gt;RethinkDB&lt;/a&gt; is ideal for IoT solutions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It offers an expressive, adaptable query language (ReQL).&lt;/li&gt;
&lt;li&gt;It offers asynchronous queries via EventMachine in Ruby and Tornado in Python.&lt;/li&gt;
&lt;li&gt;It offers a variety of mathematical operators such as floor, ceil, and round.&lt;/li&gt;
&lt;li&gt;If the primary server fails, the commands are automatically shifted to a new one.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;  Handling IoT data effectively requires you to choose a suitable open source database. However, finding an efficient database can be a tricky undertaking, considering the fact that the IoT environment keeps changing. The information shared in this post will take you a step closer to understanding why open source databases help developers and organizations manage IoT data effectively.&lt;/p&gt;</content:encoded>
      <author>Gaurav Belani</author>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/08/iot1-scaled_hu_79a12bc897293f49.jpg"/>
      <media:content url="https://percona.community/blog/2020/08/iot1-scaled_hu_62147ffb1751e118.jpg" medium="image"/>
    </item>
    <item>
      <title>IIoT platform databases - How Mail.ru Cloud Solutions deals with petabytes of data coming from a multitude of devices</title>
      <link>https://percona.community/blog/2020/07/24/iiot-platform-databases-how-mail-ru-cloud-solutions-deals-with-petabytes-of-data-coming-from-a-multitude-of-devices/</link>
      <guid>https://percona.community/blog/2020/07/24/iiot-platform-databases-how-mail-ru-cloud-solutions-deals-with-petabytes-of-data-coming-from-a-multitude-of-devices/</guid>
      <pubDate>Fri, 24 Jul 2020 14:11:14 UTC</pubDate>
      <description> Hello, my name is Andrey Sergeyev and I work as a Head of IoT Solution Development at Mail.ru Cloud Solutions. We all know there is no such thing as a universal database. Especially when the task is to build an IoT platform that would be capable of processing millions of events from various sensors in near real-time.</description>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image6.png" alt="IIoT platform databases - How Mail.ru Cloud Solutions" /&gt;&lt;/figure&gt; Hello, my name is Andrey Sergeyev and I work as a Head of IoT Solution Development at &lt;a href="https://mcs.mail.ru/" target="_blank" rel="noopener noreferrer"&gt;Mail.ru Cloud Solutions&lt;/a&gt;. We all know there is no such thing as a universal database. Especially when the task is to build an IoT platform that would be capable of processing millions of events from various sensors in near real-time.&lt;/p&gt;
&lt;p&gt;Our product &lt;a href="https://mcs.mail.ru/iot/" target="_blank" rel="noopener noreferrer"&gt;Mail.ru IoT Platform&lt;/a&gt; started as a Tarantool-based prototype. I’m going to tell you about our journey, the problems we faced and the solutions we found. I will also show you a current architecture for the modern Industrial Internet of Things platform. In this article we will look into:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;our requirements for the database, universal solutions, and the CAP theorem&lt;/li&gt;
&lt;li&gt;whether the database + application server in one approach is a silver bullet&lt;/li&gt;
&lt;li&gt;the evolution of the platform and the databases used in it&lt;/li&gt;
&lt;li&gt;the number of Tarantools we use and how we came to this&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="mailru-iot-platform-today"&gt;Mail.ru IoT Platform today&lt;/h2&gt;
&lt;p&gt;Our product Mail.ru IoT Platform is a scalable and hardware-independent platform for building Industrial Internet of Things solutions. It enables us to collect data from hundreds of thousands of devices and process this stream in near real-time using user-defined rules (scripts in Python and Lua), among other tools.&lt;/p&gt;
&lt;p&gt;The platform can store an unlimited amount of raw data from the sources. It also has a set of ready-made components for data visualization and analysis as well as built-in tools for predictive analysis and platform-based app development.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image1.png" alt="Mail.ru IoT Platform set-up" /&gt;&lt;/figure&gt; Mail.ru IoT Platform set-up[/caption] The platform is currently available for on-premise installation on customers’ facilities. In 2020 we are planning its release as a public cloud service.&lt;/p&gt;
&lt;h2 id="tarantool-based-prototype-how-we-started"&gt;Tarantool-based prototype: how we started&lt;/h2&gt;
&lt;p&gt;Our platform started as a pilot project – a prototype with a single Tarantool instance. Its primary functions were receiving a data stream from the OPC server, processing the events with Lua scripts in real-time, monitoring key indicators based on that data, and generating events and alerts for upstream systems.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image3.png" alt="Flowchart of the Tarantool-based prototype" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Flowchart of the Tarantool-based prototype[/caption]   This prototype has even shown itself in the field conditions of a multi-well pad in Iraq. It worked at an oil platform in the Persian Gulf, monitoring key indicators and sending data to the visualization system and the event log. The pilot was deemed successful, but then, as it often happens with prototypes, it was put into cold storage until we got our hands on it.&lt;/p&gt;
&lt;h2 id="our-aims-in-developing-the-iot-platform"&gt;Our aims in developing the IoT platform&lt;/h2&gt;
&lt;p&gt;Along with the prototype, we took on the challenge of creating a fully functional, scalable, and failsafe IoT platform that could then be released as a public cloud service.&lt;/p&gt;
&lt;p&gt;We had to build a platform with the following specifications:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Simultaneous connection of hundreds of thousands of devices&lt;/li&gt;
&lt;li&gt;Receiving millions of events every second&lt;/li&gt;
&lt;li&gt;Datastream processing in near real-time&lt;/li&gt;
&lt;li&gt;Storing several years of raw data&lt;/li&gt;
&lt;li&gt;Analytics tools for both streaming and historic data&lt;/li&gt;
&lt;li&gt;Support for deployment in multiple data centers to maximize disaster tolerance&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="pros-and-cons-of-the-platform-prototype"&gt;Pros and cons of the platform prototype&lt;/h2&gt;
&lt;p&gt;At the start of active development the prototype had the following structure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tarantool that was used as a database + Application Server&lt;/li&gt;
&lt;li&gt;all the data was stored in Tarantool’s memory&lt;/li&gt;
&lt;li&gt;this Tarantool had a Lua app that performed the data reception and processing and called the user scripts with incoming data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;This type of app structure has its advantages:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The code and the data are stored in one place – this lets us manipulate the data right in the application memory and avoid the extra network round trips typical of traditional apps&lt;/li&gt;
&lt;li&gt;Tarantool uses a JIT (just-in-time) compiler for Lua. It compiles Lua code into machine code, allowing simple Lua scripts to execute at near-C speed (40,000 RPS per core and even higher!)&lt;/li&gt;
&lt;li&gt;Tarantool is based upon cooperative multitasking. This means that every call of stored procedure runs in its own coroutine-like fiber. It gives a further performance boost for the tasks with I/O operations, e.g. network manipulations&lt;/li&gt;
&lt;li&gt;Efficient use of resources: tools capable of handling 40,000 RPS per core are quite rare&lt;/li&gt;
&lt;/ol&gt;
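&lt;p&gt;The fiber model from point 3 can be illustrated with cooperative multitasking in any language. Below is a minimal Python asyncio sketch standing in for Tarantool’s Lua fibers (the names and counts are invented for illustration): many concurrent “stored procedure” calls share one OS thread and yield to each other at I/O points instead of blocking.&lt;/p&gt;

```python
import asyncio

async def handle_call(device_id: int) -> str:
    # Each call runs as its own cooperative task, analogous to a
    # Tarantool fiber: it yields at I/O points rather than blocking
    # an OS thread.
    await asyncio.sleep(0)  # stand-in for a network or disk wait
    return f"processed-{device_id}"

async def main():
    # A thousand concurrent calls share a single OS thread.
    return await asyncio.gather(*(handle_call(i) for i in range(1000)))

results = asyncio.run(main())
print(len(results), results[0])  # → 1000 processed-0
```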
&lt;p&gt;&lt;strong&gt;There are also significant disadvantages:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We need to store several years of raw data from the devices, but we don’t have hundreds of petabytes of memory for Tarantool&lt;/li&gt;
&lt;li&gt;This one follows directly from advantage #1. All of the platform code consists of procedures stored in the database, which means that any codebase update is basically a database update, and that sucks&lt;/li&gt;
&lt;li&gt;Dynamic scaling is difficult because the whole system’s performance depends on the memory it uses. Long story short, you can’t just add another Tarantool to increase throughput without dedicating 24–32 GB of memory (on startup, Tarantool allocates all of its data memory) and resharding the existing data. Besides, once we shard, we lose advantage #1: the data and the code may no longer live in the same Tarantool&lt;/li&gt;
&lt;li&gt;Performance deteriorates as the code grows more complex along with the platform. This happens not only because Tarantool executes all Lua code in a single system thread, but also because LuaJIT falls back to interpreter mode instead of compiling when dealing with complex code&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; Tarantool is a good choice for creating an MVP, but it doesn’t work for a fully functional, easily maintained, and failsafe IoT platform capable of receiving, processing, and storing data from hundreds of thousands of devices.&lt;/p&gt;
&lt;h2 id="two-primary-problems-that-we-wanted-to-solve"&gt;Two primary problems that we wanted to solve&lt;/h2&gt;
&lt;p&gt;First of all, there were two main issues we wanted to sort out:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ditching the database + application server concept. We wanted to update the app code independently of the database.&lt;/li&gt;
&lt;li&gt;Simplifying dynamic scaling under load. We wanted simple, independent horizontal scaling of as many functions as possible.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To solve these problems, we took an approach that was still relatively untested at the time: a microservice architecture divided into stateless services (the applications) and stateful services (the database).&lt;/p&gt;
&lt;p&gt;To make maintaining and scaling out the stateless services even simpler, we containerized them and adopted Kubernetes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image9.png" alt="Kubernetes" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now that we had figured out the stateless services, we had to decide what to do with the data.&lt;/p&gt;
&lt;h2 id="basic-requirements-for-the-iot-platform-database"&gt;Basic requirements for the IoT platform database&lt;/h2&gt;
&lt;p&gt;At first, we tried not to overcomplicate things – we wanted to store all the platform data in one single universal database. Having analyzed our goals, we came up with the following list of requirements for the universal database:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;ACID transactions&lt;/strong&gt; – the clients will keep a register of their devices on the platform, so we wouldn’t want to lose some of them upon data modification&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strict consistency&lt;/strong&gt; – we have to get the same responses from all of the database nodes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Horizontal scaling for writing and reading&lt;/strong&gt; – the devices send a huge stream of data that has to be processed and saved in near real-time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fault tolerance&lt;/strong&gt; – the platform has to be capable of manipulating the data from multiple data centers to maximize fault tolerance&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Availability&lt;/strong&gt; – no one would use a cloud platform that shuts down whenever one of the nodes fails&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Storage volume and good compression&lt;/strong&gt; – we have to store several years (petabytes!) of raw data that also needs to be compressed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt; – quick access to raw data and tools for stream analytics, including access from the user scripts (tens of thousands of reading requests per second!)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SQL&lt;/strong&gt; – we want to let our clients run analytics queries in a familiar language&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="checking-our-requirements-with-the-cap-theorem"&gt;Checking our requirements with the CAP theorem&lt;/h2&gt;
&lt;p&gt;Before we started examining all the available databases to see if they meet our requirements, we decided to check whether our requirements are adequate by using a well-known tool – the CAP theorem.&lt;/p&gt;
&lt;p&gt;The CAP theorem states that a distributed system cannot simultaneously have more than two of the following qualities:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Consistency&lt;/strong&gt; – data in all of the nodes have no contradictions at any point in time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Availability&lt;/strong&gt; – any request to a distributed system results in a correct response, however, without a guarantee that the responses of all system nodes match&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Partition tolerance&lt;/strong&gt; – even when the nodes are not connected, they continue working independently&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image11.png" alt="Checking our requirements with the CAP theorem" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For instance, the Master-Slave PostgreSQL cluster with synchronous replication is a classic example of a CA system and Cassandra is a classic AP system.&lt;/p&gt;
&lt;p&gt;Let’s get back to our requirements and classify them with the CAP theorem:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;ACID transactions and strict (or at least not eventual) consistency are C.&lt;/li&gt;
&lt;li&gt;Horizontal scaling for writing and reading + availability is A (multi-master).&lt;/li&gt;
&lt;li&gt;Fault tolerance is P: if one data center shuts down, the system should stand.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image10.png" alt="ACID" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; the universal database we require has to offer all of the CAP theorem qualities, which means that none of the existing databases can fulfill all of our needs.&lt;/p&gt;
&lt;h2 id="choosing-the-database-based-on-the-data-the-iot-platform-works-with"&gt;Choosing the database based on the data the IoT platform works with&lt;/h2&gt;
&lt;p&gt;Being unable to pick a universal database, we decided to split the data into two types and choose a separate database for each of them.&lt;/p&gt;
&lt;p&gt;As a first approximation, we divided the data into two types:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Metadata&lt;/strong&gt; – the world model, the devices, the rules, the settings. Practically all the data except the data from the end devices&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Raw data from the devices&lt;/strong&gt; – sensor readings, telemetry, and technical information from the devices. These are time series of messages containing a value and a timestamp&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="choosing-the-database-for-the-metadata"&gt;Choosing the database for the metadata&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Our requirements&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Metadata is inherently relational. It is typically small in volume and rarely modified, but it is quite important. We can’t lose it, so consistency matters – at least in the sense of asynchronous replication – as do ACID transactions and horizontal read scaling.&lt;/p&gt;
&lt;p&gt;Since this data is comparatively small and changes rather infrequently, we can ditch horizontal write scaling and accept that the database may briefly be unavailable for writes in case of a failure. That is why, in the language of the CAP theorem, we need a CA system.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What usually works.&lt;/strong&gt; Framed this way, any classic relational database with support for asynchronous replication clusters would do, e.g. PostgreSQL or MySQL.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Our platform aspects.&lt;/strong&gt; We also needed support for trees with specific requirements. The prototype had a feature borrowed from RTDB-class systems (real-time databases) – modeling the world as a tag tree. Tag trees let us combine all of a client’s devices in one tree structure, which makes managing and displaying a large number of devices much easier.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image4.png" alt="This is how the device tree looks like" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is what the device tree looks like&lt;/p&gt;
&lt;p&gt;This tree enables linking the end devices with their environment. For example, we can put devices physically located in the same room into one subtree, which facilitates working with them later. This function is very convenient; besides, we wanted to work with RTDBs in the future, and this functionality is basically the industry standard there.&lt;/p&gt;
&lt;p&gt;To have a full implementation of the tag trees, a potential database must meet the following requirements:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Support for trees with arbitrary width and depth.&lt;/li&gt;
&lt;li&gt;Modification of tree elements in ACID transactions.&lt;/li&gt;
&lt;li&gt;High performance when traversing a tree.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Classic relational databases can handle small trees quite well, but they don’t do as well with arbitrary trees.&lt;/p&gt;
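&lt;p&gt;To make the difficulty concrete, here is a minimal sketch of the subtree walk a tag tree needs – in Python with invented data, whereas the platform itself does this in Lua stored procedures. Next to the data it is a cheap in-memory loop; through a classic relational database it becomes a recursive query or one round trip per tree level.&lt;/p&gt;

```python
from collections import defaultdict

# Hypothetical tag tree stored as an adjacency list: child -> parent.
edges = {2: 1, 3: 1, 4: 2, 5: 2, 6: 3}

# Invert it once into parent -> children for traversal.
children = defaultdict(list)
for child, parent in edges.items():
    children[parent].append(child)

def subtree(root):
    """Collect all node ids under `root`, depth-first."""
    found, stack = [], [root]
    while stack:
        node = stack.pop()
        found.append(node)
        stack.extend(children[node])
    return found

print(sorted(subtree(2)))  # → [2, 4, 5]
```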
&lt;p&gt;&lt;strong&gt;Possible solution.&lt;/strong&gt; Using two databases: a graph one for the tree and the relational one for all the other metadata.&lt;/p&gt;
&lt;p&gt;This approach has major disadvantages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;To ensure consistency between two databases, you need to add an external transaction coordinator.&lt;/li&gt;
&lt;li&gt;This design is difficult to maintain and not so reliable.&lt;/li&gt;
&lt;li&gt;As a result, we get two databases instead of one, while the graph database is only required for supporting limited functionality.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image7.png" alt="A possible, but not a perfect solution with two databases" /&gt;&lt;/figure&gt;
A possible, but not a perfect solution with two databases  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Our solution for storing metadata.&lt;/strong&gt; We thought a little longer and remembered that this functionality was initially implemented in a Tarantool-based prototype and it turned out very well.&lt;/p&gt;
&lt;p&gt;Before we continue, I would like to give an unorthodox definition of Tarantool: &lt;em&gt;Tarantool is not a database, but a set of primitives for building a database for your specific case.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Available primitives out of the box:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Spaces – an equivalent of tables for storing data in the databases.&lt;/li&gt;
&lt;li&gt;Full-fledged ACID transactions.&lt;/li&gt;
&lt;li&gt;Asynchronous replication using WAL logs.&lt;/li&gt;
&lt;li&gt;A sharding tool that supports automatic resharding.&lt;/li&gt;
&lt;li&gt;Ultrafast LuaJIT for stored procedures.&lt;/li&gt;
&lt;li&gt;Large standard library.&lt;/li&gt;
&lt;li&gt;LuaRocks package manager with even more packages.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our CA solution was a relational + graph Tarantool-based database. We assembled perfect metadata storage with Tarantool primitives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Spaces for storage.&lt;/li&gt;
&lt;li&gt;ACID transactions – already in place.&lt;/li&gt;
&lt;li&gt;Asynchronous replication – already in place.&lt;/li&gt;
&lt;li&gt;Relations – we built them upon stored procedures.&lt;/li&gt;
&lt;li&gt;Trees – built upon stored procedures too.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our cluster setup is classic for systems like these – one Master for writing and several Slaves with asynchronous replication for read scaling.&lt;/p&gt;
&lt;p&gt;As a result, we have a fast scalable hybrid of relational and graph databases.&lt;/p&gt;
&lt;p&gt;One Tarantool instance is able to process thousands of read requests, including those with active tree traversals.&lt;/p&gt;
&lt;h3 id="choosing-the-database-for-storing-the-data-from-the-devices"&gt;Choosing the database for storing the data from the devices&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Our requirements&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This type of data is characterized by frequent writes and sheer volume: millions of devices, several years of storage, petabytes of incoming messages and stored data. High availability is very important, since the sensor readings feed both the user-defined rules and our internal services.&lt;/p&gt;
&lt;p&gt;It is important that the database offers horizontal scaling for reading and writing, availability, and fault tolerance, as well as ready-made analytical tools for working with this data array, preferably SQL-based. We can sacrifice consistency and ACID transactions, so in terms of the CAP theorem, we need an AP system.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Additional requirements.&lt;/strong&gt; We had a few additional requirements for the solution that would store the gigantic amounts of data:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Time series – sensor data that we wanted to store in a specialized database.&lt;/li&gt;
&lt;li&gt;Open-source – the advantages of open source code are self-explanatory.&lt;/li&gt;
&lt;li&gt;A free cluster mode – paid-only clustering is a common problem among modern databases.&lt;/li&gt;
&lt;li&gt;Good compression – given the amount of data and its homogeneity, we wanted to compress the stored data efficiently.&lt;/li&gt;
&lt;li&gt;Proven in operation – to minimize risk, we wanted to start with a database that someone was already actively running at loads similar to ours.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Our solution.&lt;/strong&gt; The only database suiting our requirements was ClickHouse – a columnar time-series database with replication, multi-master, sharding, SQL support, and a free cluster. Moreover, Mail.ru has many years of successful experience in operating one of the largest ClickHouse clusters.&lt;/p&gt;
&lt;p&gt;But ClickHouse, however good it may be, didn’t work for us.&lt;/p&gt;
&lt;h3 id="problems-with-the-database-for-device-data-and-their-solution"&gt;Problems with the database for device data and their solution&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Problem with write performance.&lt;/strong&gt; We immediately ran into a write-performance problem with the large incoming data stream. Data needs to reach the analytical database as soon as possible, so that the rules analyzing the flow of events in real time can look at the history of a particular device and decide whether to raise an alert.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution.&lt;/strong&gt; ClickHouse handles many small single-row inserts poorly, but works well with large packets of data, easily writing millions of rows in batches. We decided to buffer the incoming data stream and then insert the data in batches.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image5.png" alt="This is how we dealt with poor writing performance" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is how we dealt with poor write performance&lt;/p&gt;
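&lt;p&gt;The buffering layer can be sketched as follows. This is an illustrative Python sketch rather than the platform’s actual code, and &lt;code&gt;max_rows&lt;/code&gt; and &lt;code&gt;max_delay&lt;/code&gt; are invented thresholds: single events accumulate until a size or time limit is reached, then go to ClickHouse as one batch insert.&lt;/p&gt;

```python
import time

class BatchBuffer:
    """Buffer single events and hand them to `sink` in batches,
    flushing on a row-count or time threshold. A sketch of the
    buffering layer placed in front of ClickHouse; the thresholds
    are illustrative, not the platform's real values."""

    def __init__(self, sink, max_rows=100_000, max_delay=1.0):
        self.sink, self.max_rows, self.max_delay = sink, max_rows, max_delay
        self.rows, self.first_at = [], None

    def add(self, row):
        if self.first_at is None:
            self.first_at = time.monotonic()
        self.rows.append(row)
        full = len(self.rows) >= self.max_rows
        stale = time.monotonic() - self.first_at >= self.max_delay
        if full or stale:
            self.flush()

    def flush(self):
        if self.rows:
            self.sink(self.rows)  # one big INSERT instead of many small ones
            self.rows, self.first_at = [], None

batches = []
buf = BatchBuffer(batches.append, max_rows=3, max_delay=60)
for event in range(7):
    buf.add(event)
buf.flush()  # push the incomplete tail batch
print([len(b) for b in batches])  # → [3, 3, 1]
```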
&lt;p&gt;The write problems were solved, but it cost us a lag of several seconds between data entering the system and appearing in our database.&lt;/p&gt;
&lt;p&gt;Such a lag is critical for the various algorithms that react to sensor readings in real time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Problem with read performance.&lt;/strong&gt; Stream analytics for real-time data processing constantly needs information from the database – tens of thousands of small queries. On average, one ClickHouse node handles only about a hundred analytical queries at a time: it was designed for infrequent, heavy analytical queries over large amounts of data. This is, of course, unsuitable for calculating trends over the data stream from hundreds of thousands of sensors.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image2.png" alt="ClickHouse doesn’t handle a large number of queries well" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;ClickHouse doesn’t handle a large number of queries well&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution.&lt;/strong&gt; We decided to place a cache in front of ClickHouse, meant to store the hot data requested most often during the last 24 hours.&lt;/p&gt;
&lt;p&gt;24 hours of data is not a year but still quite a lot – so we need an AP system with horizontal scaling for reading and writing and a focus on performance while writing single events and numerous readings.&lt;/p&gt;
&lt;p&gt;We also need high availability, analytic tools for time series, persistence, and built-in TTL. So, we needed a fast ClickHouse that could store everything in memory. Being unable to find any suitable solutions, we decided to build one based on the Tarantool primitives:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Persistence – check (WAL-logs + snapshots).&lt;/li&gt;
&lt;li&gt;Performance – check; all the data is in the memory.&lt;/li&gt;
&lt;li&gt;Scaling – check; replication + sharding.&lt;/li&gt;
&lt;li&gt;High availability – check.&lt;/li&gt;
&lt;li&gt;Analytics tools for time series (grouping, aggregation, etc.) – we built them upon stored procedures.&lt;/li&gt;
&lt;li&gt;TTL – built upon stored procedures with one background fiber (coroutine).&lt;/li&gt;
&lt;/ol&gt;
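&lt;p&gt;A minimal sketch of such a hot cache – in Python, rather than the Tarantool/Lua implementation described above, with an invented &lt;code&gt;ttl&lt;/code&gt; and device names: per-device time series with TTL eviction, which in the real system runs in a background fiber.&lt;/p&gt;

```python
import time
from collections import deque

class HotCache:
    """Per-device time series with TTL eviction: a sketch of the
    24-hour hot cache. The real platform built this on Tarantool
    spaces with a background fiber; a dict of deques stands in."""

    def __init__(self, ttl=24 * 3600):
        self.ttl = ttl
        self.series = {}  # device_id -> deque of (timestamp, value)

    def append(self, device_id, ts, value):
        self.series.setdefault(device_id, deque()).append((ts, value))

    def expire(self, now=None):
        # In Tarantool this loop would live in a background fiber.
        now = time.time() if now is None else now
        for points in self.series.values():
            while points and now - self.ttl > points[0][0]:
                points.popleft()

    def last(self, device_id, n):
        points = self.series.get(device_id, deque())
        return list(points)[-n:]

cache = HotCache(ttl=10)
cache.append("pump-1", ts=100, value=7.5)
cache.append("pump-1", ts=105, value=7.9)
cache.expire(now=112)           # drops the point older than 10 s
print(cache.last("pump-1", 5))  # → [(105, 7.9)]
```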
&lt;p&gt;The solution turned out to be powerful and easy to use. One instance handled 10,000 read requests per second, including analytical ones.&lt;/p&gt;
&lt;p&gt;Here is the architecture we came up with:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/07/image8.png" alt="Final architecture: ClickHouse as the analytic database and the Tarantool cache storing 24 hours of data. " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Final architecture: ClickHouse as the analytic database and the Tarantool cache storing 24 hours of data.  &lt;/p&gt;
&lt;h2 id="a-new-type-of-data--the-state-and-its-storing"&gt;A new type of data – the state and it’s storing&lt;/h2&gt;
&lt;p&gt;We found a specific database for each type of data, but as the platform developed, a third type appeared – the status: the current states of sensors and devices, as well as some global variables for the stream analytics rules.&lt;/p&gt;
&lt;p&gt;Let’s say we have a lightbulb. The light may be either on or off, and we always need access to its current state, including from within the rules. Another example is a variable in the stream rules – e.g., a counter of some sort.&lt;/p&gt;
&lt;p&gt;This type of data needs frequent writing and fast access but doesn’t take a lot of space.&lt;/p&gt;
&lt;p&gt;The metadata storage doesn’t suit this type of data well, because the status may change quite often and we only have one Master for writes. The device data storage doesn’t work well either: the current status may have last changed three years ago, so it won’t be in the hot cache, yet we still need fast read access to it.&lt;/p&gt;
&lt;p&gt;This means that the status database needs to have horizontal scaling for reading and writing, high availability, fault tolerance, and consistency on the values/documents level. We can sacrifice global consistency and ACID transactions.&lt;/p&gt;
&lt;p&gt;Any Key-Value or a document database should work: Redis sharding cluster, MongoDB, or, once again, Tarantool.&lt;/p&gt;
&lt;p&gt;Tarantool advantages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Key-value storage is the most popular way of using Tarantool.&lt;/li&gt;
&lt;li&gt;Horizontal scaling – check; asynchronous replication + sharding.&lt;/li&gt;
&lt;li&gt;Consistency on the document level – check.&lt;/li&gt;
&lt;/ol&gt;
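&lt;p&gt;The status store’s contract can be sketched like this – an illustrative Python sketch, not Tarantool code, with invented key names: every single-key operation is atomic, but there are no cross-key transactions, which is exactly the sacrifice described above.&lt;/p&gt;

```python
import threading

class StatusStore:
    """Key-value map with per-key (document-level) consistency and
    no cross-key transactions. Sharding and replication, which the
    real store gets from Tarantool, are omitted from this sketch."""

    def __init__(self):
        self._data, self._lock = {}, threading.Lock()

    def put(self, key, value):
        with self._lock:  # each single-key write is atomic
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def incr(self, key, delta=1):
        # A read-modify-write on ONE key is safe; updating two keys
        # atomically is not offered, and that is the trade-off.
        with self._lock:
            self._data[key] = self._data.get(key, 0) + delta
            return self._data[key]

store = StatusStore()
store.put("lightbulb:17", "on")
store.incr("rule:42:counter")
store.incr("rule:42:counter")
print(store.get("lightbulb:17"), store.get("rule:42:counter"))  # → on 2
```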
&lt;p&gt;As a result, we have three Tarantools, each used differently: one storing metadata, one caching device data for fast reads, and one storing status data.&lt;/p&gt;
&lt;h2 id="how-to-choose-a-database-for-your-iot-platform"&gt;How to choose a database for your IoT platform&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;There is no such thing as a universal database.&lt;/li&gt;
&lt;li&gt;Each type of data should have its own database, the one most suitable.&lt;/li&gt;
&lt;li&gt;There is a chance you may not find a fitting database in the market.&lt;/li&gt;
&lt;li&gt;Tarantool can work as the basis for a specialized database.&lt;/li&gt;
&lt;/ol&gt;</content:encoded>
      <author>Andrey Sergeev</author>
      <category>Advanced Level</category>
      <category>Cache</category>
      <category>ClickHouse</category>
      <category>DevOps</category>
      <category>IoT</category>
      <category>Kubernetes</category>
      <category>Lua</category>
      <category>NoSQL</category>
      <category>Open Source Databases</category>
      <category>SQL</category>
      <category>Tarantool</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/07/image6_hu_919c9cc21495b346.jpg"/>
      <media:content url="https://percona.community/blog/2020/07/image6_hu_cedf6902f00c5464.jpg" medium="image"/>
    </item>
    <item>
      <title>MariaDB Server Fest: Call for Papers</title>
      <link>https://percona.community/blog/2020/06/26/mariadb-server-fest-call-for-papers/</link>
      <guid>https://percona.community/blog/2020/06/26/mariadb-server-fest-call-for-papers/</guid>
      <pubDate>Fri, 26 Jun 2020 21:42:28 UTC</pubDate>
      <description>In the week of 14-20 September 2020, MariaDB Foundation will host the MariaDB Server Fest Online Conference. We welcome the Percona Community not just to participate, but also to submit papers for the event. We already have Peter Zaitsev joining as keynoter; we hope for more to come.</description>
      <content:encoded>&lt;p&gt;In the week of 14-20 September 2020, MariaDB Foundation will host the MariaDB Server Fest Online Conference. We welcome the Percona Community not just to participate, but also to submit papers for the event. We already have Peter Zaitsev joining as keynoter; we hope for more to come.&lt;/p&gt;
&lt;p&gt;Our target audience is the users of MariaDB Server – current and future. We are looking for use cases, practices, tools and insights from our user base as well as from application developers, service providers and other experts.&lt;/p&gt;
&lt;p&gt;When planning and phrasing your CfP submission at &lt;a href="https://mariadb.org/fest2020cfp/" target="_blank" rel="noopener noreferrer"&gt;https://mariadb.org/fest2020cfp/&lt;/a&gt;, think about what makes MariaDB Server unique, and what insights you can give the demanding audience.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Our audience is interested in your insights about new cool features of the latest releases, but also in underused MariaDB functionality that has been there for a while.&lt;/li&gt;
&lt;li&gt;Functionality such as system versioned tables, JSON functionality, and security features is interesting, and the same goes for usage patterns and best practices.&lt;/li&gt;
&lt;li&gt;Share your knowledge of PL/SQL, SEQUENCEs and other Oracle compatibility features, as well as your experience with overall migration strategies.&lt;/li&gt;
&lt;li&gt;Our audience is interested in comparing HA, Galera and general replication functionality to that of other similar databases, but would likely want to avoid overly confrontational flame wars on, say, Global Transaction ID.&lt;/li&gt;
&lt;li&gt;Developers and DBAs are used to seeing MariaDB positioned in contrast to MySQL (level of compatibility; differences in feature set), but may also find it insightful with comparisons to PostgreSQL, MongoDB and Oracle.&lt;/li&gt;
&lt;li&gt;Developers, sysadmins and devops are focused on technology and functionality, but are also very mindful of the implications of release schedules, security fix processes, and engaging the community in submitting code contributions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more about our conference, see our announcement at &lt;a href="https://mariadb.org/fest/" target="_blank" rel="noopener noreferrer"&gt;https://mariadb.org/fest/&lt;/a&gt; and &lt;a href="https://mariadb.org/fest2020cfp/" target="_blank" rel="noopener noreferrer"&gt;https://mariadb.org/fest2020cfp/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Finally: thank you to Tom Basil of Percona, who opened up the opportunity for us to write this guest blog on the Percona Community Blog!&lt;/p&gt;
&lt;p&gt;We hope for many interesting submissions – and, later on, attendees – from the Percona Community. Footnote: The Call for Papers is open for one more week, until the end of June.&lt;/p&gt;</content:encoded>
      <author>Kaj Arnö</author>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/MDBS_Fest_logowhite_bg_hu_bcf01eb058e3a10.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/MDBS_Fest_logowhite_bg_hu_980005dd80e7e01.jpg" medium="image"/>
    </item>
    <item>
      <title>Cassandra Where and How by John Schulz</title>
      <link>https://percona.community/blog/2020/06/24/cassandra-where-and-how-by-john-schulz/</link>
      <guid>https://percona.community/blog/2020/06/24/cassandra-where-and-how-by-john-schulz/</guid>
      <pubDate>Wed, 24 Jun 2020 12:20:22 UTC</pubDate>
      <description>If Percona Live ONLINE had graded its talks by skill level this year, John Schulz’s talk would have been essential viewing in the Beginners track. (You can watch all the event’s presentations now on Percona’s YouTube channel.) This talk was a good overview and meant for anyone who had heard of the Apache Cassandra distributed database but wasn’t sure whether it would be suitable for their project or not.</description>
      <content:encoded>&lt;p&gt;If Percona Live ONLINE had graded its talks by skill level this year, John Schulz’s talk would have been essential viewing in the Beginners track. (You can watch all the event’s presentations now on &lt;a href="https://www.youtube.com/user/PerconaMySQL/videos" target="_blank" rel="noopener noreferrer"&gt;Percona’s YouTube channel.&lt;/a&gt;) This talk was a good overview and meant for anyone who had heard of the Apache Cassandra distributed database but wasn’t sure whether it would be suitable for their project or not.&lt;/p&gt;
&lt;p&gt;Database veteran John Schulz has been tinkering with Cassandra for about a decade, and to help anyone get started he gave a whistle-stop tour of the Cassandra ecosystem. He introduced Apache Cassandra by laying out some important characteristics of the database. These include the way Cassandra is designed to handle high traffic volumes, especially writes, and is designed from the ground up for high availability. John briefly talked about the ‘democratized nature’ of the database; how all its nodes are designed to be equal. However, while Cassandra is designed to scale linearly, he stressed that this ability comes with some serious caveats: “You have to understand the way it was designed,” John cautioned an audience of over 500 attendees. “You have to understand how you need to model data with it, otherwise its linear scaling will go out the window.”&lt;/p&gt;
&lt;h2 id="not-relational"&gt;Not relational&lt;/h2&gt;
&lt;p&gt;Cassandra has many strengths, but it’s not suitable for every use case. For instance, John said he would discourage using Cassandra for analytics as “it’s not a massive parallel processing engine.”&lt;/p&gt;
&lt;p&gt;He also highlighted the fact that Cassandra uses an SQL-like language called the &lt;a href="https://en.wikipedia.org/wiki/Apache_Cassandra#Cassandra_Query_Language" target="_blank" rel="noopener noreferrer"&gt;Cassandra Query Language (CQL)&lt;/a&gt;, which despite its similarities is definitely not SQL. Similarly, while you can add &lt;a href="https://spark.apache.org/sql/" target="_blank" rel="noopener noreferrer"&gt;Spark SQL&lt;/a&gt; to Cassandra and perform &lt;a href="https://en.wikipedia.org/wiki/Join_%28SQL%29" target="_blank" rel="noopener noreferrer"&gt;JOINs&lt;/a&gt;, Cassandra is not a relational database and shouldn’t be used as one. He also warned against implementing &lt;a href="https://en.wikipedia.org/wiki/Record_locking" target="_blank" rel="noopener noreferrer"&gt;locks&lt;/a&gt; in Cassandra. Apparently, he’s seen many customers do this only to regret it later. In fact, he suggested that if using a lock is essential for your application, then perhaps you shouldn’t be looking at Cassandra.&lt;/p&gt;
&lt;p&gt;After cautioning his virtual attendees, John shared some of the circumstances and use cases where Cassandra does excel. As a general principle, Cassandra works best in environments where the database writes exceed the reads by a large margin and where the sheer amount of traffic would normally overwhelm a traditional relational database.&lt;/p&gt;
&lt;p&gt;By way of example, John said that Cassandra works well for tracking ad hit rates. The database is also popularly used in the IoT industry for capturing raw data from devices, such as fitness trackers and vehicles. Also, many phone companies in North America are using Cassandra for customer service and a number of companies use it to provide metrics collection as a service.&lt;/p&gt;
&lt;h2 id="first-steps"&gt;First steps&lt;/h2&gt;
&lt;p&gt;Before getting started with Cassandra, John strongly recommended setting aside some time to design your database: “Badly designed data models produce badly performing databases.”&lt;/p&gt;
&lt;p&gt;He suggested a couple of resources that would help with that including an &lt;a href="https://cassandra.apache.org/doc/latest/data_modeling/" target="_blank" rel="noopener noreferrer"&gt;overview of the topic from the Apache Cassandra project itself&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Next, he shared some of the questions you need to ask yourself before using Cassandra. For instance, what’s your main purpose for using Cassandra? The answer to that question will have a bearing on how you want to run Cassandra. That’s because the database offers plenty of options that range from a traditional data center environment to various cloud solutions. You can run Cassandra on your laptop, which is a good environment for tinkering with it. For a production environment though you can deploy Cassandra on physical servers, or inside VMs, or wrapped in containers.&lt;/p&gt;
&lt;p&gt;The next piece of the puzzle is to decide on a Cassandra flavour or distribution. John rounded up some of the most popular including Apache Cassandra, DataStax Enterprise, Scylla Open Source and Enterprise, Yugabyte, CosmosDB, Amazon Keyspaces, and Elassandra. He spent some time explaining them all and the key differences between them, but besides Apache Cassandra and DataStax Enterprise, he classified all other solutions as Cassandra API upstarts that look and behave like Cassandra, but aren’t exactly Cassandra under the covers. He was particularly excited about Elassandra, the mashup of Elasticsearch and Cassandra and pointed out that the former’s global index helps negate the limitations of Cassandra’s secondary indexes that are local-only by default.&lt;/p&gt;
&lt;h2 id="at-your-service"&gt;At your service&lt;/h2&gt;
&lt;p&gt;You can run Cassandra on various platforms, though John recommended using one of the Database-as-a-Service (DBaaS) providers as he felt it made very little sense to do it any other way. He briefly talked about some of the most popular services including InstaClustr, DataStax Astra, Amazon KeySpaces, Scylla Cloud, IBM Compose for Scylla, YugaByte Cloud, and CosmosDB.&lt;/p&gt;
&lt;p&gt;The main advantage of these services, John felt, was that they get you a Cassandra cluster instantly. They also come with lots of useful features such as automatic backups, automatic repairs, and monitoring. However, if you do want to deploy Cassandra on your own hardware, John supplied a list of things you’ll want to think about.&lt;/p&gt;
&lt;p&gt;He suggested using an automation tool, such as Chef, Puppet, or Ansible, to build your clusters. He also recommended using a log aggregator and monitoring the cluster in real time. He cautioned anyone looking to deploy Cassandra never to run an installation with a single node. John said that while you can do this, you won’t be able to observe all of the interactions that go on between the nodes, which will eventually affect the real-world performance and behaviour of your application. Instead, John recommended running a cluster of at least n nodes, where n equals your replication factor. This is a talk in its own right, but, in essence, he suggested a replication factor of at least three.&lt;/p&gt;
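&lt;p&gt;As a minimal sketch of that advice (the keyspace name is illustrative, not from the talk), the replication factor is set per keyspace in CQL. &lt;code&gt;SimpleStrategy&lt;/code&gt; is fine for a single-data-center lab; production clusters typically use &lt;code&gt;NetworkTopologyStrategy&lt;/code&gt; instead:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- Illustrative only: keep three copies of every row, so the cluster
-- needs at least three nodes to honour the replication factor
CREATE KEYSPACE IF NOT EXISTS demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
&lt;/code&gt;&lt;/pre&gt;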
&lt;p&gt;In the final section of his talk he covered the two mechanisms for deploying Cassandra: inside a Docker container and with the &lt;a href="https://github.com/riptano/ccm" target="_blank" rel="noopener noreferrer"&gt;Cassandra Cluster Manager (CCM)&lt;/a&gt;. Written in Python, CCM makes starting a Cassandra cluster on your laptop or desktop, or even a Raspberry Pi, just as easy as using a Database-as-a-Service option in the cloud, John said. He ended by detailing the procedure for each mechanism, showing how you can spin up a Cassandra cluster in a matter of minutes. You can watch the whole of &lt;a href="https://www.percona.com/resources/videos/cassandra-where-and-how-john-schulz-percona-live-online-2020" target="_blank" rel="noopener noreferrer"&gt;John Schulz’s Apache Cassandra talk&lt;/a&gt; through the link.&lt;/p&gt;</content:encoded>
      <author>Mayank Sharma</author>
      <category>Mayank Sharma</category>
      <category>Cassandra</category>
      <category>DBaaS</category>
      <category>DevOps</category>
      <category>Docker</category>
      <category>Events</category>
      <category>Open Source Databases</category>
      <category>Percona Live</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/PLO-Card-Cassandra_hu_7c6defe97179165d.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/PLO-Card-Cassandra_hu_eb3f6bd7bb1747b9.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live ONLINE Talk: Optimize and Troubleshoot MySQL using Percona Monitoring and Management by Peter Zaitsev</title>
      <link>https://percona.community/blog/2020/06/23/percona-live-online-talk-optimize-and-troubleshoot-mysql-using-percona-monitoring-and-management-by-peter-zaitsev/</link>
      <guid>https://percona.community/blog/2020/06/23/percona-live-online-talk-optimize-and-troubleshoot-mysql-using-percona-monitoring-and-management-by-peter-zaitsev/</guid>
      <pubDate>Tue, 23 Jun 2020 15:09:57 UTC</pubDate>
      <description>Incorporating a database in an organization is a complicated task that involves a lot of people besides the DBAs. This is something that Peter Zaitsev, co-founder and CEO of Percona, understands very well.</description>
      <content:encoded>&lt;p&gt;Incorporating a database in an organization is a complicated task that involves a lot of people besides the DBAs. This is something that Peter Zaitsev, co-founder and CEO of Percona, understands very well.&lt;/p&gt;
&lt;p&gt;In the build-up to his hands-on presentation with the open source &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management&lt;/a&gt; (PMM) platform, Peter spoke about how inducting a database in an organization is a constant tussle between the developers, the management and the DBAs. While the developers want a solution that just works, the managers don’t want the database to break the bank: “The DBAs just want to make sure they don’t spend too much time keeping them both happy,” he shared.&lt;/p&gt;
&lt;p&gt;This is why, Peter argues, DBAs want to make sure the databases in their realm are optimized for performance. Like security, performance optimization is an ongoing process that begins during development and continues into the production environment.&lt;/p&gt;
&lt;h2 id="cover-all-bases"&gt;Cover all bases&lt;/h2&gt;
&lt;p&gt;Based on his experience, Peter talked about the two factors that impact the performance of a database. On the one hand, you have applications, which are responsible for the volume and type of queries they generate. If an application sends an unoptimized query, it can put the database under unnecessary strain. On the other hand, you have hardware resources that, when stretched to the limit, can delay even the simplest of queries.&lt;/p&gt;
&lt;p&gt;Peter pointed out that PMM takes both these aspects into consideration, before launching into his hands-on demo of the latest version of the platform, PMM 2. He began with an overview of the new features in the release, particularly its ability to look at groups of servers instead of a single server, something that Peter refers to as “treating the servers as a herd and not as pets”.&lt;/p&gt;
&lt;p&gt;He began the demo with the Query Analytics dashboard that shows all the database queries running across all deployed servers. He ran through the various metrics on which DBAs can sort the queries to get different kinds of results, such as the list of queries that run most frequently or the queries that take the longest to complete.&lt;/p&gt;
&lt;p&gt;As looking at averages doesn’t usually make a lot of sense for performance optimization, Peter demonstrated how you can use PMM 2 to drill down to particular problematic queries. He used the platform to pinpoint a particular inefficient query that was returning one row on average, but only after scanning about 100,000 rows leading to degradation in performance.&lt;/p&gt;
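&lt;p&gt;The pattern Peter pinpointed usually comes down to a missing index. A hypothetical MySQL example (the table and column names are invented for illustration, not taken from the demo):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- Returns roughly one row, but without a suitable index MySQL must
-- scan the table; EXPLAIN would report a very high number of examined rows
SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';

-- A composite index lets the server jump straight to the matching rows
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);
&lt;/code&gt;&lt;/pre&gt;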
&lt;h2 id="a-360-degree-view"&gt;A 360-degree view&lt;/h2&gt;
&lt;p&gt;He also demonstrated how DBAs can visualize the performance of the database using different parameters. For instance, you can sort it by users, which is particularly useful if you’ve followed the good practice of configuring different apps to run with different users. Viewing loads by users will help you identify the applications that are consuming the most resources.&lt;/p&gt;
&lt;p&gt;Next, he headed to the Node Summary dashboard, which is useful for observing the usage of the hardware resources on the servers. This dashboard tracks several additional parameters that help DBAs make more sense of the resource usage. For instance, instead of just CPU usage, you’re also able to see CPU saturation and max core utilization. The latter is particularly useful since single queries in MySQL can only execute on one CPU core. Peter showed how you can use this dashboard to make sure your multi-core CPU is being used efficiently.&lt;/p&gt;
&lt;p&gt;He ran through similar examples with memory utilization and Disk IO throughput, both of which display additional parameters to help you ensure the concerned resource is being used efficiently. He also demonstrated the MySQL Instance summary dashboard that displays various information about the MySQL servers as well as the InnoDB Details dashboard, which visualizes all kinds of InnoDB activity and is useful for identifying and diagnosing bottlenecks. One metric that Peter pointed out was InnoDB pending IOs, which can be very valuable for weeding out storage bottlenecks, especially when using cloud storage.&lt;/p&gt;
&lt;h2 id="advanced-usage"&gt;Advanced usage&lt;/h2&gt;
&lt;p&gt;One of the interesting features of PMM 2 is that you can ask it to &lt;a href="https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/" target="_blank" rel="noopener noreferrer"&gt;use ClickHouse&lt;/a&gt; to store query performance data. Peter demoed how you can access ClickHouse on PMM 2 and showed off a dashboard he built on top of it; the dashboard isn’t yet part of the platform, but he promised to share it publicly soon.&lt;/p&gt;
&lt;p&gt;PMM 2 is &lt;a href="https://www.percona.com/blog/2019/11/22/designing-grafana-dashboards/" target="_blank" rel="noopener noreferrer"&gt;powered by Grafana&lt;/a&gt; and Peter rounded up the presentation by sharing some interesting tips and tricks for using Grafana, such as ad-hoc filtering, which you can use to filter a dashboard by any of the defined clauses. For instance, Peter showed how you can use it to look at all the queries that send a maximum of ten rows.&lt;/p&gt;
&lt;p&gt;One of the new additions in PMM 2 is the Security Threat tool and Peter briefly ran through this during his demonstration. The tool runs daily checks for common database security issues and flags any non-compliance.&lt;/p&gt;
&lt;p&gt;Fielding questions, Peter clarified that while he focussed on MySQL, PMM 2 supports MariaDB as well. PMM monitoring doesn’t add much overhead and at the end of the day will surely help you save a lot more resources than it consumes.&lt;/p&gt;
&lt;p&gt;You can &lt;a href="https://www.percona.com/resources/videos/optimize-and-troubleshoot-mysql-using-pmm-2-peter-zaitsev-percona-live-online-2020" target="_blank" rel="noopener noreferrer"&gt;watch Peter’s presentation&lt;/a&gt; and follow along on the publicly accessible &lt;a href="https://pmmdemo.percona.com/" target="_blank" rel="noopener noreferrer"&gt;PMM 2 demo server&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Mayank Sharma</author>
      <category>Mayank Sharma</category>
      <category>DevOps</category>
      <category>MariaDB</category>
      <category>Monitoring</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>Percona Monitoring and Management</category>
      <category>PMM</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_1395b6e2186771a6.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_c0e4c47b55fa22a9.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live ONLINE Opening Keynote: State of Open Source Databases by Peter Zaitsev</title>
      <link>https://percona.community/blog/2020/06/15/percona-live-online-opening-keynote-state-of-open-source-databases-by-peter-zaitsev/</link>
      <guid>https://percona.community/blog/2020/06/15/percona-live-online-opening-keynote-state-of-open-source-databases-by-peter-zaitsev/</guid>
      <pubDate>Mon, 15 Jun 2020 15:44:28 UTC</pubDate>
      <description>Peter Zaitsev is CEO and co-founder of Percona. He opened Percona Live ONLINE with a keynote which took a look at the historical foundations of open source software and how they have shaped the field today.</description>
      <content:encoded>&lt;p&gt;Peter Zaitsev is CEO and co-founder of Percona. He opened &lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live ONLINE&lt;/a&gt; with a keynote which took a look at the historical foundations of open source software and how they have shaped the field today.&lt;/p&gt;
&lt;h2 id="the-history-of-open-source-software"&gt;The history of open source software&lt;/h2&gt;
&lt;p&gt;In the early days of computing, software and hardware were bundled together. While the term “open source” hadn’t yet been coined, software was open by default. According to Peter: “One of the big reasons for that was copyrights on software was not a thing, because the software was not really a thing before that. Laws tend to move slower and only kind of catch up with technological development.”&lt;/p&gt;
&lt;p&gt;The source code for software was shipped with early hardware. Early adopters - typically from universities - would modify the code to fix bugs and add needed functionality, akin to the advanced open source users of today. The changes back then were openly shared under academic principles.&lt;/p&gt;
&lt;h2 id="enter-antitrust-in-the-1970s"&gt;Enter antitrust in the 1970s&lt;/h2&gt;
&lt;p&gt;In the late 1960s and early 1970s, computing was growing into a significant industry in which IBM controlled the vast majority of the mainframe market. This resulted in an antitrust lawsuit against IBM in the US, which responded by unbundling software from hardware.&lt;/p&gt;
&lt;p&gt;Congress then amended the Copyright Act to make software copyrightable, which created a software industry distinct from hardware. Software became a major class of intellectual property.&lt;/p&gt;
&lt;h2 id="the-1980s-and-1990s-the-era-of-romantic-open-source-and-free-software"&gt;The 1980s and 1990s: The Era of Romantic Open source (and free) software&lt;/h2&gt;
&lt;p&gt;After the development of copyright for software, new projects started that rejected applying copyright and restrictive licenses to their development. Peter asserted: “I would call that an era of romantic open source software. Right? Because a lot of software was started by hobbyists or, as Linus Torvalds put it, ‘just for fun.’”&lt;/p&gt;
&lt;h2 id="the-2000s-a-dramatic-decade-for-oss"&gt;The 2000s: A dramatic decade for OSS&lt;/h2&gt;
&lt;p&gt;The 2000s was a dramatic decade for open source software, partly in response to the dot-com crash. “A lot of companies needed ways to build their solutions very efficiently, and Linux, Apache, MySQL, and a lot of other open source options allowed them to do just that,” said Peter.&lt;/p&gt;
&lt;p&gt;Prior to 2000, big OSS companies were limited to Red Hat, which went through an IPO in the late 1990s. Enter the 2000s, and Sun acquired MySQL for $1 billion, which was hugely significant to the OSS market. It was during this period that Steve Ballmer famously asserted, “Linux is a cancer that attaches itself in an intellectual property sense to everything it touches.”&lt;/p&gt;
&lt;p&gt;In the 2000s, many businesses started to recognize the value of open source software, and an increasing number of large enterprises began to adopt an open-source-first mentality. This included adoption by governments “to help them avoid reliance on companies from other countries,” according to Peter.&lt;/p&gt;
&lt;p&gt;The use of open source software had a range of benefits for both companies and for developers as individuals. For enterprise customers, moving to open source resulted in lower direct costs both short term and long term. As for developers, using open source became the preference for many of them, as it was easier to experiment and get familiar with tools. Over time, it became easier to find developers that were proficient in open source technologies compared to proprietary software. This led to better productivity and faster innovation. Customers were also able to avoid the historical barrier of vendor lock-in.&lt;/p&gt;
&lt;p&gt;The decade then led to a new generation of open source companies being created. However, the fact that many of these were venture capital funded led to the need for fast, high returns on those investments. Thus, many of these companies found they needed to build a monopoly, spreading the pervasive message about the advantages of open source while also increasing “stickiness” for their own businesses.&lt;/p&gt;
&lt;h2 id="romantic-vs-business-values-lead-to-not-quite-open-source"&gt;Romantic vs business values lead to ’not quite open source’&lt;/h2&gt;
&lt;p&gt;For Peter, the time of new open source companies is a new challenge. “If you really look at those approaches to business values, many are in conflict with the early stage of romantic open-source software, and the values and ideas about sharing and letting other people innovate on your software, because hey, that actually can create competition for you,” he explained.&lt;/p&gt;
&lt;p&gt;A lot of business models were evolving from open source to ’not quite open-source’. Some of those models would be open source eventually, such as shared source licenses and open-source compatible software, which is used by a lot of cloud vendors. Peter noted that vendors would spruik this by saying, “You can move from open source to our open-source compatible software. You probably would have a very hard time moving back, but we don’t talk about that.”&lt;/p&gt;
&lt;p&gt;On the positive side, the availability of funding meant there were a lot of investments and a high pace of innovation in the software around the open source community. On the negative side, the market became more complicated, with the challenge of differentiating between open source software and ‘not quite open’ software that didn’t provide the same value as truly open source software.&lt;/p&gt;
&lt;h2 id="the-2010s-the-rise-of-the-cloud-unique-challenges-and-opportunities-for-oss"&gt;The 2010s: The rise of the cloud: unique challenges and opportunities for OSS&lt;/h2&gt;
&lt;p&gt;While AWS was started in the previous decade, the 2010s were critical for open source databases - specifically, around the cloud and open source. Peter asserted, “Cloud really hijacked the GPL license. Before the Software as a Service deployment model, software vendors who did not want others to build commercial software on their solutions could just use the GPL. Not anymore. Now, AWS probably makes more money on MySQL than Oracle does. And they can just use the GPL software and don’t have to pay Oracle anything.”&lt;/p&gt;
&lt;p&gt;Unlike the 1970s, cloud services are now bundling hardware usage costs with software. This meant open source software could no longer benefit from a zero price effect.&lt;/p&gt;
&lt;p&gt;This was important psychology, as Peter noted: “Previously I would have to buy a server separately. And then I have a choice, either I could go and pay thousands of dollars to license Oracle to run on that server, or I could go ahead and download Postgres and use it for free.  That is not the case anymore. It just becomes a case of a difference in the price which may not be very well understood.”&lt;/p&gt;
&lt;h2 id="market-acceptance-of-not-fully-open-source-software-models"&gt;Market acceptance of Not fully open source software models&lt;/h2&gt;
&lt;p&gt;Peter asserted that acceptance of not fully Open Source Software models is on the rise. “It’s very important for us as an open source database community to really educate folks in the market about the difference of an open source software offering and one which is marketed using an open source term but not providing the true values of open source software.”&lt;/p&gt;
&lt;h2 id="2020s-great-momentum-for-commercial-open-source"&gt;2020s: Great Momentum for Commercial Open Source&lt;/h2&gt;
&lt;p&gt;It’s a fantastic time for Commercial Open Source, with many companies getting billion dollar valuations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Red Hat - $24B (acquired by IBM)&lt;/li&gt;
&lt;li&gt;MongoDB - $11.2B (current valuation)&lt;/li&gt;
&lt;li&gt;GitHub - $7.5B (acquired by Microsoft)&lt;/li&gt;
&lt;li&gt;Databricks - $6.2B (current valuation)&lt;/li&gt;
&lt;li&gt;Elastic - $5.8B (current valuation)&lt;/li&gt;
&lt;li&gt;Hashicorp - $5B (current valuation)&lt;/li&gt;
&lt;li&gt;Confluent - $4.5B (current valuation)&lt;/li&gt;
&lt;li&gt;Cloudera - $2.5B  (current valuation)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Peter commented, “Because of the success of MongoDB, Elastic and some other open source companies, we see a lot of investment and a lot of innovation in the Open Source Database space.” This includes new technologies like PlanetScale, InfluxDB, YugabyteDB, and others. It isn’t limited to relational databases; it includes multi-model cloud databases, graph databases, and time-series-focused databases.&lt;/p&gt;
&lt;h2 id="covid-19-pandemic"&gt;COVID-19 Pandemic&lt;/h2&gt;
&lt;p&gt;The pandemic has led to an acceleration of digital transformation, including online service delivery and digital education. This comes alongside a need for lower costs and/or cost-cutting due to the predicted economic slowdown. This can be another reason for open source success, as companies have to innovate while keeping their costs down. These two pressures will encourage companies both to consider open source and to keep a close eye on the cost of running those systems, whether on existing hardware or in the cloud.&lt;/p&gt;
&lt;h2 id="dbaas"&gt;DBaaS&lt;/h2&gt;
&lt;p&gt;Today database as a service (DBaaS) is a preferred way to consume open source database software. According to Peter, “This allows the development team to use multiple database technologies more easily, matching them to application needs because they don’t really need to install and maintain them.”&lt;/p&gt;
&lt;p&gt;However, Peter did point to one problem around DBaaS that can affect the success of implementation for companies and for developer teams. For many use cases, DBaaS is commonly marketed by cloud vendors as ‘fully managed.’ “Because of that, we don’t have to get any DBAs or other database experts on the team. However this ‘fully managed’ approach still needs to be configured for security, somebody still needs to advise us on the schema, help us to design the queries, etc,” explained Peter.&lt;/p&gt;
&lt;p&gt;The rise of DBaaS has meant that developers can choose and use databases directly without the supervision of database professionals. This can cause various bad outcomes ranging from security leaks to very inefficient delivery of database services over time. For developers that assume their DBaaS provider will deliver more insights or advice, this can lead to wasted time and budget.&lt;/p&gt;
&lt;h2 id="dbaas-and-multiverse"&gt;DBaaS and Multiverse&lt;/h2&gt;
&lt;p&gt;Peter then provided an overview of the future as he sees it: “From an open source prism, you can think of the cloud as a commodity with many compatible implementations. Or think about highly differentiated clouds, where you have proprietary solutions available from a single vendor. The latter can be a huge vendor lock-in.  However, many are trying to avoid lock-in.”&lt;/p&gt;
&lt;p&gt;Thus, he said, we are increasingly seeing multiple database technologies: multiple environments, hybrid cloud, and multi-cloud. Many proprietary solutions are available for cloud and hybrid environments, like Google Anthos, VMware, and AWS Outposts. Simultaneously, Kubernetes has emerged as the leading open source API for hybrid and public clouds.&lt;/p&gt;
&lt;p&gt;Kubernetes is ubiquitous. There are proprietary solutions to simplify Kubernetes management, and the Kubernetes interface is supported by multi- and hybrid-cloud platforms. This is relevant to open source databases, and Peter believes we should be focusing on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Adapting Cloud Native deployments in Multi and Hybrid Cloud&lt;/li&gt;
&lt;li&gt;Kubernetes as the API of choice for Open Source database deployments&lt;/li&gt;
&lt;li&gt;Making things simple and comparable to integrated DBaaS Solutions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An important question to ask is: “If I am choosing DBaaS as my software consumption model, how do I get the most value from what Open Source Software provides?”&lt;/p&gt;
&lt;p&gt;According to Peter, Percona is embracing the cloud-native and multi-cloud approach through Kubernetes. Percona has released the &lt;a href="https://www.percona.com/doc/kubernetes-operator-for-pxc/index.html" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Operator for XtraDB Cluster&lt;/a&gt; and the &lt;a href="https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html" target="_blank" rel="noopener noreferrer"&gt;Kubernetes Operator for Percona Server for MongoDB&lt;/a&gt;. “We are also working through &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management&lt;/a&gt; to really help you to reduce the friction and run the open source database successfully in those cloud environments and on-premises,” he said.&lt;/p&gt;
&lt;p&gt;Peter also advised attendees to take the time to fill out the &lt;a href="https://www.percona.com/blog/2020/03/31/share-your-database-market-insight-by-completing-perconas-annual-survey/" target="_blank" rel="noopener noreferrer"&gt;Open Source Data Management Survey&lt;/a&gt;. Peter closed the keynote with: “Finally, I want to say Happy 25th Birthday to MySQL. Great job, MySQL team!”&lt;/p&gt;
&lt;p&gt;You can also watch Peter’s &lt;a href="https://www.percona.com/resources/videos/state-open-source-database-plo2020" target="_blank" rel="noopener noreferrer"&gt;keynote&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Cate Lawrence</author>
      <category>AWS</category>
      <category>DBaaS</category>
      <category>Kubernetes</category>
      <category>MariaDB</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>opensource</category>
      <category>Percona</category>
      <category>PostgreSQL</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_1395b6e2186771a6.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_c0e4c47b55fa22a9.jpg" medium="image"/>
    </item>
    <item>
      <title>Matt Yonkovit: It's a crazy world, and these trends are disrupting and breaking your database infrastructure</title>
      <link>https://percona.community/blog/2020/06/12/matt-yonkovit-its-a-crazy-world-and-these-trends-are-disrupting-and-breaking-your-database-infrastructure/</link>
      <guid>https://percona.community/blog/2020/06/12/matt-yonkovit-its-a-crazy-world-and-these-trends-are-disrupting-and-breaking-your-database-infrastructure/</guid>
      <pubDate>Fri, 12 Jun 2020 15:30:28 UTC</pubDate>
      <description>Matt Yonkovit, Chief Experience Officer at Percona, presented a session at this year’s Percona Live ONLINE, sharing initial insights from the Open Source Data Management Survey 2020. The survey provides a critical insight first-hand into how enterprises of all sizes are using, developing, and troubleshooting open source database software. The full data will be released later this year with a detailed analysis.</description>
      <content:encoded>&lt;p&gt;Matt Yonkovit, Chief Experience Officer at Percona, presented a session at this year’s Percona Live ONLINE, sharing initial insights from the &lt;a href="https://www.percona.com/open-source-data-management-software-survey" target="_blank" rel="noopener noreferrer"&gt;Open Source Data Management Survey 2020.&lt;/a&gt; The survey provides a critical insight first-hand into how enterprises of all sizes are using, developing, and troubleshooting open source database software. The full data will be released later this year with a detailed analysis.&lt;/p&gt;
&lt;h2 id="he-who-controls-the-application-controls-the-stack"&gt;He who controls the application controls the stack&lt;/h2&gt;
&lt;p&gt;Matt started by discussing the challenge that developers face: “Those building it are not the ones managing it. And those who are building it are the ones deciding what to put in it.”&lt;/p&gt;
&lt;p&gt;Last year, a survey asked who gets to choose the database technology at companies. Most of the people choosing database technology sit outside the database and infrastructure teams: more architects (32%) and developers (26%) are choosing the tech than management (17%) or DBAs (23%).&lt;/p&gt;
&lt;p&gt;However, the challenge is that the DBAs are inheriting technology from the development stack, and all of a sudden they have to support it. Matt said he likes to call this, “The technology inheritance problem: So now you’ve got a team of people who are not necessarily skilled at managing those technologies all of a sudden being responsible for new technologies.”&lt;/p&gt;
&lt;h2 id="the-multiverse-of-technology"&gt;The multiverse of technology&lt;/h2&gt;
&lt;p&gt;Enter the multiverse of technology: multi-database, multi-cloud, multi-location, multi-skilled. Matt explained this as follows:&lt;/p&gt;
&lt;p&gt;“Instead of saying we’re going to run on AWS and we’re going to consolidate on a single database or a set of databases, you’re running on multiple databases, you’re running in multiple locations, you’re running multi skilled people, because you’re no longer, you know, an expert Oracle DBA on its own. You’re a DBA of everything. And it’s leading to these multi-database environments.”&lt;/p&gt;
&lt;h2 id="the-database-footprint-is-growing"&gt;The database footprint is growing&lt;/h2&gt;
&lt;p&gt;In last year’s survey, more than 92% of companies were running more than one database, and 89% have more than one open source database in place. This year the number of companies that reported having between 100 and 1000 database instances in place grew by 40%. Those reporting over 1000 database instances grew by more than 50%. Matt noted:&lt;/p&gt;
&lt;p&gt;“Now we’ve got environments that have thousands of databases that have to be managed and supported, and that means that the care and feeding of each database is very difficult.”&lt;/p&gt;
&lt;p&gt;This is partly attributable to new technologies like machine learning and an insatiable need for more data to make better decisions. The footprints of databases continue to grow: only 3.5% of respondents shrank their environment and 14% stayed the same, while the vast majority, 80%, saw growth, with almost 39% reporting massive growth in the size of their environment.&lt;/p&gt;
&lt;h2 id="enter-the-multiverse"&gt;Enter the multiverse&lt;/h2&gt;
&lt;p&gt;The deluge of data and more databases leads to a multi-cloud space. In 2019, 30% reported that they were running a multi-cloud environment. In 2020 it’s 39%. Matt noted, “Some of the cloud providers are now taking notice. They’re investing in tools to let you run their platform across other competitors’ platforms.” The growth also exists, albeit more slowly, in the hybrid space: in 2019, 41% were hybrid, and in 2020 it’s 44%.&lt;/p&gt;
&lt;p&gt;So we’re seeing more databases, more data, more providers, more locations, more hybrid installations. And so, what are the consequences? “It means for a lot of us who have to work on these systems, we have less expertise in any one of them, because we don’t have the time to not only enhance our skills but to enhance the systems that we’re supporting and ensure that they’re properly managed and set up. We’ve less time per application, and we just have less time available,” continued Matt.&lt;/p&gt;
&lt;p&gt;This means more mistakes are happening, more automated cascading issues, more outages, more security issues, more complexity, more cost, and more help is needed.&lt;/p&gt;
&lt;h2 id="how-does-the-industry-respond"&gt;How does the industry respond?&lt;/h2&gt;
&lt;p&gt;Matt asserted: “There’s a pervasive debate between, ‘Do we need to automate? Or how much do we need to automate? How much do we not need people? How much do we need to focus on, the automation of things, and the AI versus bringing in experts?’ We are looking at DBaaS versus the need for DBAs, and we still need experts and people who know what they are doing.”&lt;/p&gt;
&lt;p&gt;“We need to ensure that we still have the tools and the skill set to address these problems as they occur correctly. Otherwise, we just make more problems.”&lt;/p&gt;
&lt;h2 id="dbaas"&gt;DBaaS&lt;/h2&gt;
&lt;p&gt;According to Matt: “Database as a service (DBaaS) is probably one of the best inventions to happen to databases in the last ten years.” It enables developers to move more quickly and overcomes all kinds of skill gaps. However, while it helps, it does not eliminate the need for understanding, good tools, DBAs, and expertise.&lt;/p&gt;
&lt;h2 id="what-keeps-you-up-at-night"&gt;What keeps you up at night?&lt;/h2&gt;
&lt;p&gt;According to the respondents of this year’s survey, particular challenges keep developers up at night:&lt;/p&gt;
&lt;p&gt;The biggest is downtime (31%), followed by fixing unforeseen issues (17%) and security issues (15%). Bad performance and query issues are insomnia-inducing for 13%, while staffing issues and a lack of resources challenge 9% of respondents.&lt;/p&gt;
&lt;h2 id="problems-happen-everywhere"&gt;Problems happen everywhere&lt;/h2&gt;
&lt;p&gt;The survey further found that problems happen everywhere, whether you’re in the cloud or not:&lt;/p&gt;
&lt;p&gt;62% of those in the cloud had performance issues, versus 54% of non-cloud users. Reports of overworked staff increase by 10 points when DBaaS is factored in, from 19% to 29%. According to Matt: “My speculation is when we move to a database service, we move those resources to do other things. And when database problems occur, they’ve got 17 other jobs to work on.”&lt;/p&gt;
&lt;h2 id="configuration-errors-a-significant-cause-of-data-breaches"&gt;Configuration errors a significant cause of data breaches&lt;/h2&gt;
&lt;p&gt;Outages and slowdowns remain a headline-grabbing problem. &lt;a href="https://www.cisomag.com/db8151dd-an-untraceable-data-breach-22-mn-emails-compromised/" target="_blank" rel="noopener noreferrer"&gt;News this week&lt;/a&gt; reported the hacking of an open Elasticsearch database containing around 22 million email records. &lt;a href="https://enterprise.verizon.com/resources/reports/dbir/" target="_blank" rel="noopener noreferrer"&gt;Research&lt;/a&gt; by Verizon reveals that the fastest-growing cause of data breaches is configuration errors.&lt;/p&gt;
&lt;h2 id="how-many-people-choose-to-scale-their-database-via-credit-card"&gt;How many people choose to scale their database via credit card?&lt;/h2&gt;
&lt;p&gt;From a spend perspective, survey respondents were asked: are you spending at plan, below plan, or above plan? About 24% were above plan. 33% of those using DBaaS and Cloud were above plan.&lt;/p&gt;
&lt;p&gt;When asked how often they had to upgrade their database instances to something bigger, respondents gave significant answers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;0 times - 11%&lt;/li&gt;
&lt;li&gt;1-3 times - 40.4%&lt;/li&gt;
&lt;li&gt;4-9 times - 28.6%&lt;/li&gt;
&lt;li&gt;10+ times - 19.5%&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Matt stated that he believes the following situation is more common than it should be: “Most of these can be avoided by fixing performance problems. If we don’t look for those performance issues, then we’re going to fix them by paying more. And that’s what a lot of people end up doing.”&lt;/p&gt;
&lt;h2 id="unexpected-costs"&gt;Unexpected costs&lt;/h2&gt;
&lt;p&gt;Several survey respondents have experienced unexpected costs, which increase as software complexity increases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Non-public cloud users - 8% reported unexpected costs&lt;/li&gt;
&lt;li&gt;Public cloud users - 10% reported unexpected costs&lt;/li&gt;
&lt;li&gt;Public cloud DBaaS users - 19% reported unexpected costs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;“We need better automation, and we need smarter tools, we need better education, better security, better performance, we need to make us all more efficient and be able to solve these problems that come up. It’s very, very important,” commented Matt.&lt;/p&gt;
&lt;h2 id="percona-monitoring-and-management"&gt;Percona Monitoring and Management&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management&lt;/a&gt; is the company’s free and open source tool that tackles this, providing a single interface to reduce complexity. Matt shared this as background: “We want a simplified management system, where we can take that complexity and give you the ability to reduce the complexity with it.”&lt;/p&gt;
&lt;h2 id="matts-selfish-security-goal-and-a-simple-solution"&gt;Matt’s selfish security goal and a simple solution&lt;/h2&gt;
&lt;p&gt;When discussing databases and security, Matt provided a very personal goal for improving the current situation. He lamented, “I don’t need more credit monitoring in response to database breaches, I am good until the year 2082!”&lt;/p&gt;
&lt;p&gt;Matt has a simple solution: “I can solve more than 50% of the data breaches that exist now. And I can do it in one line of code: Set your password! db.changeUserPassword(username, password). It is the Change Password command for MongoDB. Mongo and Elastic are currently the two most breached databases. Most of those breaches are because nobody set a password!”&lt;/p&gt;
&lt;p&gt;Percona Monitoring and Management 2.6 includes the first version of Percona’s security threat tool:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This provides checks for basic security problems and the most common issues, like missing passwords or not running the latest version&lt;/li&gt;
&lt;li&gt;More checks will be added over the next several months&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="the-first-release-of-percona-distribution-for-mongodb"&gt;The first Release of &lt;a href="https://www.percona.com/software/mongodb" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for MongoDB:&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;On the first Distribution that Percona has released for MongoDB, Matt shared: “We take all the best of the open-source components and bundle it into one there. And we’re also now offering &lt;a href="https://www.percona.com/services/managed-services/percona-managed-database-services" target="_blank" rel="noopener noreferrer"&gt;managed services for MongoDB&lt;/a&gt;.”&lt;/p&gt;
&lt;p&gt;Percona also currently has a Distribution for PostgreSQL, with a Distribution for MySQL coming up. Matt also mentioned the world’s most trusted high availability solution for MySQL, &lt;a href="https://www.percona.com/software/mysql-database/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Matt described Percona’s approach as looking to remove the problems for companies running multiple databases: “We take all of those features and fixes and bundle them on top of MySQL Community to make it truly an enterprise-ready system.”&lt;/p&gt;
&lt;h2 id="helping-you-to-scale-and-simplify"&gt;Helping you to scale and simplify:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/software/mysql-database/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;XtraDB Cluster 8&lt;/a&gt; is faster and more scalable. There are new Kubernetes operators with easier management.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/software/postgresql-distribution" target="_blank" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL&lt;/a&gt; has launched with more performance enhancements to come.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="percona-and-linode-partnership"&gt;Percona and Linode Partnership&lt;/h2&gt;
&lt;p&gt;At the end of the session Matt went through how Percona is partnering with Linode to help bring Linode’s customers an enhanced DBaaS. The community benefits from better operations, better tools, and enhancements that will show up in our distributions.&lt;/p&gt;
&lt;p&gt;Blair Lyon, VP of Marketing at Linode, joined the session to share how he sees this developing:&lt;/p&gt;
&lt;p&gt;“Since 2003, Linode has been helping our clients accelerate innovation by making cloud computing simple, affordable, and accessible for all. We’re leading a growing category of alternative cloud providers with nearly a million worldwide customers and 11 global data centers. And the key to being a true alternative to the big guys is providing the best of breed enterprise solutions and DBaaS is no exception.”&lt;/p&gt;
&lt;p&gt;Finally, Matt encouraged all Percona Live attendees to provide their insight as part of 2020’s Open Source Data Management research report. If you have not yet filled out the &lt;a href="https://www.percona.com/blog/2020/03/31/share-your-database-market-insight-by-completing-perconas-annual-survey/" target="_blank" rel="noopener noreferrer"&gt;Open Source Data Management Survey&lt;/a&gt;, you can still do so. You can also watch Matt’s &lt;a href="https://www.percona.com/resources/videos/trends-are-disrupting-and-breaking-your-db-infrastructure-matt-yonkovit-percona" target="_blank" rel="noopener noreferrer"&gt;keynote&lt;/a&gt;.&lt;/p&gt;
      <author>Cate Lawrence</author>
      <category>Cloud</category>
      <category>DBA Tools</category>
      <category>DBaaS</category>
      <category>Kubernetes</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>Percona Monitoring and Management</category>
      <category>PMM</category>
      <category>Postgres</category>
      <category>PostgreSQL</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/PLO-Card-Matt_hu_b48ba47ad5468a2d.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/PLO-Card-Matt_hu_b4dc25bdfce8547c.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live ONLINE: Anti-cheating tools for massive multiplayer games using Amazon Aurora and Amazon ML services</title>
      <link>https://percona.community/blog/2020/06/11/percona-live-online-anti-cheating-tools-for-massive-multiplayer-games-using-amazon-aurora-and-amazon-ml-services/</link>
      <guid>https://percona.community/blog/2020/06/11/percona-live-online-anti-cheating-tools-for-massive-multiplayer-games-using-amazon-aurora-and-amazon-ml-services/</guid>
      <pubDate>Thu, 11 Jun 2020 14:21:08 UTC</pubDate>
      <description>Would you play a multiplayer game if you discovered other people are cheating? According to a survey by Irdeto, 60% of online games were negatively impacted by cheaters, and 77% of players said they would stop playing a multiplayer game if they think opponents are cheating. Player churn grows as cheating grows.</description>
      <content:encoded>&lt;p&gt;Would you play a multiplayer game if you discovered other people are cheating? According to a survey by Irdeto, 60% of online games were negatively impacted by cheaters, and 77% of players said they would stop playing a multiplayer game if they think opponents are cheating. Player churn grows as cheating grows.&lt;/p&gt;
&lt;p&gt;Stopping this is therefore essential if you want to build and develop your community, which is key to success for today’s gaming companies. This session at &lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live ONLINE&lt;/a&gt; was presented by Yahav Biran, specialist solutions architect, gaming technologies at Amazon Web Services, and Yoav Eilat, Senior Product Manager at Amazon Web Services. They gave a talk and demonstration about anti-cheating tools in gaming based on automation and machine learning (ML).&lt;/p&gt;
&lt;p&gt;Yoav noted that while people might think of ML in terms of text or images, “There’s a considerable percentage of the world’s data sitting in relational databases. How can your application use it to get results and make predictions?”&lt;/p&gt;
&lt;h2 id="six-steps-for-adding-machine-learning-to-an-application"&gt;Six steps for adding Machine Learning to an Application&lt;/h2&gt;
&lt;p&gt;Traditionally, adding ML to an application involves many steps, considerable expertise, and manual work, combining the efforts of an application developer, a database user, and some help from a machine learning data scientist:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select and train ML models&lt;/li&gt;
&lt;li&gt;Write application code to read data from the database&lt;/li&gt;
&lt;li&gt;Format the data for the ML model&lt;/li&gt;
&lt;li&gt;Call a machine learning service to run the ML model on the formatted data&lt;/li&gt;
&lt;li&gt;Format the output for the application&lt;/li&gt;
&lt;li&gt;Load the results to the application&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The result is that most machine learning is done offline by a data scientist in a desktop tool. “We would like to be able to add some code to your game and use the models directly from there,” explained Yahav.&lt;/p&gt;
&lt;p&gt;With multiple databases involved, such as a customer service database or an order management system, or in the case of gaming, doing all of this manually would be a lot of work. “So, we want to see how we can do that in an easier and automated way,” continued Yahav.&lt;/p&gt;
&lt;h2 id="examples-where-cheating-can-occur"&gt;Examples where cheating can occur&lt;/h2&gt;
&lt;p&gt;The duo provided some examples of common cheating behaviour that can occur in games:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Authentication: player authentication in the game, to prove they are who they say they are and that they have the right account&lt;/li&gt;
&lt;li&gt;Transactional data: what the players purchase inside the game, so they either don’t spend funds they don’t have or don’t lose items they purchased legitimately&lt;/li&gt;
&lt;li&gt;Player moves: for example where players in cahoots are walking in front of each other like a human shield&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;“Where you have a player that’s walking in one direction, shooting in the other direction and doing five other things at the same time, then it’s probably a bot,” said Yahav.&lt;/p&gt;
&lt;h2 id="demonstrating-ml-in-action"&gt;Demonstrating ML in action&lt;/h2&gt;
&lt;p&gt;The demo was built on Amazon Aurora, a relational database offered by AWS that is compatible with MySQL and PostgreSQL. The database includes some optimizations and performance improvements, plus a few additional features. It has pay-as-you-go pricing.&lt;/p&gt;
&lt;p&gt;As Yahav explains: “The machine learning capabilities added in 2019 allow you to do a query in your Aurora database and then transfer it to a machine learning service for making a prediction. There’s integration with Amazon SageMaker and Amazon Comprehend, which are two machine learning services offered by AWS. The whole thing was done using SQL queries.&lt;/p&gt;
&lt;p&gt;“Thus, you don’t need to call APIs and there’s no need to write additional code; instead of adding a separate step, you can just write a statement that selects from the results of the machine learning call. You can use the results like you would use any other data from your database.”&lt;/p&gt;
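&lt;p&gt;As a rough sketch of what this pattern looks like in Aurora MySQL (the endpoint, function, table, and column names below are illustrative assumptions, not taken from the demo’s actual schema):&lt;/p&gt;

```
-- Hypothetical sketch: expose a SageMaker endpoint as a SQL function.
-- Endpoint, function, table, and column names are illustrative.
CREATE FUNCTION auth_cheat_score(
    source_ip   VARCHAR(32),
    player_guid VARCHAR(64)
) RETURNS FLOAT
ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT
ENDPOINT NAME 'ip-insights-auth-endpoint';

-- The model's prediction then behaves like any other value in a query:
SELECT player_guid,
       auth_cheat_score(source_ip, player_guid) AS score
FROM player_authentication;
```

&lt;p&gt;Running this requires an Aurora MySQL cluster with the SageMaker integration configured, so treat it as a shape of the API rather than a copy-paste example.&lt;/p&gt;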
&lt;h2 id="shortening-the-process-from-six-steps-to-three"&gt;Shortening the process from six steps to three&lt;/h2&gt;
&lt;p&gt;Using this approach, the process is now made much simpler:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;(Optional) select and configure the ML model with Amazon SageMaker Autopilot&lt;/li&gt;
&lt;li&gt;Run a SQL query to invoke the ML service&lt;/li&gt;
&lt;li&gt;Use the results in the application&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This article focuses on gaming; however, the presentation also provides details about fraud detection in financial transactions, sentiment analysis in text (such as a customer review written on a website), and a classification example to sort customers by predicted spend.&lt;/p&gt;
&lt;h2 id="ml-queries-in-gaming-scenarios"&gt;ML queries in gaming scenarios&lt;/h2&gt;
&lt;p&gt;Yahav and Yoav trained a SageMaker model to recognize anomalous user authentication activities such as the wrong password. You can dig deep into the code for the demonstration over at &lt;a href="https://github.com/aws-samples/amazon-aurora-call-to-amazon-sagemaker-sample" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, so we’ll only walk through some of the code.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image6.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The model can also use the function auth_cheat_score to find players with a significant cheat score during authentication.&lt;/p&gt;
&lt;h2 id="introducing-emustarone"&gt;Introducing EmuStarOne&lt;/h2&gt;
&lt;p&gt;The game was developed initially in 2018 and is a massively multiplayer online (MMO) game that enables players to fight, build, explore and trade goods with each other.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image4.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The game can be viewed at &lt;a href="https://yahavb.s3-us-west-2.amazonaws.com/EmuStarOne.mp4" target="_blank" rel="noopener noreferrer"&gt;https://yahavb.s3-us-west-2.amazonaws.com/EmuStarOne.mp4&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Players authenticate from supported clients, such as a PC or game console.&lt;/p&gt;
&lt;p&gt;Five personality traits and game events define Emulants: they can move, forge, dodge, etc. and they can transact with virtual goods.&lt;/p&gt;
&lt;h2 id="what-does-cheating-look-like-in-the-data"&gt;What does cheating look like in the data?&lt;/h2&gt;
&lt;p&gt;To understand what cheating looks like within games, we have to understand what good and bad behaviour looks like in our game data over time:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Players can cheat as they make illegal trades or run bots that manipulate game moves on behalf of other players.&lt;/li&gt;
&lt;li&gt;Cheating can manifest in different ways, such as player move anomalies and consecutive failed login attempts from two different sources.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In general, ML solutions work very well with problems that are evolving and are not static.&lt;/p&gt;
&lt;h2 id="how-can-we-stop-cheating-in-the-game"&gt;How can we stop cheating in the game?&lt;/h2&gt;
&lt;p&gt;To stop cheating requires a plan and some decisions to be made before creating the data model or ML approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We can form an anti-cheat team.&lt;/li&gt;
&lt;li&gt;Take action against cheaters, e.g., force logout with a hard captcha as a warning.&lt;/li&gt;
&lt;li&gt;Escalate the anti-cheating actions as needed.&lt;/li&gt;
&lt;li&gt;Eventually, cheaters learn the system behavior, so there is also the consideration of false positives.&lt;/li&gt;
&lt;li&gt;Continuously redefine our cheating algorithms.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What we want to enable by forming this anti-cheat team is to stop those that cheat and continuously refine the algorithm.&lt;/p&gt;
&lt;h2 id="emustar-one-game-data-authentication"&gt;EmuStar One game data authentication&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image3.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Yoav explained:&lt;/p&gt;
&lt;p&gt;“In the first data set, we have the player authentication; this is the authentication transaction. There is a timestamp that the player came, and in this case, the authentication method was the Xbox Live token.”&lt;/p&gt;
&lt;p&gt;It means that the user logged in through the Xbox Authentication Service. It includes the playerGuid and the user agent, which in this case is an Xbox device. You can see the source IP, the CIDR, and the geo-location.&lt;/p&gt;
&lt;h2 id="player-transaction"&gt;Player transaction&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image7.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="the-player-moves"&gt;The player moves&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image2.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The player moves (in this case, the X and Z coordinates) include the timestamp and the player. There are three more properties - the quadrant, the sector, and the event, which can be traversing (the user moving from one place to another), forging, dodging, or other events that the game allows.&lt;/p&gt;
&lt;h2 id="the-three-ml-models-used-for-game-data"&gt;The three ML models used for game data&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;For authentication: IP insights is an unsupervised learning algorithm for detecting anomalous behavior and usage patterns of IP addresses&lt;/li&gt;
&lt;li&gt;For transactions: Supervised linear regression - this is because most transactions are already classified by Customer care and player surveys&lt;/li&gt;
&lt;li&gt;For player moves: “Random cut forest (RCF), assuming most player moves are legit so anomalous moves indicate potential cheaters,” explained Yoav.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="data-preparation"&gt;Data preparation&lt;/h2&gt;
&lt;p&gt;The game involves a mammoth amount of data. Yoav shared: “We have 700,000 authentication events, 1 million transactions and 65 million player moves. For the supervised data, we classified between 0.1% and 3% of the data to allow the model to distinguish legit transactions. The move, authentication, and other models were built using Jupyter notebooks hosted by SageMaker. Data was stored on S3.&lt;/p&gt;
&lt;p&gt;“Once we were able to distill the data and train the model, we deployed the model with hosted inference endpoints using SageMaker as the service. We used Aurora to invoke the endpoints.”&lt;/p&gt;
&lt;h2 id="data-encoding-and-transformation"&gt;Data encoding and transformation&lt;/h2&gt;
&lt;p&gt;In general, ML models like numbers - integers, doubles, or floats. So the string attributes were encoded. The same encoding was used on the Aurora side, covering, for example, player move events such as TraverseSector or Travel.Explore.&lt;/p&gt;
&lt;p&gt;The notebook is open source so you can see how encoding strings of the player moves was achieved.&lt;/p&gt;
&lt;p&gt;Yoav explained: “I took the quadrant, encoded the sector, encoded the event, and in the end encoded it using pandas and the OneHot encoder.”&lt;/p&gt;
&lt;p&gt;The code for an alternative method for achieving this was also shared:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image5.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="the-demo"&gt;The demo&lt;/h2&gt;
&lt;p&gt;Based on the characteristics of cheating in our game, cheaters are found via:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Looking for suspicious transactions&lt;/li&gt;
&lt;li&gt;Looking for suspicious authentication by the players who executed these transactions&lt;/li&gt;
&lt;li&gt;Then seeing if the player moves were suspicious&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Yahav shared code for the materialized view for authentication, querying the parameters and filtering only the suspicious ones, where cls &gt; 0 is classified as fraudulent.&lt;/p&gt;
&lt;p&gt;An anomaly score cls&gt;2 indicates a suspicious move - the tools are very flexible!&lt;/p&gt;
&lt;p&gt;Yahav then executed a query for “the timestamp and the playerGuids that are basically suspicious.”&lt;/p&gt;
&lt;p&gt;The live demo presented worked to filter suspicious transactions. Then the authentication cheat was joined with the transaction cheat. Subsequently, 13 suspicious cases were revealed based on timestamps. The suspicious moves were then queried based on the timestamps.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/image1.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The demo included lots of movements, and transactions from all directions.&lt;/p&gt;
&lt;p&gt;Exploring the timestamp, playerGuid, quadrant, and sector of all the suspicious cases revealed where suspicious behavior occurred, so that monitoring could focus on that specific area.&lt;/p&gt;
&lt;h2 id="resources-from-the-presentation"&gt;Resources from the presentation&lt;/h2&gt;
&lt;p&gt;Examples on &lt;a href="https://github.com/aws-samples/amazon-aurora-call-to-amazon-sagemaker-sample" target="_blank" rel="noopener noreferrer"&gt;Github:&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/aurora/" target="_blank" rel="noopener noreferrer"&gt;Amazon Aurora&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/aurora/machine-learning/" target="_blank" rel="noopener noreferrer"&gt;Aurora machine learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/sagemaker" target="_blank" rel="noopener noreferrer"&gt;Amazon SageMaker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can also &lt;a href="https://www.percona.com/resources/videos/anti-cheating-tool-massive-multiplayer-games-using-amazon-aurora-and-ml-services" target="_blank" rel="noopener noreferrer"&gt;watch a video of the recording&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Cate Lawrence</author>
      <category>Amazon</category>
      <category>Amazon RDS</category>
      <category>AWS</category>
      <category>ML</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>PostgreSQL</category>
      <category>SQL</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/image4_hu_d89369f5a33f7ac7.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/image4_hu_f7cab3825f536fb9.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Projects for Google Summer of Code - 2020</title>
      <link>https://percona.community/blog/2020/06/04/percona-projects-for-google-summer-of-code-2020/</link>
      <guid>https://percona.community/blog/2020/06/04/percona-projects-for-google-summer-of-code-2020/</guid>
      <pubDate>Thu, 04 Jun 2020 11:38:43 UTC</pubDate>
      <description>We are proud to announce that Percona was selected as a participating organization for the Google Summer of Code (GSoC) 2020 program. This is our second year as a participating organization in the GSoC program.</description>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/google-summer-of-code-2019-367x263-1.jpg" alt="GSC" /&gt;&lt;/figure&gt;We are proud to announce that Percona was selected as a participating organization for the &lt;a href="https://summerofcode.withgoogle.com/" target="_blank" rel="noopener noreferrer"&gt;Google Summer of Code (GSoC) 2020 program&lt;/a&gt;, this is our second year as a participating org with the GSoC program.&lt;/p&gt;
&lt;p&gt;GSoC is a great program for involving young student developers in open source projects. We participated in the program for the first time in 2019 and were really happy with the results. The Percona Platform Engineering team decided to participate again in the 2020 program, and we are glad to report that we were selected. We welcome the student who will work with our team on their GSoC project during the summer of 2020.&lt;/p&gt;
&lt;h2 id="preparations"&gt;Preparations&lt;/h2&gt;
&lt;p&gt;We started planning for GSoC around November-December 2019. With help from our Product Management team, we were able to shortlist a few ideas that we thought were the right fit for our students. With Google Summer of Code, we realized it is very important to select projects that fit the &lt;a href="https://developers.google.com/open-source/gsoc/timeline?hl=en" target="_blank" rel="noopener noreferrer"&gt;timeline of the program&lt;/a&gt; and justify the purpose of the project for both the student and the organization. With the help of our Marketing and HR departments, we were able to prepare a &lt;a href="https://www.percona.com/googlesummerofcode2020" target="_blank" rel="noopener noreferrer"&gt;landing page&lt;/a&gt; for our potential GSoC students with all relevant information about projects and communication platforms. From our past year’s experience and observation of other organizations, we realized most students start their preparations around mid-January.&lt;/p&gt;
&lt;p&gt;Since this is just our second year as a participating organization, we are really happy with the response we got from students. Let’s look at the numbers and compare them with 2019; these numbers are based on org data exported from &lt;a href="https://summerofcode.withgoogle.com/" target="_blank" rel="noopener noreferrer"&gt;https://summerofcode.withgoogle.com/&lt;/a&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/06/Screenshot-2020-06-04-at-15.25.09_hu_f4c4b9dc5bb35b6d.png 480w, https://percona.community/blog/2020/06/Screenshot-2020-06-04-at-15.25.09_hu_22c2a0a06ebe3748.png 768w, https://percona.community/blog/2020/06/Screenshot-2020-06-04-at-15.25.09_hu_5510cc0c2e035ce9.png 1400w"
src="https://percona.community/blog/2020/06/Screenshot-2020-06-04-at-15.25.09.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="student-and-projects"&gt;Student and Projects&lt;/h2&gt;
&lt;p&gt;The student intern who will be working with us is Meet Patel. This is the first time Meet has been selected as a student intern with the GSoC program.&lt;/p&gt;
&lt;p&gt;We originally selected two students for the program, but unfortunately one of them failed to meet the program’s eligibility criteria and was later dropped.&lt;/p&gt;
&lt;h3 id="meet-patel"&gt;Meet Patel&lt;/h3&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/06/Meet_Patel-1.jpg" alt="GSoC Student Meet Patel" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Meet is a 2nd year undergraduate at DAIICT, Gandhinagar, India; pursuing a bachelor’s degree in Information and Communication Technology with a minor in Computational Science. Meet is an open-source enthusiast and an avid developer, who is always excited to learn about new technologies.&lt;/p&gt;
&lt;p&gt;Meet will work on the GSoC project for the refactoring of the PMM Framework. The PMM Framework is an automated testing framework that is used to set up PMM with various databases and their multiple instances, perform load tests, and wipe everything after the tests are done. One of the major objectives of this project is to produce a well-documented script that helps new users easily set up PMM, as well as refactoring the framework to make it more usable for internal testing.&lt;/p&gt;
&lt;p&gt;To track the progress of the project, please follow the &lt;a href="https://github.com/percona/pmm-qa/tree/GSOC-2020" target="_blank" rel="noopener noreferrer"&gt;GSoC Project Branch&lt;/a&gt;. The Percona mentors for the project are Puneet Kala, Frontend/Web QA Automation Engineer, and Nailya Kutlubaeva, QA Engineer. The GSoC team at Percona is thankful to everyone involved in this year’s application and selection process. We are excited to have a team of mentors helping students learn about our products and working in open source. We’re looking forward to enjoying the two-way dialogue and guiding the students to hone their skills as they experience working on these valuable PMM developments.&lt;/p&gt;
&lt;p&gt;If you have any questions about GSoC Program please feel free to write to us on &lt;a href="mailto:gsoc@percona.com"&gt;gsoc@percona.com&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Puneet Kala</author>
      <category>Community</category>
      <category>Events</category>
      <category>Google Summer of Code</category>
      <category>GSoC</category>
      <category>Information</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>Percona Monitoring and Management</category>
      <category>PMM</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/google-summer-of-code-2019-367x263-2_hu_691cafc8631da25c.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/google-summer-of-code-2019-367x263-2_hu_d6ee98ce0984dda6.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live ONLINE: MySQL on Google Cloud: War and Peace! by Akshay Suryawanshi &amp; Jeremy Cole</title>
      <link>https://percona.community/blog/2020/06/02/percona-live-online-mysql-on-google-cloud-war-and-peace-by-akshay-suryawanshi-jeremy-cole/</link>
      <guid>https://percona.community/blog/2020/06/02/percona-live-online-mysql-on-google-cloud-war-and-peace-by-akshay-suryawanshi-jeremy-cole/</guid>
      <pubDate>Tue, 02 Jun 2020 16:12:51 UTC</pubDate>
      <description>This session at Percona Live ONLINE was presented by Akshay Suryawanshi, Senior Production Engineer at Shopify, and Jeremy Cole, Senior Staff Production Engineer - Datastores at Shopify. Shopify is an online and on-premise commerce platform, founded in 2006.</description>
      <content:encoded>&lt;p&gt;This session at &lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live ONLINE&lt;/a&gt; was presented by Akshay Suryawanshi, Senior Production Engineer at Shopify, and Jeremy Cole, Senior Staff Production Engineer - Datastores at Shopify. Shopify is an online and on-premise commerce platform, founded in 2006.&lt;/p&gt;
&lt;p&gt;Shopify is used by more than a million merchants, and hundreds of billions of dollars of sales have happened on the platform since its inception. The company is a large user of MySQL, and the Black Friday and Cyber Monday weekends are their peak dates during the year, handling hundreds of billions of queries with MySQL. This year’s presentation was an opportunity to talk about the company’s challenges and progress over the last twelve months.&lt;/p&gt;
&lt;h2 id="key-google-cloud-concepts-from-the-presentation"&gt;Key Google Cloud concepts from the presentation&lt;/h2&gt;
&lt;p&gt;As part of the presentation, it’s important to understand the naming conventions that exist around Google Cloud:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Regions - a geographic area where the cloud operates (a region could comprise a building or adjoining buildings)&lt;/li&gt;
&lt;li&gt;Zones - subdivisions inside a particular region. Typically there are three within each region, but it varies a bit by region.&lt;/li&gt;
&lt;li&gt;GCE - the Google Compute Engine platform, which provides virtual machines to run as servers (most of Shopify’s infrastructure is on GCP and runs in VMs)&lt;/li&gt;
&lt;li&gt;Virtual machine instance - a GCE virtual machine scheduled in a particular zone&lt;/li&gt;
&lt;li&gt;Persistent disk - a network-attached, log-structured block storage volume&lt;/li&gt;
&lt;li&gt;GKE - Google Kubernetes Engine, a managed Kubernetes solution that runs on top of Google Cloud Platform (GCP)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="peacetime-stories"&gt;Peacetime stories&lt;/h2&gt;
&lt;p&gt;Akshay spoke about persistent disks, which are network-attached, distributed, log-structured block storage: “This is the place where you basically say most of your data is, especially when you’re running MySQL data or any sort of databases.” Performance aside (network-attached storage usually carries some degree of latency), they provide incredible features, especially fast snapshotting of volumes.&lt;/p&gt;
&lt;p&gt;“We have utilized the snapshotting behavior to revamp our Backup and Restore infrastructure and brought down our recovery time to less than one hour for even a multi-terabyte disk. This is so incredibly fast that we actually restore each and every snapshot that we preserve or retain as a backup every single day. It’s happening in both regions where we run most of our MySQL fleet,” detailed Akshay.&lt;/p&gt;
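&lt;p&gt;The snapshot-based backup and restore flow described above can be sketched with standard &lt;code&gt;gcloud&lt;/code&gt; commands. This is a minimal illustration only; the disk, snapshot, and VM names are placeholders, not Shopify’s actual tooling:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Snapshot the MySQL data disk (names here are illustrative)
gcloud compute disks snapshot mysql-data-disk \
    --snapshot-names=mysql-backup-2020-06-01 --zone=us-east1-b

# Restore: create a fresh disk from the snapshot, then attach it to a VM
gcloud compute disks create mysql-restore-disk \
    --source-snapshot=mysql-backup-2020-06-01 --zone=us-east1-b
gcloud compute instances attach-disk restore-test-vm \
    --disk=mysql-restore-disk --zone=us-east1-b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because PD snapshots are incremental and restores are fast even for multi-terabyte volumes, restoring every retained snapshot daily, as Akshay describes, becomes practical.&lt;/p&gt;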
&lt;h2 id="configurable-vms"&gt;Configurable VMs&lt;/h2&gt;
&lt;p&gt;Virtual machines (VMs) expose an extensive API that is useful for doing things programmatically: “The API is very helpful. It is well documented, and we are using it in a bunch of places,” continued Akshay.&lt;/p&gt;
&lt;p&gt;Scaling VMs up and down is a seamless, manageable operation (though most changes require a restart). Provisioning new VMs in an appropriate region is very easy, according to Akshay: “Again because of the extensive API, which has provided something required to build resiliency against its own failures. So we spread our VMs across multiple zones. That helps us tremendously when a particular zone goes down. All of this has allowed us to build self-healing tooling to automatically replace failed VMs easily.”&lt;/p&gt;
&lt;h2 id="gcp-is-truly-multi-regional"&gt;GCP is truly multi-regional&lt;/h2&gt;
&lt;p&gt;Google Cloud’s multi-region availability means failover from one region to another is easy and Shopify can move all its traffic from one region to another in just a few minutes, multiple times in a day. They can also expand to a distant geographical region without a lot of work, yet maintain the same stability.&lt;/p&gt;
&lt;p&gt;Akshay noted: “Isolating PII data has been a big win for Shopify in the past year when we launched a certain product where PII data needed to be preserved in a particular region, and GCP provides excellent support for that.”&lt;/p&gt;
&lt;h2 id="google-kubernetes-engine"&gt;Google Kubernetes Engine&lt;/h2&gt;
&lt;p&gt;Kubernetes is an open-source project for container orchestration, and Google Kubernetes Engine (GKE) is a feature-rich tool for running Kubernetes. According to Akshay: “Most of our future work is happening towards containerizing MySQL and running and scheduling it inside Kubernetes. The automatic storage and file system expansion are helpful in solving database problems.”&lt;/p&gt;
&lt;p&gt;Zone-aware cluster node scheduling helps schedule the Kubernetes pods so that they are fault-tolerant towards zone failures.&lt;/p&gt;
&lt;p&gt;The GCP networking is simple to set up. Inter-regional latencies are pretty low, and Shopify can perform region failovers for databases quickly in the event of a disaster. “We can do a whole-region evac within a few minutes. This is because we can keep our databases in both regions up to date due to these low latencies,” explained Akshay.&lt;/p&gt;
&lt;p&gt;Virtual private clouds (VPCs) are a great way to segment workloads; isolating network connections at the VPC level has helped achieve this.&lt;/p&gt;
&lt;h2 id="war-some-of-the-things-that-can-go-wrong"&gt;War: Some of the things that can go wrong&lt;/h2&gt;
&lt;p&gt;Jeremy detailed some of the specific challenges that Shopify had faced over the last year, including stockouts, which occur when a requested resource (such as a VM or a disk) is not available at that time.&lt;/p&gt;
&lt;p&gt;Jeremy noted: “What that looks like is that you attempt to allocate it using some API, and it just takes a very long time to show up. In one particular instance, in one region, we had consistent PD and VM stockouts regularly occurring for several weeks.”&lt;/p&gt;
&lt;p&gt;This meant the company had to plan for resources not being available at a moment’s notice, and to consider how time-critical components should be provisioned for availability.&lt;/p&gt;
&lt;h2 id="trouble-in-persistent-disk-land"&gt;Trouble in persistent disk land&lt;/h2&gt;
&lt;p&gt;According to Jeremy: “One of the bigger problems that we’ve had in general is a persistent disk (PD).” An example was a recent outage caused by a change in the persistent disk backend, which caused a regression “anywhere from minor latency impacts to full stalls for several seconds of the underlying PD volume, which of course, pretends to be a disk. So that means the disk is fully stalled for several seconds.”&lt;/p&gt;
&lt;p&gt;It took several weeks to diagnose and properly pin the blame for the stalls on PD. Jeremy noted, “The fun part of the story is that the mitigation for this particular problem involves attaching a substantial PD volume to every one of our VMs to work around a problem that was happening in PD. In order to do that, since we had so many VMs in aggregate, we had to allocate petabytes of persistent disk, and leave them attached for a few months.”&lt;/p&gt;
&lt;p&gt;Crucial to solving the problem was working closely with their vendor partner. As Jeremy explained, “Sometimes you have to get pretty creative to make things work right now and get yourself back in action.”&lt;/p&gt;
&lt;h2 id="troop-replacements"&gt;Troop replacements&lt;/h2&gt;
&lt;p&gt;Live migration (LM) was referred to in the previous year’s Shopify presentation at Percona Live, and the problem still persists according to Jeremy. “We continuously have machines being live migrated and their VMs being moved around between different physical machines.”&lt;/p&gt;
&lt;p&gt;The frequency of live migrations, and of the problems they cause, is directly related to the frequency of Linux kernel or Intel CVEs. “We’re still getting hostError instance failures where migrations fail and this kills the host,” explained Jeremy.&lt;/p&gt;
&lt;p&gt;Some live migrations are still breaking NTP time sync. “And we are still periodically getting multiple migrations per VM for the same maintenance - up to 11 within a day or so.”&lt;/p&gt;
&lt;h2 id="a-regional-ally-surrenders"&gt;A regional ally surrenders&lt;/h2&gt;
&lt;p&gt;In the last year, there was a regional outage: “Google had made a change to their traffic routing in one region, causing basically an overload of their networking stack. And we went down pretty hard because of that. There was nothing really that we could do about it,” said Jeremy. This was despite being deployed across multiple zones and multiple regions.&lt;/p&gt;
&lt;p&gt;Jeremy concluded the talk with a simple statement: Running MySQL in the cloud is not magic. “There are some unique challenges to Google Cloud, unique challenges to running MySQL in cloud infrastructure and unique challenges with the cloud itself. Sometimes running databases in the cloud can feel like you are constantly at war.”&lt;/p&gt;
&lt;p&gt;Preparing in advance as much as possible around how you manage your database in the cloud can help, particularly when you run at the kind of scale that Shopify does. However, there will always be unexpected events and incidents. Working with your cloud partner and support providers can help here too.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;You can &lt;a href="https://www.percona.com/resources/videos/mysql-google-cloud-war-and-peace-akshay-suryawanshi-jeremy-cole-percona-live-online" target="_blank" rel="noopener noreferrer"&gt;watch a video of the recording&lt;/a&gt; which includes a Q&amp;A at the end of the presentation.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Cate Lawrence</author>
      <category>DevOps</category>
      <category>Google</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>Shopify</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_1395b6e2186771a6.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_c0e4c47b55fa22a9.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live ONLINE Talk: Enhancing MySQL security at LinkedIn by Karthik Appigatla</title>
      <link>https://percona.community/blog/2020/06/01/percona-live-online-talk-enhancing-mysql-security-at-linkedin-by-karthik-appigatla/</link>
      <guid>https://percona.community/blog/2020/06/01/percona-live-online-talk-enhancing-mysql-security-at-linkedin-by-karthik-appigatla/</guid>
      <pubDate>Mon, 01 Jun 2020 08:01:24 UTC</pubDate>
      <description>MySQL, arguably the most popular relational database, is used pretty extensively at the popular professional social network LinkedIn. At Percona Live ONLINE 2020, Percona’s flagship event held online for the first time due to the Covid-19 pandemic, Karthik Appigatla from LinkedIn’s database SRE team discussed the company’s approach to securing their database deployment without introducing operational hiccups or adversely affecting performance.</description>
      <content:encoded>&lt;p&gt;MySQL, arguably the most popular relational database, is used pretty extensively at the popular professional social network LinkedIn. At Percona Live ONLINE 2020, Percona’s flagship event held online for the first time due to the Covid-19 pandemic, Karthik Appigatla from LinkedIn’s database SRE team discussed the company’s approach to securing their database deployment without introducing operational hiccups or adversely affecting performance.&lt;/p&gt;
&lt;p&gt;Instead of just performing admin duties, Karthik’s team builds automated tools to scale their infrastructure, and he talked about some of these tailored tools in his presentation. The database SREs on his team also work with the developers at LinkedIn and help them streamline their applications to make best use of the database.&lt;/p&gt;
&lt;p&gt;Talking about LinkedIn’s reliance on MySQL, Karthik said that not only do all their infrastructure tools rely on MySQL, but many of the internal applications use MySQL as their backend datastore, as do a few of the website-facing applications.&lt;/p&gt;
&lt;h2 id="database-proliferation"&gt;Database proliferation&lt;/h2&gt;
&lt;p&gt;The magnitude of the MySQL deployment at LinkedIn is pretty impressive. Thanks to the sheer number of microservices, each of which gets its own database, Karthik’s team looks after more than 2300 databases. These are powered by different versions of the MySQL server, namely v5.6, v5.7 and v8.0, all of which are hosted atop RHEL 7 installations.&lt;/p&gt;
&lt;p&gt;As he ran through the layout of the MySQL deployments at LinkedIn, Karthik mentioned that they have a multi-tenant architecture where multiple databases are hosted on a single MySQL server instance.&lt;/p&gt;
&lt;p&gt;MySQL is consumed as-a-service at LinkedIn and all the administrative tasks like backups, bootstrapping clusters, monitoring, and such are handled by automated systems built by Karthik’s team. He said that the level of automation is so high in fact that application owners can actually provision a database for their applications with just a few mouse clicks.&lt;/p&gt;
&lt;h2 id="shared-responsibility"&gt;Shared responsibility&lt;/h2&gt;
&lt;p&gt;Given their scale of deployment, the developers at LinkedIn pay special attention to the security of their databases. Karthik believes “security is a shared responsibility between the database SRE team and the application owners.”&lt;/p&gt;
&lt;p&gt;He illustrated how the databases are provisioned, from a security point of view and gave several security insights in his presentation based on his experience. For one, his team doesn’t take the easy approach of isolating databases by running multiple mysqld processes. This approach doesn’t scale well since the overhead on the server increases linearly as the number of databases it hosts increases.&lt;/p&gt;
&lt;p&gt;His description of how the various applications access different databases on the different servers was also pretty insightful for anyone looking to deploy databases at scale. One of the peculiar issues he described is that various components inside individual applications usually need to access different databases simultaneously. His team handled this by employing multiple user accounts with varying privileges.&lt;/p&gt;
&lt;h2 id="access-control"&gt;Access control&lt;/h2&gt;
&lt;p&gt;He dwelled on this some more and spent some time explaining the different access management controls they’ve built into the system to facilitate access. One of the interesting security measures he talked about is how they limit the number of hosts that can access a database by adopting an IP-based grants system, which is slightly cumbersome to implement but a lot more secure.&lt;/p&gt;
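&lt;p&gt;In stock MySQL, an IP-based grant of the kind described above simply uses the host part of the account name instead of a wildcard. The sketch below is a minimal illustration of the general technique, not LinkedIn’s actual setup; the user, IP, and schema names are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- Account usable only from one application host's IP (illustrative names)
CREATE USER 'app_rw'@'10.20.30.40' IDENTIFIED BY '...';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_rw'@'10.20.30.40';

-- Contrast with the broader, less secure wildcard-host form:
-- CREATE USER 'app_rw'@'%' ...;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The operational cost Karthik alludes to is that every new or replaced application host needs its own grant, which is why automation around grant management becomes essential at scale.&lt;/p&gt;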
&lt;p&gt;Also interesting is their approach to granting SSH access to the database servers to the SREs. Instead of the default public-key authentication, his team uses a certificate-based authentication scheme, and Karthik presented a high-level overview of this arrangement.&lt;/p&gt;
&lt;p&gt;Auditing and monitoring are also important aspects of security. At LinkedIn, logins are audited by the &lt;a href="https://www.percona.com/doc/percona-server/LATEST/management/audit_log_plugin.html" target="_blank" rel="noopener noreferrer"&gt;Percona Audit Log plugin&lt;/a&gt;, while the queries go through LinkedIn’s home-brewed Query Analyser agent. Karthik ran through the architecture of their Query Analyser agent, which LinkedIn plans to release under an open source license soon.&lt;/p&gt;
&lt;p&gt;Perhaps one of the biggest takeaways from the presentation was Karthik’s insight into the operational challenges that crop up due to their rather stringent security requirements, particularly their IP-based grants system. While the solutions he discussed were specific to LinkedIn, his presentation was peppered with tips and tricks that you can easily adapt for your deployments.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/resources/videos/enhancing-mysql-security-linkedin-karthik-appigatla-percona-live-online-2020" target="_blank" rel="noopener noreferrer"&gt;Click here to watch&lt;/a&gt; Karthik’s presentation at Percona Live ONLINE 2020.&lt;/p&gt;</content:encoded>
      <author>Mayank Sharma</author>
      <category>Mayank Sharma</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>security</category>
      <category>SRE</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_1395b6e2186771a6.jpg"/>
      <media:content url="https://percona.community/blog/2020/06/SC-3-Matt-Percona_hu_c0e4c47b55fa22a9.jpg" medium="image"/>
    </item>
    <item>
      <title>Join ProxySQL Tech Talks with Percona on June 4th, 2020!</title>
      <link>https://percona.community/blog/2020/05/29/join-proxysql-tech-talks-with-percona-on-june-4th-2020/</link>
      <guid>https://percona.community/blog/2020/05/29/join-proxysql-tech-talks-with-percona-on-june-4th-2020/</guid>
      <pubDate>Fri, 29 May 2020 09:16:07 UTC</pubDate>
      <description>Long months of the pandemic lockdown have brought to life many great online events enabling the MySQL community to get together and stay informed about the very recent developments and innovations available to MySQL users. It isn’t over yet! Next Thursday, June 4th, Percona &amp; ProxySQL are co-hosting the ProxySQL Tech Talks with Percona virtual meetup covering ProxySQL, MySQL and Percona XtraDB Cluster.</description>
      <content:encoded>&lt;p&gt;Long months of the pandemic lockdown have brought to life many great online events enabling the MySQL community to get together and stay informed about the very recent developments and innovations available to MySQL users. It isn’t over yet! Next &lt;strong&gt;Thursday, June 4th&lt;/strong&gt;, Percona &amp; ProxySQL are co-hosting the &lt;a href="https://bit.ly/2THdDqv" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;ProxySQL Tech Talks with Percona&lt;/strong&gt;&lt;/a&gt; virtual meetup covering ProxySQL, MySQL and Percona XtraDB Cluster.&lt;/p&gt;
&lt;p&gt;The attendees are invited to participate in the &lt;a href="https://bit.ly/2THdDqv" target="_blank" rel="noopener noreferrer"&gt;two-hour deep-dive event&lt;/a&gt; with plenty of time for questions and answers (we will have two 40-minute sessions + 20 minutes allocated for Q&amp;A). Get prepared, come with your burning questions and true war stories - we’ll have our speakers answer and comment on them! And here come the speakers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;René Cannaò&lt;/strong&gt;, ProxySQL author and CEO of ProxySQL Inc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vinicius M. Grippa&lt;/strong&gt;, Senior Support Engineer at Percona.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;René and Vinicius will give presentations covering the evolution of ProxySQL and ProxySQL’s native support for PXC 5.7 respectively:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ProxySQL, the journey from a MySQL proxy to being the de-facto multi-functional tool that scales MySQL&lt;/strong&gt; by René Cannaò starts at &lt;strong&gt;7 PM CEST&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ProxySQL 2.0 native support for Percona XtraDB Cluster (PXC) 5.7&lt;/strong&gt; by Vinicius Grippa starts at &lt;strong&gt;8 PM CEST&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The detailed abstracts, full agenda, and speaker bios are available on &lt;a href="https://bit.ly/2THdDqv" target="_blank" rel="noopener noreferrer"&gt;the event’s registration page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The list of technologies &amp; tools covered at this event will include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ProxySQL&lt;/li&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;Percona XtraDB Cluster (PXC)&lt;/li&gt;
&lt;li&gt;Kubernetes (K8s)&lt;/li&gt;
&lt;li&gt;Percona Monitoring &amp; Management (PMM)&lt;/li&gt;
&lt;li&gt;AWS Aurora&lt;/li&gt;
&lt;li&gt;LDAP&lt;/li&gt;
&lt;li&gt;Galera Cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our virtual room has already started to fill up, so please register now to &lt;a href="https://bit.ly/2THdDqv" target="_blank" rel="noopener noreferrer"&gt;join us at ProxySQL Tech Talks with Percona&lt;/a&gt; next Thursday at 7 PM CEST! Hope to see many of you there!&lt;/p&gt;</content:encoded>
      <author>Stacy Rostova</author>
      <category>stacy</category>
      <category>Containers</category>
      <category>database</category>
      <category>DBA Tools</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>mysql-and-variants</category>
      <category>Open Source Databases</category>
      <category>Percona XtraDB Cluster</category>
      <category>ProxySQL</category>
      <category>PXC</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/2160x1080-cover-Proxy-Percona-3_hu_b621f13821783f82.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/2160x1080-cover-Proxy-Percona-3_hu_4dde3d5c75fdbccc.jpg" medium="image"/>
    </item>
    <item>
      <title>Anti-Cheating Tool for Massive Multiplayer Games Using Amazon Aurora and Amazon ML Services – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/13/anti-cheating-tool-for-massive-multiplayer-games-using-amazon-aurora-and-amazon-ml-services-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/13/anti-cheating-tool-for-massive-multiplayer-games-using-amazon-aurora-and-amazon-ml-services-percona-live-online-talk-preview/</guid>
      <pubDate>Wed, 13 May 2020 16:06:06 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 19 May • New York 4 p.m. • London 9 p.m. • New Delhi 1:30 a.m. (Wed) Level: Intermediate</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Tue 19 May • New York 4 p.m. • London 9 p.m. • New Delhi 1:30 a.m. (Wed)&lt;/em&gt; &lt;em&gt;Level:  Intermediate&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Multiplayer video games are among the most lucrative online services. The overall games industry worldwide generated an estimated $174B in 2019, according to IDC. With this popularity, cheating has become a common trend. Cheating in multiplayer games negatively impacts the game experience for players who play by the rules, and it becomes a revenue issue for game developers and publishers. According to Irdeto, 60% of online games were negatively impacted by cheaters, and 77% of players said they would stop playing a multiplayer game if they thought opponents were cheating.&lt;/p&gt;
&lt;p&gt;Current methods for detecting and addressing cheating become difficult and expensive to operate as cheaters respond to the evolution of anti-cheating techniques. This session will show an effective method for game developers to continuously and dynamically improve their cheat-detection mechanisms. It uses Amazon Aurora and Amazon SageMaker for cheating detection, but can be adapted to other databases with similar capabilities. We’ll utilize the recently-launched Aurora machine learning functionality, which allows game developers to add ML-based predictions using the familiar SQL programming language without building custom integrations or learning separate tools. We’ll show which ML algorithms are useful for cheat detection and how an anti-cheat developer can write a single SQL query that handles the inputs and outputs for the algorithm.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;Machine learning is everywhere these days, or at least that’s how it feels when you work at Amazon. Some types of ML get a lot of attention, like self-driving cars, or services that take a JPEG and tell you if it’s a dog or a cat. But if you think about it, a vast amount of the world’s information is plain old tabular data in traditional relational databases. What about running ML on that data? Who knows what amazing insights and secrets are lurking inside?&lt;/p&gt;
&lt;p&gt;We’ll look at a cool video game example where we’re looking for cheaters, e.g. people who write bots to play on their behalf. We’ll show which ML models can detect these cheats and how to more easily run the analysis from your application, using tools that we’ve built. You should be able to run it on other databases if they have similar ML capabilities.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Application developers and database administrators who don’t know a whole lot about machine learning but would like to start.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;We’re curious what a complete online event will be like. We’re looking forward to seeing how it compares to the traditional kind of conference.&lt;/p&gt;</content:encoded>
      <author>Yoav Eilat</author>
      <author>Yahav Biran</author>
      <category>yoav.eilat</category>
      <category>yahav.biran</category>
      <category>AWS</category>
      <category>Events</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>State of the Dolphin – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/12/state-of-the-dolphin-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/12/state-of-the-dolphin-percona-live-online-talk-preview/</guid>
      <pubDate>Tue, 12 May 2020 16:30:04 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 19 May • New York 11:00 a.m. • London 4:00 p.m. • New Delhi 8:30 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Tue 19 May • New York 11:00 a.m. • London 4:00 p.m. • New Delhi 8:30 p.m.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;I will talk about the latest improvements in MySQL 8.0.20 and the MySQL Engineering Team’s steady progress with MySQL 8.0. These include solutions like Document Store, InnoDB Cluster, and InnoDB ReplicaSet where MySQL Router and MySQL Shell are playing an important role. All of these Oracle solutions are completely open source.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;This talk is exciting because we will be looking at all the latest features in MySQL 8.0. Sadly, my time will probably be too short to detail them all and cover the open source code contributions we’ve received from users.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;All MySQL users would benefit, whether newbies or veterans. You would be surprised how many people still have wrong assumptions about MySQL! So this talk is really for anyone seeking a fuller experience with MySQL, whether DBAs, developers, or others.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I’m always very happy to learn new things about Vitess from Morgan Tocker and ProxySQL from René Cannaò.&lt;/p&gt;</content:encoded>
      <author>Frédéric Descamps</author>
      <category>frederic.descamps</category>
      <category>Events</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>Kubernetes, The Swiss Army Knife For Your ProxySQL Deployments – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/08/kubernetes-the-swiss-army-knife-for-your-proxysql-deployments-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/08/kubernetes-the-swiss-army-knife-for-your-proxysql-deployments-percona-live-online-talk-preview/</guid>
      <pubDate>Fri, 08 May 2020 02:02:48 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 19 May • New York 10:00 p.m. • London 3:00 a.m. (Wed) • New Delhi 7:30 a.m. (Wed)</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Percona Live Online Agenda Slot: Tue 19 May • New York 10:00 p.m. • London 3:00 a.m. (Wed) • New Delhi 7:30 a.m. (Wed)&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;ProxySQL is a high-performance proxy from design to implementation. It speaks the MySQL protocol and can go beyond load balancing. This talk covers various deployment options for ProxySQL in a Kubernetes environment.&lt;/p&gt;
&lt;p&gt;Typically ProxySQL is deployed in one of three ways depending on the scale and needs of your environment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Directly on each application server&lt;/li&gt;
&lt;li&gt;On a separate server (or layer)&lt;/li&gt;
&lt;li&gt;Cascaded, i.e. on each application server as well as a separate server (or layer)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This talk will cover how to successfully implement each of these ProxySQL deployment methods in Kubernetes using a highly scalable and robust approach.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;This was originally meant to be a tutorial, but it is now a talk, so it is not three hours cut down into one but rather tailored to whet your appetite for what is possible with ProxySQL on Kubernetes, which is an important topic in the community. I will share practical examples of deployment methods that have been implemented successfully in collaboration with large-scale users.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;You should have an intermediate understanding of MySQL and of how replication and proxying work, as well as at least a basic understanding of Kubernetes.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;All the talks on the &lt;a href="https://www.percona.com/live/percona-live-online-full-agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live agenda&lt;/a&gt; are exciting, but if I had to pick one talk, it would be “Mostly Mistaken and Ignored Parameters While Optimizing a PostgreSQL Database” by Avi Vallarapu.&lt;/p&gt;</content:encoded>
      <author>René Cannaò</author>
      <category>rene.cannao</category>
      <category>Events</category>
      <category>Kubernetes</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>MariaDB 10.4 and the Competition – Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/07/mariadb-10-4-and-the-competition-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/07/mariadb-10-4-and-the-competition-percona-live-online-talk-preview/</guid>
      <pubDate>Thu, 07 May 2020 23:10:23 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 19 May • New York 12:30 p.m. • London 5:30 p.m. • New Delhi 10:00 p.m.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Tue 19 May • New York 12:30 p.m. • London 5:30 p.m. • New Delhi 10:00 p.m.&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;There are many good databases out there. Picking the right database for your project is never easy. There are technical criteria, business criteria, perhaps even ethical criteria. In this keynote, MariaDB Foundation CEO Kaj Arnö will present his - obviously completely impartial - view of the process. Should you pick a database in the cloud or on premises? Should you pick an open-source database or a closed-source one? And if you pick a relational open-source database, how should you choose between MariaDB, MySQL and PostgreSQL? Expect the recommendation to not always be “go with MariaDB 10.4”. However, do expect to get a view of how the MariaDB Foundation sees its role, in relation to MariaDB Server, to MariaDB Corporation, to its other members (Microsoft, IBM, ServiceNow, Alibaba, Tencent, Booking.com, et al.), and above all, to the community of database developers and users.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;Can you expect a partial person to come up with a neutral comparison of competitors? No. But it can still be logical and insightful. Can you expect such a comparison to be exciting? Yes. And it can be entertaining, too. Why? Because I am starting from the basic reasoning of “Cui bono”: Who benefits? From what? What is the likely reasoning of the actors in the database industry? And what is their actual behavior?&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Developers, DBAs, sysadmins. Anyone who needs to decide how to make data persistent in their apps. Where should data be stored? How should one even think about the choice process? Technology issues, business issues, the lot.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;On &lt;a href="https://www.percona.com/live/percona-live-online-full-agenda" target="_blank" rel="noopener noreferrer"&gt;the full agenda&lt;/a&gt;, all the keynoters are great! The last few PeterZ presentations I’ve seen have been wonderful combinations of deep technical expertise and logical business reasoning. Matt Asay is always insightful. And there is a lot to be learned from Bruce Momjian and Frédéric Descamps.&lt;/p&gt;
&lt;p&gt;I’m also looking forward to MySQL on Google Cloud, by Leo Tolstoy and my former colleague Jeremy Cole. And speaking of former colleagues, Colin’s MariaDB Server talk is clearly going to be an exciting one, a different angle to what I will touch upon in my keynote.&lt;/p&gt;
&lt;p&gt;Last, but by no means least: I already attended an earlier version of Valerii Kravchuk’s super-cool tracing and performance debugging presentation, but it was so good that I will want to look at it again.&lt;/p&gt;</content:encoded>
      <author>Kaj Arnö</author>
      <category>kaj.arno</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>Orchestrating Cassandra with Kubernetes Operator and Yelp PaaSTA - Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/06/orchestrating-cassandra-with-kubernetes-operator-and-yelp-paasta-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/06/orchestrating-cassandra-with-kubernetes-operator-and-yelp-paasta-percona-live-online-talk-preview/</guid>
      <pubDate>Wed, 06 May 2020 19:42:03 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 19 May • New York 3:00 p.m. • London 8:00 p.m. • New Delhi 12:30 a.m. (Wed) Level: Intermediate</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Tue 19 May • New York 3:00 p.m. • London 8:00 p.m. • New Delhi 12:30 a.m. (Wed)&lt;/em&gt; &lt;em&gt;Level: Intermediate&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;At Yelp, Cassandra, our NoSQL database of choice, has been deployed on AWS compute (EC2) and AutoScaling Groups (ASG), backed by Block Storage (EBS). This deployment model has been quite robust over the years while presenting its own set of challenges. To make our Cassandra deployment more resilient and reduce the engineering toil associated with our constantly growing infrastructure, we are abstracting Cassandra deployments further away from EC2 with Kubernetes and orchestrating with our Cassandra Operator. We are also leveraging Yelp’s PaaSTA for consistent abstractions and features such as fleet autoscaling with Clusterman, and Spot fleets, features that will be quite useful for an efficient datastore deployment.&lt;/p&gt;
&lt;p&gt;In this talk, we delve into the architecture of our Cassandra operator and the multi-region, multi-AZ clusters it manages, and the strategies we have in place for safe rollouts and zero-downtime migration. We will also discuss the challenges we have faced en route and the design tradeoffs we made. Last but not least, we will share our plans for the future.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;The talk delves not only into the architecture of Yelp’s Cassandra deployment on Kubernetes and the operator, but also into the various challenges that we encountered and our approaches to them. We also talk about how we have integrated this operator with our own PaaS (Platform-as-a-Service), called PaaSTA, and leveraged capabilities such as Spot fleets and Clusterman for significant savings in cloud costs.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;Attendees interested in stateful deployments - databases, streaming pipelines - on Kubernetes and orchestration systems in general, should find this talk interesting. Also, anyone using existing Kubernetes operators or planning on writing an operator should benefit from this talk.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;Among the talks on &lt;a href="https://www.percona.com/live/percona-live-online-full-agenda" target="_blank" rel="noopener noreferrer"&gt;the full agenda&lt;/a&gt;, I am looking forward to the State of Open Source Databases from Peter Zaitsev, to get a snapshot of the current trends and technologies in the database world. Lefred’s talk on the State of the Dolphin should be similarly helpful in keeping up with the state of MySQL, which is a rapidly growing project. Finally, given our current focus on databases and Kubernetes, I am also looking forward to the Comparison of Kubernetes Operators for MySQL and A Step by Step Guide to Using Databases on Containers talks, from Percona and AWS respectively.&lt;/p&gt;</content:encoded>
      <author>Raghavendra Prabhu</author>
      <category>raghu.prabhu</category>
      <category>Events</category>
      <category>Kubernetes</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>Dynamic Tracing for Finding and Solving MariaDB (and MySQL) Performance Problems on Linux - Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/05/dynamic-tracing-for-finding-and-solving-mariadb-and-mysql-performance-problems-on-linux-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/05/dynamic-tracing-for-finding-and-solving-mariadb-and-mysql-performance-problems-on-linux-percona-live-online-talk-preview/</guid>
      <pubDate>Tue, 05 May 2020 21:13:57 UTC</pubDate>
      <description>Percona Live Online Agenda Slot (CORRECTED): Wed 20 May • New York 6:00 a.m. • London 11:00 a.m. • New Delhi 3:30 p.m. Level: Advanced</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot (CORRECTED): Wed 20 May • New York 6:00 a.m. • London 11:00 a.m. • New Delhi 3:30 p.m.&lt;/em&gt; &lt;em&gt;Level: Advanced&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;While troubleshooting MariaDB Server performance problems, it is important to find out where the time is spent in the mysqld process, both on-CPU and off-CPU. The investigation should influence the server we are troubleshooting as little as possible. Performance Schema, introduced in MySQL 5.5 (and inherited by MariaDB from MySQL 5.6), is supposed to provide detailed enough instrumentation for most cases. But it comes with a cost, requires careful sizing of performance counters, and the process of instrumenting the code is not yet complete even for MySQL 8, to say nothing of MariaDB with its third-party storage engines, plugins and libraries like Galera.&lt;/p&gt;
&lt;p&gt;This is when the perf profiler and, on recent Linux kernels (4.9+), the eBPF and bpftrace tools come in handy. Specifically, the perf profiler and the ftrace interface can easily be used while studying MariaDB performance problems. Basic usage steps are presented and several typical real-life use cases (including adding dynamic probes to almost any line of MariaDB code) are discussed. On Linux 4.9+, eBPF is probably the most powerful and least intrusive way to study performance problems. Basic usage of the bcc tools and bpftrace, as well as the main bpftrace features and commands, is demonstrated. Flame Graphs, one of the ways to present and study stack samples collected by perf or bpftrace, are also presented, with examples coming from my experience as a support engineer.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is your talk exciting?&lt;/h3&gt;
&lt;p&gt;It summarizes the experience from my recent years of solving non-trivial performance problems on MariaDB and MySQL systems in production. It turned out that application-level instrumentation of database servers in the MySQL ecosystem is not detailed and dynamic enough for some complex cases. We do not yet see Performance Schema instrumentation on every other line of MySQL code, and even less so is it applied to third-party plugins and technologies like Galera.&lt;/p&gt;
&lt;p&gt;We cannot expect developers to promptly add instrumentation where we need it and release custom binaries for every specific case, even in a company as dynamic as MariaDB Corporation, where we in Services work closely with Engineering every day. That is why I personally got so excited when I found out that Linux, starting from kernels 2.6.x (RHEL 6), provides tools and approaches to add instrumentation almost anywhere, from kernel code to applications, dynamically, at run time, without any change needed in kernel or application code (something I had seen in action with DTrace on Solaris and OS X since 2008 or so).&lt;/p&gt;
&lt;p&gt;I started with the perf profiler back in 2016, as a way to find out why some threads hung for minutes when Performance Schema had not provided the answer, and this is when I first hit Brendan Gregg’s site (&lt;a href="http://www.brendangregg.com/" target="_blank" rel="noopener noreferrer"&gt;http://www.brendangregg.com/&lt;/a&gt;). Since that first real success with perf, I have followed him and the dynamic tracing topic closely, and I try to apply the new tools added in the meantime while working on complex performance issues in MariaDB Support. I have shared my experience both in public and internally at MariaDB Corporation, and got several key MariaDB developers excited and happy about the details they can get from perf and dynamic tracing in general, compared to any other approach. I’d like to convert more engineers to this faith with my presentation.&lt;/p&gt;
&lt;p&gt;I know of companies like Facebook that have entire teams working on custom dynamic tracing tools, and other MySQL Community members have shared their positive experience recently. Linux kernel developers work hard on making dynamic tracing even safer, less intrusive and easier to use. So dynamic tracing (finally) becomes a hot topic that every database expert should follow!&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who would benefit the most from your talk?&lt;/h3&gt;
&lt;p&gt;I think experienced DBAs, as well as everyone working in professional services who cares about performance tuning on Linux, would benefit a lot. But Linux sysadmins and application developers may also gain an entirely new perspective on how to deal with performance problems when their application-level instrumentation does not help to pinpoint the root cause. I consider dynamic tracing and profiling on modern Linux systems (starting from kernels 2.6.x, and especially 4.9+ with eBPF fully functional) a practice worth mastering for any IT professional these days.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What other presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I am really interested in “Diagnosing Memory Utilization, Leaks and Stalls in Production” by Marcos Albe. I expect that my dear friend, former colleague and manager in Percona Support is exploring the same way of approaching performance problems (with Linux dynamic tracing tools) that I do. He probably started exploring this way earlier than me (my first attempts to use the perf profiler while working on support issues date back only to 2016). He is also one of the smartest people I have ever worked with, and he works for a company that deals with complex performance problems on all kinds of forks of open source databases, not only MySQL, in all kinds of environments, including containers. So I’ll surely benefit from the views and experience he shares in this presentation. I hope to learn more about eBPF-based dynamic tracing of memory allocations, cache and register usage, memory flame graphs, and similar tools applied in production to MySQL and other DBMSes.&lt;/p&gt;
&lt;p&gt;I am also looking forward to “Profiling MySQL and MariaDB Hash Join Implementations” by Jim Tommaney. The MySQL optimizer and query optimization in general have been areas of interest for me since 2005, and I’d really like to find out more about the way hash joins are finally implemented, and how they compare to the various BKA-based optimizations we have for that in MariaDB. MySQL 8.0.x is a moving target now, with every minor release introducing new features, and I do not have enough time to keep my knowledge current on this topic. That’s why I expect both a useful review and summary, and details about the changes the recent MySQL 8.0.20 introduced in this area.&lt;/p&gt;
&lt;p&gt;Overall, &lt;a href="https://www.percona.com/live/percona-live-online-full-agenda" target="_blank" rel="noopener noreferrer"&gt;the conference agenda&lt;/a&gt; looks really great, and I am considering taking a full day (if not two) off to spend most of these 24 hours online listening to talks.&lt;/p&gt;</content:encoded>
      <author>Valeriy Kravchuk</author>
      <category>valeriy.kravchuk</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>Expert MariaDB: Utilize MariaDB Server Effectively - Percona Live ONLINE Talk Preview</title>
      <link>https://percona.community/blog/2020/05/04/expert-mariadb-utilize-mariadb-server-effectively-percona-live-online-talk-preview/</link>
      <guid>https://percona.community/blog/2020/05/04/expert-mariadb-utilize-mariadb-server-effectively-percona-live-online-talk-preview/</guid>
      <pubDate>Mon, 04 May 2020 20:30:20 UTC</pubDate>
      <description>Percona Live Online Agenda Slot: Tue 19 May • New York 11:00 p.m. • London 4:00 a.m. (Wed) • New Delhi 8:30 a.m. (Wed) Level: Intermediate</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;&lt;a href="https://www.percona.com/live/conferences" target="_blank" rel="noopener noreferrer"&gt;Percona Live Online&lt;/a&gt; Agenda Slot: Tue 19 May • New York 11:00 p.m. • London 4:00 a.m. (Wed) • New Delhi 8:30 a.m. (Wed)&lt;/em&gt; &lt;em&gt;Level: Intermediate&lt;/em&gt;&lt;/p&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;MariaDB Server 10.4 has been out for some time now (since June 2019) and it has many new features, some of which MySQL does not have. Feature-wise, it is important to know what MariaDB Server 10.4 has (e.g. system tables in the Aria storage engine, the ability to reload SSL certificates without a restart, and more!) and what it lacks compared to MySQL 8.0 (group replication, the X Protocol, etc.). Attendees will become more knowledgeable about how to better manage, observe, and secure their MariaDB Servers.&lt;/p&gt;
&lt;h3 id="why-is-your-talk-exciting"&gt;Why is Your Talk Exciting?&lt;/h3&gt;
&lt;p&gt;I am going to talk about MariaDB Server from a user perspective and why you might consider using this fork of MySQL for your production use cases. After all, it has progressed differently from MySQL and has features that are similar, sometimes implemented differently, yet it also has new features that MySQL may not get to, e.g. Oracle compatibility.&lt;/p&gt;
&lt;p&gt;It is also likely that we can talk a little about MariaDB Server 10.5, which should be just around the corner, as it is currently in beta. There are plenty of improvements around JSON, more information reported in the threadpool, a new Amazon S3 storage engine, plenty of InnoDB improvements, Galera 4 inconsistency voting, and more.&lt;/p&gt;
&lt;h3 id="who-would-benefit-the-most-from-your-talk"&gt;Who Would Benefit the Most From Your Talk?&lt;/h3&gt;
&lt;p&gt;Are you MariaDB curious? You would enjoy this talk, as it will only cover features not already present in MySQL. After all, it doesn’t matter how things are implemented — this is totally from a user perspective, so if you’re already used to MySQL, find out what &lt;em&gt;else&lt;/em&gt; you will get from MariaDB Server.&lt;/p&gt;
&lt;h3 id="what-other-presentations-are-you-most-looking-forward-to"&gt;What Other Presentations Are You Most Looking Forward To?&lt;/h3&gt;
&lt;p&gt;I’m interested in the ProxySQL talks, though &lt;a href="https://www.percona.com/live/percona-live-online-full-agenda" target="_blank" rel="noopener noreferrer"&gt;the entire agenda&lt;/a&gt; is great.&lt;/p&gt;</content:encoded>
      <author>Colin Charles</author>
      <category>Colin.Charles</category>
      <category>Events</category>
      <category>MariaDB</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_5f15f9d3f957c60b.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Social-PL-Online-2020-1_hu_6e248db3cecf756e.jpg" medium="image"/>
    </item>
    <item>
      <title>How to contribute Dashboards to PMM</title>
      <link>https://percona.community/blog/2020/05/04/how-to-contribute-dashboards-to-pmm/</link>
      <guid>https://percona.community/blog/2020/05/04/how-to-contribute-dashboards-to-pmm/</guid>
      <pubDate>Mon, 04 May 2020 14:54:56 UTC</pubDate>
      <description>Have you already contributed to Percona’s open-source products or perhaps you wanted to try doing so?</description>
      <content:encoded>&lt;p&gt;Have you already contributed to Percona’s open-source products or perhaps you wanted to try doing so?&lt;/p&gt;
&lt;p&gt;I will show you how to become a contributor to a popular open-source product from Percona in just a few hours. You don’t need any serious developer skills.&lt;/p&gt;
&lt;p&gt;We earlier explained how to contribute to PMM documentation in &lt;a href="https://www.percona.com/community-blog/2020/01/28/how-to-contribute-to-pmm-documentation/" target="_blank" rel="noopener noreferrer"&gt;our last post&lt;/a&gt;. Now we will contribute to PMM itself, namely to its Dashboards. Dashboards are an important part of PMM; they are seen and used by thousands of users, so your contribution may benefit many others.&lt;/p&gt;
&lt;p&gt;You can view the latest version of our demo at &lt;a href="https://pmmdemo.percona.com/graph/" target="_blank" rel="noopener noreferrer"&gt;https://pmmdemo.percona.com/graph/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The purpose of this latest article is to introduce you to the process of making changes to Dashboards in PMM, such as creating a new dashboard or improving an existing one. If you want to become a contributor, you will need to repeat the steps from &lt;a href="https://www.percona.com/community-blog/2020/01/28/how-to-contribute-to-pmm-documentation/" target="_blank" rel="noopener noreferrer"&gt;my earlier post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You need to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Have PMM installed on your server. PMM is easy to install via Docker.&lt;/li&gt;
&lt;li&gt;Have a GitHub account and install Git on your computer.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_1_hu_1224c9e26680dbd7.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_1_hu_a4961c23f2cda340.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_1_hu_2cbd147be7e3383a.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_1.png" alt="How to contribute Dashboards to PMM" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="what-kind-of-contribution-should-i-make"&gt;What kind of contribution should I make?&lt;/h2&gt;
&lt;p&gt;Of course, this is the first thing to decide. PMM is a great product that many developers are working on, and it uses &lt;a href="https://perconadev.atlassian.net/projects/PMM/issues/PMM-4923?filter=allopenissues" target="_blank" rel="noopener noreferrer"&gt;JIRA&lt;/a&gt; to track development tasks. You can:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Explore the tasks and choose an interesting one&lt;/li&gt;
&lt;li&gt;Create your own task from scratch&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When I used PMM, I noticed that many charts have useful tooltips. Although you can make any sort of contribution, in this article I will demonstrate the simplest type of contribution: a tooltip.&lt;/p&gt;
&lt;p&gt;Here’s the value of tooltips:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Tooltips - they are written by experts, for consumption by non-experts.  One of Percona’s value-add is to write good tooltips that are useful. We (Perconians) know the technologies and we have people who are used to simplifying complex topics.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_2_hu_ac51fe1a53140671.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_2_hu_d0a0dd3a02f6dae5.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_2_hu_f5bdf90544a46f02.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_2.png" alt="Tooltips" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;There are a lot of widgets that haven’t been described yet, so tooltips would greatly improve the user experience here. You can open the widget settings and do the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;See the settings, functions and parameters on which the chart is built&lt;/li&gt;
&lt;li&gt;Study the documentation for these parameters&lt;/li&gt;
&lt;li&gt;Write a tooltip&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_3_hu_d69d839c5295f200.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_3_hu_217d4ad656de286d.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_3_hu_fa43f52ea226c9ea.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_3.png" alt="PMM Dashboard Settings" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now that we have defined what we’re about to do, let’s make a tooltip for one of the charts.&lt;/p&gt;
&lt;p&gt;I opened JIRA and created a task where I described what I would do:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tooltips: Prometheus dashboards: Head Block: Update graph panel description&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-5053" target="_blank" rel="noopener noreferrer"&gt;https://perconadev.atlassian.net/browse/PMM-5053&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_4_hu_f6f930a9b5139277.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_4_hu_18ac7eff3bc7d97f.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_4_hu_13820ee010699bd0.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_4.png" alt="PMM Dashboards Jira Issue" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="well-find-a-repository-for-the-dashboards"&gt;We’ll find a repository for the Dashboards&lt;/h2&gt;
&lt;p&gt;We’ll make changes to the code.&lt;/p&gt;
&lt;p&gt;PMM is big, so for convenience it is split across a number of GitHub repositories, which can all be found from the main repository: &lt;a href="https://github.com/percona/pmm/tree/PMM-2.0" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm/tree/PMM-2.0&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Since I will be contributing to Dashboards, I will need a Grafana Dashboard repository: &lt;a href="https://github.com/percona/grafana-dashboards" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/grafana-dashboards&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Next, I fork this repository into my GitHub account. A fork is needed so I can check my changes before sending them to the main repository.&lt;/p&gt;
&lt;p&gt;By the way, more than 600 people have already done it. You can do it, too! :)
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_5_hu_243442a762395303.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_5_hu_5838e5871721a3aa.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_5_hu_cdd4cc556f79a476.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_5.png" alt="PMM Dashboards Contribution GitHub" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="lets-study-the-structure-of-the-dashboard"&gt;Let’s study the structure of the Dashboard&lt;/h2&gt;
&lt;p&gt;All Dashboards are located in the “dashboards” folder and each dashboard is a JSON file.&lt;/p&gt;
&lt;p&gt;An example can be found here: &lt;a href="https://github.com/percona/grafana-dashboards/tree/master/dashboards" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/grafana-dashboards/tree/master/dashboards&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Next I have to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Find the JSON file I need&lt;/li&gt;
&lt;li&gt;Understand what needs to be changed&lt;/li&gt;
&lt;li&gt;Change it&lt;/li&gt;
&lt;li&gt;Commit and send a Pull Request for review&lt;/li&gt;
&lt;li&gt;Celebrate&lt;/li&gt;
&lt;/ol&gt;
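&lt;p&gt;Before committing, it can be worth checking that every dashboard file still parses as valid JSON. Here is a minimal sketch of such a check (the folder path and function name are my own illustrative choices, not part of PMM):&lt;/p&gt;

```python
import json
from pathlib import Path

def check_dashboards(folder="dashboards"):
    """Return (filename, error) pairs for dashboard files that are not valid JSON."""
    bad = []
    for path in sorted(Path(folder).glob("*.json")):
        try:
            json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError as err:
            bad.append((path.name, str(err)))
    return bad
```

&lt;p&gt;Running this from the root of the grafana-dashboards clone and getting an empty list back means every dashboard is at least syntactically valid.&lt;/p&gt;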
&lt;p&gt;It is important to note that all contributions are carefully reviewed. When I wrote this article, I changed only a few lines, but even this change was reviewed over several days by different experts.&lt;/p&gt;
&lt;h2 id="changing-the-dashboard-is-easy"&gt;Changing the Dashboard is easy&lt;/h2&gt;
&lt;p&gt;I don’t have to know JSON: Dashboards can be changed directly in the PMM interface, and all settings are saved as JSON. Each chart has a “Panel JSON” button, which displays the JSON code.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/05/Contribute_to_dashboards_6.png" alt="Changing the Dashboard is easy" /&gt;&lt;/figure&gt; That way, I can:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;View the chart settings&lt;/li&gt;
&lt;li&gt;Make the necessary changes&lt;/li&gt;
&lt;li&gt;Save and get the necessary JSON file&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you look at the chart settings, you can understand what functions and arguments they use and check out the documentation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/docs/prometheus/latest/querying/functions/" target="_blank" rel="noopener noreferrer"&gt;https://prometheus.io/docs/prometheus/latest/querying/functions/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/docs/prometheus/latest/querying/operators/" target="_blank" rel="noopener noreferrer"&gt;https://prometheus.io/docs/prometheus/latest/querying/operators/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Review the documentation so you can write a correct description for the chart, or make other improvements to it.&lt;/p&gt;
&lt;p&gt;As a next step, I need to add a value to the Description field. As soon as I add it, I immediately get a tooltip for the chart.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_7_hu_8f151d519b2757e7.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_7_hu_1ca571d6b48e164d.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_7_hu_a643fa4e50afc16a.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_7.png" alt=" Description field" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="save-the-result"&gt;Save the result&lt;/h2&gt;
&lt;p&gt;I add the Description and save the chart. Then I open the Panel JSON and find my value in the “description” field. It’s simple. Now I need to move the change to the Git repository.&lt;/p&gt;
&lt;p&gt;If I had created a new Dashboard or chart, it would have been easier to transfer the entire file to the repository. But since I changed only one line, I will move only that line.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_8_hu_f2b335c81679041c.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_8_hu_80b1b666ec5112bb.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_8_hu_65b8986e18361b25.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_8.png" alt="JSON" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I had cloned my fork of the repository to my computer beforehand.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I open the dashboards/Prometheus.json file&lt;/li&gt;
&lt;li&gt;I find the “title” block: “Head Block”&lt;/li&gt;
&lt;li&gt;I add a line with “description” and save the file&lt;/li&gt;
&lt;/ol&gt;
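&lt;p&gt;For illustration, the resulting fragment of dashboards/Prometheus.json would look roughly like this (the description text is made up, only the “description” line is new, and a real panel contains many more fields than shown):&lt;/p&gt;

```json
{
  "title": "Head Block",
  "description": "Illustrative tooltip text explaining what this chart shows."
}
```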
&lt;h2 id="working-with-the-pmm-repository"&gt;Working with the PMM repository&lt;/h2&gt;
&lt;p&gt;I have already described working with the repository in detail in the previous article (link), and you can also use the instructions in the repository itself:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/percona/grafana-dashboards/blob/master/CONTRIBUTING.md" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/grafana-dashboards/blob/master/CONTRIBUTING.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I created a separate branch, gave the commit a proper name, and pushed it to my repository.&lt;/p&gt;
&lt;p&gt;I then made a Pull Request to the main grafana-dashboards repository.&lt;/p&gt;
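&lt;p&gt;The branch-and-commit flow above can be sketched as follows. This is an illustrative replay in a throwaway repository; in a real contribution you run the same commands inside the clone of your grafana-dashboards fork, and the branch name here follows this article’s example:&lt;/p&gt;

```shell
# Illustrative replay in a scratch repository; in practice you work
# inside a clone of your grafana-dashboards fork.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name  "Contributor"
mkdir -p dashboards
# Stand-in for the edited dashboard file containing the new "description" line.
printf '{ "title": "Head Block", "description": "tooltip" }\n' | tee dashboards/Prometheus.json
git checkout -q -b PMM-5053_dbazhenov_tooltip
git add dashboards/Prometheus.json
git commit -q -m "PMM-5053 Add tooltip for the Head Block chart"
# Then: git push origin PMM-5053_dbazhenov_tooltip, and open the Pull Request.
git log --oneline
```

&lt;p&gt;The push and Pull Request steps run against your fork on GitHub, as described in the repository’s CONTRIBUTING.md.&lt;/p&gt;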
&lt;p&gt;I really liked the way contributions are checked in this repository. I’ll walk you through the steps.&lt;/p&gt;
&lt;h3 id="contributor-license-agreement"&gt;Contributor License Agreement&lt;/h3&gt;
&lt;p&gt;The first step is to sign the Contributor License Agreement (the license/cla check). This is done with your GitHub account and a single button: simply read and agree.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_9_hu_fef24fe560f0821a.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_9_hu_52629108d3db8c68.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_9_hu_64fd6b4c20612356.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_9.png" alt="Contributor License Agreement" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="automated-code-review"&gt;Automated code review&lt;/h3&gt;
&lt;p&gt;Your branch will pass an automated code check using the &lt;a href="https://codecov.io/" target="_blank" rel="noopener noreferrer"&gt;Codecov&lt;/a&gt; service.&lt;/p&gt;
&lt;p&gt;You will be able to follow the process and see the result: “Codacy/PR Quality Review – Up to standards”.&lt;/p&gt;
&lt;p&gt;A passing pull request:
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Codecov_hu_e4f19d930417c19b.png 480w, https://percona.community/blog/2020/05/Codecov_hu_f54b89c83973ad05.png 768w, https://percona.community/blog/2020/05/Codecov_hu_df2850f90d7f5f63.png 1400w"
src="https://percona.community/blog/2020/05/Codecov.png" alt="Codecov" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="continuous-integration-ci"&gt;Continuous integration (CI)&lt;/h3&gt;
&lt;p&gt;After each commit, Jenkins CI will try to build PMM in order to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make sure that your changes do not break PMM&lt;/li&gt;
&lt;li&gt;Run automated tests&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It takes a few minutes.&lt;/p&gt;
&lt;p&gt;I’m sure you’ll pass all the automatic checks.&lt;/p&gt;
&lt;p&gt;You can try to start the build yourself using the instructions in the repository.&lt;/p&gt;
&lt;p&gt;If you are interested in these processes, please let me know in the comments.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Jenkins-1_hu_a80eb4198a908216.png 480w, https://percona.community/blog/2020/05/Jenkins-1_hu_3aa04e326a9f6f21.png 768w, https://percona.community/blog/2020/05/Jenkins-1_hu_3e77879a0f61ced8.png 1400w"
src="https://percona.community/blog/2020/05/Jenkins-1.png" alt="Jenkins" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="expert-review-and-code-review"&gt;Expert review and code review&lt;/h2&gt;
&lt;p&gt;Percona experts review all code changes. The more changes there are, the more experts will be involved.&lt;/p&gt;
&lt;p&gt;While I was writing this article, I made several contributions to the PMM Dashboards.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;When I changed one line to add a tooltip, my code was reviewed by two people: the person responsible for Dashboards and their team lead.&lt;/li&gt;
&lt;li&gt;When I added a 50-line instruction, it needed to be reviewed by four people.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;After each task is completed in JIRA, it is checked by the QA department.&lt;/p&gt;
&lt;p&gt;You should not worry about the review process. Percona experts are very friendly; they will write recommendations directly in GitHub, and may even correct some lines for you right away.&lt;/p&gt;
&lt;p&gt;If you have any questions, just text me, I’ll try to help.&lt;/p&gt;
&lt;p&gt;I received a few recommendations, made some changes and my contribution was accepted.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_14_hu_d02ee39bfbf43cd9.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_14_hu_6c7c30e65f7fb9e2.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_14_hu_131b30fd6a3debbf.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_14.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="results"&gt;Results&lt;/h2&gt;
&lt;p&gt;I became a PMM contributor by improving one of the Dashboards. I spent about 30–60 minutes a day, and the whole process took less than a week.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/05/Contribute_to_dashboards_15_hu_1a5aa20cc996caf5.png 480w, https://percona.community/blog/2020/05/Contribute_to_dashboards_15_hu_1357c6d6546f41b9.png 768w, https://percona.community/blog/2020/05/Contribute_to_dashboards_15_hu_9bbdea43a5e32fda.png 1400w"
src="https://percona.community/blog/2020/05/Contribute_to_dashboards_15.png" alt="Result" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In the process, I was able to add instructions for future contributors (link). You can improve this manual, too.&lt;/p&gt;
&lt;p&gt;I urge you to become a contributor. If you need help, just email me.&lt;/p&gt;
&lt;p&gt;More ideas for contributions can be found here: &lt;a href="https://www.percona.com/community/contributions/pmm" target="_blank" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="my-references"&gt;My references&lt;/h2&gt;
&lt;p&gt;Home page of the PMM contributor: &lt;a href="https://www.percona.com/community/contributions/pmm" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/community/contributions/pmm&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;An article on how to become a documentation contributor:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Instructions for Contributors: Issue at JIRA: &lt;a href="https://perconadev.atlassian.net/browse/PMM-5053" target="_blank" rel="noopener noreferrer"&gt;https://perconadev.atlassian.net/browse/PMM-5053&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A branch in my repository: &lt;a href="https://github.com/dbazhenov/grafana-dashboards/tree/PMM-5053_dbazhenov_tooltip" target="_blank" rel="noopener noreferrer"&gt;https://github.com/dbazhenov/grafana-dashboards/tree/PMM-5053_dbazhenov_tooltip&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pull Request to PMM repository &lt;a href="https://github.com/percona/grafana-dashboards/pull/524" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/grafana-dashboards/pull/524&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Confirmed by CLA: &lt;a href="https://cla-assistant.percona.com/percona/grafana-dashboards?pullRequest=524" target="_blank" rel="noopener noreferrer"&gt;https://cla-assistant.percona.com/percona/grafana-dashboards?pullRequest=524&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>daniil.bazhenov</category>
      <category>Entry Level</category>
      <category>Information</category>
      <category>Intermediate Level</category>
      <category>Open Source Databases</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/05/Contribute_to_dashboards_1_hu_339f1f46980870f7.jpg"/>
      <media:content url="https://percona.community/blog/2020/05/Contribute_to_dashboards_1_hu_fca739b2b4e871b7.jpg" medium="image"/>
    </item>
    <item>
      <title>Unexpected slow ALTER TABLE in MySQL 5.7</title>
      <link>https://percona.community/blog/2020/04/23/unexpected-slow-alter-table-mysql-5-7/</link>
      <guid>https://percona.community/blog/2020/04/23/unexpected-slow-alter-table-mysql-5-7/</guid>
      <pubDate>Thu, 23 Apr 2020 15:47:21 UTC</pubDate>
<description>Usually one would expect that ALTER TABLE with ALGORITHM=COPY will be slower than the default ALGORITHM=INPLACE. In this blog post we describe a case where this is not so.</description>
      <content:encoded>&lt;p&gt;Usually one would expect that ALTER TABLE with ALGORITHM=COPY will be slower than the default ALGORITHM=INPLACE. In this blog post we describe a case where this is not so.&lt;/p&gt;
&lt;p&gt;One of the reasons for this behavior is a lesser-known limitation of ALTER TABLE (with the default ALGORITHM=INPLACE): it avoids redo logging. As a result, all dirty pages of the altered table/tablespace have to be flushed before the ALTER TABLE can complete.&lt;/p&gt;
&lt;h2 id="some-history"&gt;Some history&lt;/h2&gt;
&lt;p&gt;A long time ago, all “ALTER TABLE” (DDLs) operations in MySQL were implemented by creating a new table with the new structure, then copying the content of the original table to the new table, and finally renaming the table. During this operation the table was locked to prevent data inconsistency.&lt;/p&gt;
&lt;p&gt;Then, for InnoDB tables, new algorithms were introduced that do not involve a full table copy, and some operations do not take a table-level lock – first the online add-index algorithm for InnoDB, then non-blocking column addition, or &lt;em&gt;online DDLs&lt;/em&gt;. For the list of all online DDLs in MySQL 5.7 you can refer to this &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html" target="_blank" rel="noopener noreferrer"&gt;document&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="the-problem"&gt;The problem&lt;/h2&gt;
&lt;p&gt;Online DDLs are great for common operations like adding/dropping a column, &lt;strong&gt;however we have found that they can be significantly slower&lt;/strong&gt;. For example, adding a field to a large table on a “beefy” server with 128 GB of RAM can take an unexpectedly long time.&lt;/p&gt;
&lt;p&gt;In one of our “small” Percona Servers, it took a little more than 5 min to add a column to the 13 GB InnoDB table. Yet on another “large” Percona Server, where the same table was 30 GB in size, it took more than 4 hours to add the same column.&lt;/p&gt;
&lt;h3 id="investigating-the-issue"&gt;Investigating the issue&lt;/h3&gt;
&lt;p&gt;After verifying that the disk I/O throughput is the same on both servers, we investigated the reason for such a large difference in the duration of the ALTER TABLE helios ADD COLUMN query, using &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM)&lt;/a&gt; to record and review performance.&lt;/p&gt;
&lt;p&gt;On the smaller server, where ALTER TABLE was faster, the relevant PMM monitoring plots show:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/faster-alter-table.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In our Percona Server version 5.7, ALTER TABLE helios ADD COLUMN  was executed in place. On the left, we can observe a steady rate of the table rebuild, followed by four spikes corresponding to rebuilding of the four indices.&lt;/p&gt;
&lt;p&gt;What is also interesting is that ALTER TABLE with the INPLACE ALGORITHM (which will be the default for adding a field) &lt;strong&gt;will need to force flushing of all dirty pages and wait until it is done&lt;/strong&gt;. This is a much less known fact and very sparsely documented. The reason for this is that undo and redo logging is disabled for this operation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;No undo logging or associated redo logging is required for ALGORITHM=INPLACE. These operations add overhead to DDL statements that use ALGORITHM=COPY. &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html" target="_blank" rel="noopener noreferrer"&gt;https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;In this situation the only option is to flush all dirty pages, otherwise the data can become inconsistent. There’s a special treatment to be seen for ALTER TABLE in &lt;a href="https://github.com/percona/percona-server/blob/5.7/storage/innobase/buf/buf0flu.cc#L3907" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Back to our situation – during table rebuild, InnoDB buffer pool becomes increasingly dirty:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/increasingly-dirty-buffer-pool-1.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The graph shows a peak at about 9 GB, corresponding to the table data size. Originally we were under the impression that, as dirty pages are flushed to disk, the volume of in-memory dirty pages decreases at the rate determined by the Percona adaptive flushing algorithm. It turns out that flushing by ALTER and adaptive flushing have no relation: both happen concurrently. Flushing by ALTER is single-page flushing, done by iterating over the flush list and flushing pages of the desired space_id one by one. That probably explains why a server with more RAM can be slower to flush: it has to scan a larger list.&lt;/p&gt;
&lt;p&gt;After the last buffer pool I/O request (from the last index build) ends, the algorithm increases the rate of flushing for the remaining dirty pages. The ALTER TABLE finishes when there are no more dirty pages left in the memory.&lt;/p&gt;
&lt;p&gt;You can see the six-fold increase in the I/O rate clearly in the plot below:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/six-fold-increase.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In contrast, on the “large” server, ALTER TABLE behaved differently, although at the beginning it proceeded in a similar way:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/alter-table-different-on-larger-database.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;On the left, we can observe a steady rate of the table rebuild, followed by four spikes corresponding to rebuilding of the four table indices. During table rebuild the buffer pool became increasingly dirty:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/table-rebuild-increasingly-dirty.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;After the 21 GB of table data, there are four kinks corresponding to the four index builds. It takes about twenty minutes to complete this part of the ALTER TABLE processing of the 30 GB table. To some degree this is comparable to the roughly four minutes needed to complete the same part of the ALTER TABLE processing of the 13 GB table. However, the adaptive flushing algorithm behaved differently on that server: it took more than four hours to complete flushing the dirty pages from memory
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/time-to-clear-pages.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is because in contrast to the “small” server, the buffer pool I/O remained extremely low:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/low-buffer-pool-io.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is not a hardware limitation, as PMM monitoring shows that at other times, the “large” server demonstrated ten times higher buffer pool I/O rates, e.g.:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/high-buffer-pool-io.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Beware the slower performance of ALTER TABLE … ADD COLUMN (the default algorithm is INPLACE). On a large server the difference can be significant: the smaller the buffer pool, the smaller the flush list and the faster the flushing, because the ALTER TABLE has a smaller flush list to iterate. In some cases it may be better (and more predictable in timing) to use ALTER TABLE with ALGORITHM=COPY.&lt;/p&gt;
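&lt;p&gt;A hedged example of forcing the copy algorithm (the table name comes from this article; the column definition and lock clause are illustrative):&lt;/p&gt;

```sql
-- Force the copy algorithm explicitly; this rebuilds the table through
-- a copy, which uses redo logging and so avoids the forced flush of all
-- dirty pages that ALGORITHM=INPLACE requires at the end.
ALTER TABLE helios
  ADD COLUMN note VARCHAR(64) DEFAULT NULL,
  ALGORITHM=COPY, LOCK=SHARED;
```

&lt;p&gt;Note that COPY blocks concurrent writes to the table for the duration of the operation, so this trade-off only makes sense when the INPLACE flushing phase dominates.&lt;/p&gt;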
&lt;h3 id="about-virtualhealth"&gt;About VirtualHealth&lt;/h3&gt;
&lt;p&gt;VirtualHealth created HELIOS, the first SaaS solution purpose-built for value-based healthcare. Utilized by some of the most innovative health plans in the country to manage millions of members, HELIOS streamlines person-centered care with intelligent case and disease management workflows, unmatched data integration, broad-spectrum collaboration, patient engagement, and configurable analytics and reporting. Named one of the fastest-growing companies in North America by Deloitte in 2018 and 2019, VirtualHealth empowers healthcare organizations to achieve enhanced outcomes, while maximizing efficiency, improving transparency, and lowering costs. For more information, visit &lt;a href="http://www.virtualhealth.com/" target="_blank" rel="noopener noreferrer"&gt;www.virtualhealth.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Alexander Rubin</author>
      <author>Alexandre Vaniachine</author>
      <category>Intermediate Level</category>
      <category>MySQL</category>
      <category>Percona Server for MySQL</category>
      <category>performance</category>
      <media:thumbnail url="https://percona.community/blog/2020/04/alter-table-different-on-larger-database_hu_290194340bced2df.jpg"/>
      <media:content url="https://percona.community/blog/2020/04/alter-table-different-on-larger-database_hu_b5b7f0161f8a7687.jpg" medium="image"/>
    </item>
    <item>
      <title>Our Offer to Online Meetups and Community Leaders</title>
      <link>https://percona.community/blog/2020/04/07/our-offer-to-online-meetups-and-community-leaders/</link>
      <guid>https://percona.community/blog/2020/04/07/our-offer-to-online-meetups-and-community-leaders/</guid>
      <pubDate>Tue, 07 Apr 2020 10:52:51 UTC</pubDate>
<description> Percona’s Community team organizes our speakers at in-person events around the world, such as Percona Live, Percona University, and events sponsored by other organizations. However, like those of everyone else around the world, our plans are on hold due to the Coronavirus pandemic.</description>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/online-meetups-percona-linkedin.jpg" alt="Offer of Percona Speakers for events" /&gt;&lt;/figure&gt;Percona’s Community team organizes our speakers at in-person events around the world, such as Percona Live, Percona University, and events sponsored by other organizations. However, like those of everyone else around the world, our plans are on hold due to the Coronavirus pandemic.&lt;/p&gt;
&lt;p&gt;Perhaps you, like many others, are organizing online events, such as &lt;a href="https://help.meetup.com/hc/en-us/articles/360040609112" target="_blank" rel="noopener noreferrer"&gt;virtual meetups on Meetup.com&lt;/a&gt;. We can help you by making Percona’s team of experienced and well-known speakers available for your event. We have experts on key open-source database topics, including Kubernetes, monitoring, high availability, and more.&lt;/p&gt;
&lt;p&gt;Many of our speakers have spoken at major tech conferences before. These include experts like &lt;a href="https://www.linkedin.com/in/peterzaitsev/" target="_blank" rel="noopener noreferrer"&gt;Peter Zaitsev&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/askdba/" target="_blank" rel="noopener noreferrer"&gt;Alkin Tezuysal&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/ibrarahmed74/" target="_blank" rel="noopener noreferrer"&gt;Ibrar Ahmed&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/tylerduzan/" target="_blank" rel="noopener noreferrer"&gt;Tyler Duzan&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/svetsmirnova/" target="_blank" rel="noopener noreferrer"&gt;Sveta Smirnova&lt;/a&gt;, with availability across many timezones. Further, if you invite a Percona speaker to present virtually, Percona will help promote your events on our blog and social networks.&lt;/p&gt;
&lt;p&gt;To get started, just email &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; anytime.&lt;/p&gt;
&lt;h2 id="percona-live-amsterdam-2019"&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/04/ple-1.jpg" alt="Percona Live Amsterdam 2019" /&gt;&lt;/figure&gt;
Percona Live Amsterdam 2019&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Desk Photo by: &lt;a href="https://burst.shopify.com/@sarahpflugphoto" target="_blank" rel="noopener noreferrer"&gt;Sarah Pflug&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Daniil Bazhenov</author>
      <category>daniil.bazhenov</category>
      <category>Events</category>
      <category>Information</category>
      <category>online meetups</category>
      <category>percona speakers</category>
      <media:thumbnail url="https://percona.community/blog/2020/04/online-meetups-percona-linkedin_hu_6db2be1eb222853b.jpg"/>
      <media:content url="https://percona.community/blog/2020/04/online-meetups-percona-linkedin_hu_8e18047ade9254f6.jpg" medium="image"/>
    </item>
    <item>
      <title>Finding MySQL Scaling Problems Using perf</title>
      <link>https://percona.community/blog/2020/02/05/finding-mysql-scaling-problems-using-perf/</link>
      <guid>https://percona.community/blog/2020/02/05/finding-mysql-scaling-problems-using-perf/</guid>
      <pubDate>Wed, 05 Feb 2020 16:18:14 UTC</pubDate>
      <description>The thing I wish I’d learned while still a DBA is how to use perf. Conversely after moving to a developer role, getting access to real external client workloads to get a perf recording directly is rare. To bridge this gap, I hope to encourage a bit of perf usage to help DBAs report bugs/feature requests in more detail to MySQL developers, who can then serve your needs better.</description>
      <content:encoded>&lt;p&gt;The thing I wish I’d learned while still a DBA is how to use &lt;a href="https://perf.wiki.kernel.org/index.php/Main_Page" target="_blank" rel="noopener noreferrer"&gt;perf&lt;/a&gt;. Conversely after moving to a developer role, getting access to real external client workloads to get a perf recording directly is rare. To bridge this gap, I hope to encourage a bit of perf usage to help DBAs report bugs/feature requests in more detail to MySQL developers, who can then serve your needs better.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/01/ricardo-gomez-angel-87vUJY3ntyI-unsplash.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;A recent client request showed how useful perf is in exposing the areas of MySQL that are otherwise well tuned, but can still be in need of coding improvements that increase throughput. The client had a &lt;a href="https://sourceforge.net/projects/tpccruner/" target="_blank" rel="noopener noreferrer"&gt;TPCCRunner&lt;/a&gt; (variant) workload that they wanted to run on a &lt;a href="https://www.ibm.com/it-infrastructure/power/power9" target="_blank" rel="noopener noreferrer"&gt;Power 9&lt;/a&gt; CPU, in &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html#isolevel_read-committed" target="_blank" rel="noopener noreferrer"&gt;READ-COMMITTED&lt;/a&gt; mode, and they received less performance than they hoped. With 2 sockets, 20 cores per socket, 4 threads per core, and 256 GB of RAM in total, it has enough resources.&lt;/p&gt;
&lt;p&gt;With such abundance of resources, the perf profile exposed code bottlenecks not normally seen.&lt;/p&gt;
&lt;p&gt;The principles driving MySQL development for a considerable time have been to a) maintain correctness, and b) deliver performance, usually meaning the CPU should be the bottleneck. The whole reason for large innodb buffer pools, innodb MVCC / LSN, group commit, table caches, thread caches, indexes, query planner etc, is to ensure that all hot data is in memory, ready to be processed optimally in the most efficient way by the CPU.&lt;/p&gt;
&lt;p&gt;Based on this principle, without a requirement to sync to persistent storage for durability, a read-mostly SQL load should scale linearly up to the CPU capacity. Ideally, after the CPU capacity has been reached, the throughput should stay at the capacity limit and not degrade. Practical overheads of thread management mean this is never perfectly achieved. However, it is the goal.&lt;/p&gt;
&lt;h2 id="steps-to-using-perf"&gt;Steps to using perf&lt;/h2&gt;
&lt;p&gt;To install and use perf, use the following steps:&lt;/p&gt;
&lt;h4 id="1-install-perf"&gt;1. Install perf&lt;/h4&gt;
&lt;p&gt;This is a standard package and is closely tied to the Linux kernel version. The package name varies per distro:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ubuntu: &lt;a href="https://packages.ubuntu.com/bionic/linux-tools-common" target="_blank" rel="noopener noreferrer"&gt;linux-tools-common&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Debian: &lt;a href="https://packages.debian.org/buster/linux-base" target="_blank" rel="noopener noreferrer"&gt;linux-base&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;RHEL / Centos / Fedora: perf&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Distributions normally set the &lt;a href="http://man7.org/linux/man-pages/man8/sysctl.8.html" target="_blank" rel="noopener noreferrer"&gt;sysctl&lt;/a&gt; &lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/sysctl/kernel.html#perf-event-paranoid" target="_blank" rel="noopener noreferrer"&gt;&lt;em&gt;kernel.perf_event_paranoid&lt;/em&gt;&lt;/a&gt; to a level which is hard to use (or &lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html" target="_blank" rel="noopener noreferrer"&gt;exploit&lt;/a&gt;) and this may need to be adjusted to obtain our recording. Large perf recordings due to hardware threads can require file descriptors and memory, and their limits may need to be increased with care (see &lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html#perf-events-perf-resource-control" target="_blank" rel="noopener noreferrer"&gt;kernel manual&lt;/a&gt;).&lt;/p&gt;
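&lt;p&gt;A quick way to check the current restriction level before recording (a sketch; typical values range from -1, least restrictive, to 2 or higher, most restrictive):&lt;/p&gt;

```shell
# Read the current perf_event_paranoid level; higher values restrict
# what unprivileged users may record.
cat /proc/sys/kernel/perf_event_paranoid
# To relax it temporarily for a profiling session (run as root):
#   sysctl -w kernel.perf_event_paranoid=-1
# Check the open-file-descriptor limit relevant for large recordings:
ulimit -n
```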
&lt;h4 id="2-install-debug-symbols-aka-debug-info-for-mysql"&gt;2. Install debug symbols (a.k.a. debug info) for MySQL&lt;/h4&gt;
&lt;p&gt;Debug symbols mapping memory addresses to real server code can assist greatly in understanding the recorded results. The debug info needs to map to the exact build of MySQL (both version number and its origin).&lt;/p&gt;
&lt;p&gt;Distros provide debug information in separate package repositories (distribution instructions: &lt;a href="https://wiki.ubuntu.com/Debug%20Symbol%20Packages" target="_blank" rel="noopener noreferrer"&gt;Ubuntu&lt;/a&gt;, &lt;a href="https://wiki.debian.org/AutomaticDebugPackages" target="_blank" rel="noopener noreferrer"&gt;Debian&lt;/a&gt;, &lt;a href="https://access.redhat.com/solutions/9907" target="_blank" rel="noopener noreferrer"&gt;RHEL&lt;/a&gt;, &lt;a href="https://fedoraproject.org/wiki/StackTraces#What_are_debuginfo_rpms.2C_and_how_do_I_get_them.3F" target="_blank" rel="noopener noreferrer"&gt;Fedora&lt;/a&gt;) and MySQL, &lt;a href="https://mariadb.com/kb/en/library/how-to-produce-a-full-stack-trace-for-mysqld/#installing-debug-info-packages-on-linux" target="_blank" rel="noopener noreferrer"&gt;MariaDB&lt;/a&gt; and Percona provide debug info packages in their repositories without additional configuration.&lt;/p&gt;
&lt;p&gt;If compiling from source, the default cmake option -DCMAKE_BUILD_TYPE=RelWithDebInfo includes debug info, as the name suggests.&lt;/p&gt;
&lt;h4 id="3-ensure-that-your-table-structures-and-queries-are-sane"&gt;3. Ensure that your table structures and queries are sane.&lt;/h4&gt;
&lt;p&gt;MySQL works well when the database table structures, indexes, and queries are in a &lt;code&gt;natural&lt;/code&gt; simple form. Requests asking MySQL developers to make poorly designed table structures/queries perform better will attract a low priority, as such changes can add overhead to simple queries.&lt;/p&gt;
&lt;h4 id="4-ensure-that-you-have-tuned-the-database-for-the-workload"&gt;4. Ensure that you have tuned the database for the workload.&lt;/h4&gt;
&lt;p&gt;MySQL has a lot of system variables, and using the performance schema and status variables assists in creating an optimally tuned MySQL instance before beginning perf measurements.&lt;/p&gt;
&lt;h4 id="5-ensure-that-the-active-data-is-off-disk"&gt;5. Ensure that the active data is off disk&lt;/h4&gt;
&lt;p&gt;To ensure your measurement is at its maximum, have the hot part of the data loaded into memory. This lets perf focus on recording CPU-related areas under stress, rather than waits for loads from disk.&lt;/p&gt;
&lt;p&gt;For example, the TPCCRunner example described earlier took about an hour before it reached the point where it achieved its maximum transaction throughput. TPCCRunner displays this, but in general watch for the queries per second leveling out over several minutes.&lt;/p&gt;
&lt;p&gt;When starting/stopping mysqld for testing, &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_dump_at_shutdown" target="_blank" rel="noopener noreferrer"&gt;innodb_buffer_pool_dump_at_shutdown&lt;/a&gt;=1 / innodb_buffer_pool_load_at_startup=1 / innodb_buffer_pool_dump_pct=100 will help restore the InnoDB buffer pool significantly quicker.&lt;/p&gt;
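&lt;p&gt;As a my.cnf sketch (option names as documented for MySQL 5.7):&lt;/p&gt;

```ini
# Persist the buffer pool page list at shutdown and reload it at startup,
# covering 100% of the pool, so a restarted test server warms up faster.
[mysqld]
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup  = 1
innodb_buffer_pool_dump_pct         = 100
```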
&lt;h4 id="6-know-what-workload-is-being-measured"&gt;6. Know what workload is being measured&lt;/h4&gt;
&lt;p&gt;A batch job may not have the same throughput requirements. It may also impact the concurrent workload that you are perf recording by creating a longer history list, InnoDB buffer pool pressure, etc.&lt;/p&gt;
&lt;p&gt;The application that generates the workload should be on a different server, different VM or in some way constrained in CPU to avoid resource contention with mysqld. Check the client side to ensure that it isn’t overloaded (CPU, network) as this could be indirectly constraining the server side workload.&lt;/p&gt;
&lt;h2 id="measuring"&gt;Measuring&lt;/h2&gt;
&lt;p&gt;With a hot workload running let’s start some measurement.&lt;/p&gt;
&lt;p&gt;Perf uses hardware (the PMU) to assist its recording work, but there are limits to hardware support, so there’s a point where it will affect your workload – start slow. Perf works by looking at a frequency distribution of where the mysqld process is spending its time. To examine a function that is taking 0.1% of the time means that 1000 samples will likely show it once; as such, a few thousand samples is sufficient. The number of samples is the product of &lt;a href="http://man7.org/linux/man-pages/man1/perf-record.1.html" target="_blank" rel="noopener noreferrer"&gt;perf record’s&lt;/a&gt; &lt;em&gt;-F / --freq&lt;/em&gt; setting – which may by default be several thousand per second – the recording duration, and the number of CPUs.&lt;/p&gt;
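&lt;p&gt;As a rough back-of-the-envelope check of the sample budget described above (the numbers are illustrative, not perf output):&lt;/p&gt;

```shell
# samples is roughly frequency x duration x CPUs
freq=99       # perf record -F 99: 99 samples per second per CPU (illustrative)
duration=30   # seconds of recording
cpus=160      # e.g. 2 sockets x 20 cores x 4 threads, as on the Power 9 box
echo $(( freq * duration * cpus ))   # prints 475200
```

&lt;p&gt;If that total lands far above a few thousand, you can lower the frequency or shorten the recording without losing the functions you care about.&lt;/p&gt;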
&lt;p&gt;If your SQL queries all run in much less than a second and occur frequently, then a high-frequency recording over a short duration is sufficient. If some query occurs less often, with a high CPU usage spike, the helper program &lt;a href="https://github.com/Netflix/flamescope" target="_blank" rel="noopener noreferrer"&gt;FlameScope&lt;/a&gt; can narrow a perf recording down to a usable sample interval.&lt;/p&gt;
&lt;p&gt;Analysis involves looking through a number of sets of data. Below I show a pattern of using &lt;em&gt;name&lt;/em&gt; as a shell variable, and a large one-line command to conduct a number of recordings in sequence. In my case, I cycled through &lt;em&gt;RC&lt;/em&gt; (read-committed) vs &lt;em&gt;RR&lt;/em&gt; (repeatable read), different compile options (&lt;em&gt;-O0&lt;/em&gt;), kernel versions, final stages of &lt;em&gt;warmup&lt;/em&gt; (compared to the test run), and even local changes to mysqld (thread_local_ut_rnd_ulint_counter). Keeping track of these alongside the corresponding test run output helps to correlate results more easily.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;name=5.7.28-thread_local_ut_rnd_ulint_counterO0-RC_warmup2 ;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pid=$(pidof mysqld);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perf record -F 10 -o mysql-${name}.perf -p $pid  -- sleep 20;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perf record -g -F 10 -o mysql-${name}.g.perf -p $pid  -- sleep 5;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perf stat -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations -p $pid sleep 20
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2&gt;&amp;1 | tee perf-stats-${name}.txt&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;With the above command, the recording is constrained to mysqld (&lt;em&gt;-p $pid&lt;/em&gt;), at &lt;em&gt;-F 10&lt;/em&gt; samples per second for (&lt;em&gt;sleep&lt;/em&gt;) &lt;em&gt;20&lt;/em&gt; seconds. A longer recording without the stack trace (&lt;em&gt;-g&lt;/em&gt;) is taken as a reference point to see if the shorter recording with the &lt;em&gt;-g&lt;/em&gt; stack trace is a fair sample. 10 Hz × 20 seconds may not seem like many samples; however, this occurred on each of the 160 threads. A recording with &lt;em&gt;-g&lt;/em&gt; is needed because a perf profile that shows all the time in kernel or pthread mutex (lock) code doesn’t mean much without knowing which lock it is and where it was accessed from.&lt;/p&gt;
&lt;p&gt;Perf record with the &lt;em&gt;-g&lt;/em&gt; call graph (also known as a stack chain or backtrace) adds to the size of the recording and the overhead of measurement. To ensure that there isn’t too much perf data (resulting in workload stalls), settle on the right frequency and duration before enabling &lt;em&gt;-g&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Perf stats were measured to identify CPU cache efficiency, instructions-per-cycle efficiency, instruction throughput (watch out for frequency scaling), faults (mapping real memory to virtual addresses; these should be low after warmup), and migrations between NUMA nodes.&lt;/p&gt;
&lt;p&gt;During measurement, look at &lt;em&gt;htop&lt;/em&gt;/&lt;em&gt;top&lt;/em&gt; to ensure that the CPUs are indeed loaded. Also check that the client side isn’t flooded with connection errors that could affect the validity of the recorded results.&lt;/p&gt;
&lt;h2 id="analysis"&gt;Analysis&lt;/h2&gt;
&lt;h3 id="viewing-a-perf-recording"&gt;Viewing a perf recording&lt;/h3&gt;
&lt;p&gt;&lt;a href="http://man7.org/linux/man-pages/man1/perf-report.1.html" target="_blank" rel="noopener noreferrer"&gt;perf report&lt;/a&gt; is used to textually view a perf recording. It is during the report stage that the debug info is read, since the linux kernel image resolves symbols. Run the report under &lt;code&gt;nice -n 19 perf report&lt;/code&gt; to ensure it has the lowest CPU priority if you are at all concerned about production impacts. It’s quite possible to do this on a different server provided the same kernel and MySQL packages are installed. &lt;code&gt;perf report --input mysql-5.7.28-event_run2_warmup_run1.perf --stdio&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Total Lost Samples: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Samples: 91K of event 'cycles:ppp'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Event count (approx.): 1261395960159641
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Overhead Command Shared Object Symbol
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# ........ ....... ................... ...............................................................................
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 5.84% mysqld mysqld [.] rec_get_offsets_func
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 3.62% mysqld mysqld [.] MYSQLparse
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.70% mysqld mysqld [.] page_cur_search_with_match
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.70% mysqld mysqld [.] buf_page_get_gen
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.22% mysqld mysqld [.] cmp_dtuple_rec_with_match_low
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.93% mysqld mysqld [.] buf_page_hash_get_low
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.49% mysqld mysqld [.] btr_cur_search_to_nth_level
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.35% mysqld [kernel.kallsyms] [k] do_syscall_64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.14% mysqld mysqld [.] row_search_mvcc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.93% mysqld mysqld [.] alloc_root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.92% mysqld mysqld [.] lex_one_token
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.67% mysqld libc-2.27.so [.] malloc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.64% mysqld libc-2.27.so [.] _int_malloc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.61% mysqld libpthread-2.27.so [.] __pthread_getspecific
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.59% mysqld mysqld [.] pfs_rw_lock_s_lock_func
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.59% mysqld mysqld [.] dispatch_command
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.50% mysqld mysqld [.] check_stack_overrun
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.50% mysqld [tg3] [k] tg3_poll_work&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This shows the percentage of CPU time, grouped by function name, based on where the CPU instruction pointer was at each sample. To find out why malloc or the kernel’s do_syscall_64 appears so often, the stack recording is needed.&lt;/p&gt;
&lt;h3 id="viewing-a-perf-recording-with-a-stack"&gt;Viewing a perf recording with a stack&lt;/h3&gt;
&lt;p&gt;When &lt;em&gt;perf record&lt;/em&gt; used &lt;em&gt;-g&lt;/em&gt;, then &lt;em&gt;-g&lt;/em&gt; can also be used in perf report to show the breakdown. By default it groups each function together with the functions it calls, as below.
&lt;code&gt;perf report -i mysql-5.7.28-event_run2_warmup_run1.g.perf&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Samples: 85K of event 'cycles:ppp', Event count (approx.): 261413311777846
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Children Self Command Shared Object Symbol
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 80.08% 0.00% mysqld libpthread-2.27.so [.] start_thread
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 80.08% 0.00% mysqld mysqld [.] pfs_spawn_thread
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 80.05% 0.07% mysqld mysqld [.] handle_connection
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 79.75% 0.14% mysqld mysqld [.] do_command
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 77.98% 0.70% mysqld mysqld [.] dispatch_command
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 75.32% 0.18% mysqld mysqld [.] mysql_parse
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 62.63% 0.38% mysqld mysqld [.] mysql_execute_command
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 58.65% 0.13% mysqld mysqld [.] execute_sqlcom_select
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 55.63% 0.05% mysqld mysqld [.] handle_query
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 25.31% 0.41% mysqld mysqld [.] st_select_lex::optimize
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 24.67% 0.12% mysqld mysqld [.] JOIN::exec
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 24.45% 0.59% mysqld mysqld [.] JOIN::optimize
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 22.62% 0.29% mysqld mysqld [.] sub_select
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+ 20.15% 1.56% mysqld mysqld [.] btr_cur_search_to_nth_level&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In MySQL, as expected, the most significant CPU load is in the threads. Most of the time this is a user connection, under the &lt;em&gt;handle_connection&lt;/em&gt; function, which parses and executes the SQL. In different situations you might see InnoDB background threads or replication threads: understanding which thread is causing the load is important at the top level. Then, to continue analysis, use the perf report &lt;em&gt;--no-children&lt;/em&gt; option. This shows approximately the same as the recording without &lt;em&gt;-g&lt;/em&gt;; however, it provides the ability to hit Enter on a function to show all the call stacks that reach that particular function.
&lt;code&gt;perf report -g --no-children --input mysql-5.7.28-event_run2_warmup_run1.g.perf&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Overhead Command Shared Object Symbol ◆
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;- 6.24% mysqld mysqld [.] rec_get_offsets_func ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; start_thread ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pfs_spawn_thread ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; handle_connection ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; do_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; dispatch_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql_parse ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql_execute_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; execute_sqlcom_select ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - handle_query ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 4.57% JOIN::exec ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - sub_select ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 3.77% evaluate_join_record ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 0.60% join_read_always_key ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 1.67% st_select_lex::optimize&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This shows a common call stack into &lt;em&gt;handle_query&lt;/em&gt;, where &lt;em&gt;JOIN::exec&lt;/em&gt; and &lt;em&gt;st_select_lex::optimize&lt;/em&gt; are the diverging points. If &lt;em&gt;evaluate_join_record&lt;/em&gt; and the other sub-functions were expanded, the bottom level of the call graph would show &lt;em&gt;rec_get_offsets_func&lt;/em&gt;.&lt;/p&gt;
&lt;h3 id="disassembly-annotation"&gt;Disassembly (annotation)&lt;/h3&gt;
&lt;p&gt;In the ncurses interface, selecting ‘a’ (annotate) on a particular function calls out to the &lt;a href="https://linux.die.net/man/1/objdump" target="_blank" rel="noopener noreferrer"&gt;objdump&lt;/a&gt; (binutils) disassembler to show where in the function the highest frequency occurred, mapped to the commented C++ code above it.&lt;/p&gt;
&lt;p&gt;Because compilers have a significant understanding of the architecture, and the C/C++ language gives them significant freedom in generating code, it’s sometimes quite difficult to map assembly back to the C/C++ source. In complex operations, C++ variables don’t have an easy translation to CPU registers. Inlined functions are also particularly hard, as each inlining can be further optimized to unique assembly depending on its location. To understand the assembly, I recommend focusing on the loads, stores, arithmetic/comparisons with constants, and branches to see which register maps to which part of the MySQL server code in the context of the surrounding code.&lt;/p&gt;
&lt;p&gt;E.g. annotation of &lt;em&gt;rec_get_offsets_func&lt;/em&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ dict_table_is_comp():
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ #if DICT_TF_COMPACT != 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ #error "DICT_TF_COMPACT must be 1"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ #endif
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ return(table-&gt;flags &amp; DICT_TF_COMPACT);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.44 │ mov 0x20(%rsi),%rsi
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 39.30 │ movzbl -0x3(%rdi),%eax
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ _Z20rec_get_offsets_funcPKhPK12dict_index_tPmmPP16mem_block_info_t():
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_ad(rec);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_ad(index);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_ad(heap);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ if (dict_table_is_comp(index-&gt;table)) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.44 │ testb $0x1,0x34(%rsi)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.15 │ ↓ je e611d8 &lt;rec_get_offsets_func(unsigned char const*, dict_index_t const*, 128
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ switch (UNIV_EXPECT(rec_get_status(rec),&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Here we see that &lt;em&gt;dict_table_is_comp&lt;/em&gt; is an inline function expanded at the top of &lt;em&gt;rec_get_offsets_func&lt;/em&gt;, the &lt;em&gt;movzbl .. %eax&lt;/em&gt;. The dominant CPU use in the function, however, isn’t part of this. The &lt;em&gt;testb $0x1 (DICT_TF_COMPACT) … %rsi&lt;/em&gt; is the testing of the flag, with the &lt;em&gt;je&lt;/em&gt; afterwards branching on the result.&lt;/p&gt;
&lt;h2 id="example---mutex-contention"&gt;Example - mutex contention&lt;/h2&gt;
&lt;p&gt;Compared to the performance profile on x86 above under ‘Viewing a perf recording’, this is what the performance profile looked like on POWER: &lt;code&gt;perf report --input mysql-5.7.28-read_mostly_EVENT_RC-run2.perf --stdio&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Total Lost Samples: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Samples: 414K of event 'cycles:ppp'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Event count (approx.): 3884039315643070
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Overhead Command Shared Object Symbol
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# ........ ....... ................... ...............................................................................
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 13.05% mysqld mysqld [.] MVCC::view_open
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 10.99% mysqld mysqld [.] PolicyMutex&lt;TTASEventMutex&lt;GenericPolicy&gt; &gt;::enter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 4.11% mysqld mysqld [.] rec_get_offsets_func
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 3.78% mysqld mysqld [.] buf_page_get_gen
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.34% mysqld mysqld [.] MYSQLparse
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.27% mysqld mysqld [.] cmp_dtuple_rec_with_match_low
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.15% mysqld mysqld [.] btr_cur_search_to_nth_level
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2.05% mysqld mysqld [.] page_cur_search_with_match
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.99% mysqld mysqld [.] ut_delay
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.83% mysqld mysqld [.] mtr_t::release_block_at_savepoint
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1.35% mysqld mysqld [.] rw_lock_s_lock_func
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.96% mysqld mysqld [.] buf_page_hash_get_low
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.88% mysqld mysqld [.] row_search_mvcc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.84% mysqld mysqld [.] lex_one_token
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.80% mysqld mysqld [.] pfs_rw_lock_s_unlock_func
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.80% mysqld mysqld [.] mtr_t::commit
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.62% mysqld mysqld [.] pfs_rw_lock_s_lock_func
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.59% mysqld [kernel.kallsyms] [k] power_pmu_enable
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.59% mysqld [kernel.kallsyms] [k] _raw_spin_lock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.55% mysqld libpthread-2.28.so [.] __pthread_mutex_lock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.54% mysqld mysqld [.] alloc_root
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.43% mysqld mysqld [.] PolicyMutex&lt;TTASEventMutex&lt;GenericPolicy&gt; &gt;::exit&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What stands out clearly are the top two entries, which were insignificant on x86. Looking closer at &lt;em&gt;MVCC::view_open&lt;/em&gt;:
&lt;code&gt;perf report -g --no-children --input mysql-5.7.28-read_mostly_EVENT_RC-run2.g.perf&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;- 13.47% mysqld mysqld [.] MVCC::view_open ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; __clone ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0x8b10 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; pfs_spawn_thread ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; handle_connection ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; do_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; dispatch_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql_parse ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; mysql_execute_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; execute_sqlcom_select ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - handle_query ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 11.22% JOIN::exec ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - sub_select ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 10.99% join_read_always_key ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; handler::ha_index_read_map ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ha_innobase::index_read ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - row_search_mvcc ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 10.99% trx_assign_read_view ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MVCC::view_open&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Annotation of &lt;em&gt;MVCC::view_open&lt;/em&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ _ZNK14TTASEventMutexI13GenericPolicyE7is_freeEjjRj(): ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ bool is_free( ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.02 │a08: ↓ bne cr4,10db7a30 &lt;MVCC::view_open(ReadView*&amp;, a90 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ↓ b 10db7ad0 &lt;MVCC::view_open(ReadView*&amp;, trx_t*)+0xb30&gt; ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_rnd_gen_ulint(): ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_rnd_ulint_counter = UT_RND1 * ut_rnd_ulint_counter + UT_RND2; ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │a10: addis r7,r2,2 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.02 │ addi r7,r7,26904 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 81.15 │ ld r8,0(r7) ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.02 │ mulld r8,r27,r8 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.02 │ addis r8,r8,1828 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.02 │ addi r8,r8,-14435 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_rnd_gen_next_ulint(): ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ rnd = UT_RND2 * rnd + UT_SUM_RND3; ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.02 │ mulld r9,r8,r19 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_rnd_gen_ulint(): ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ ut_rnd_ulint_counter = UT_RND1 * ut_rnd_ulint_counter + UT_RND2; ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.04 │ std r8,0(r7)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Due to inlining, within &lt;a href="https://github.com/mysql/mysql-server/blob/mysql-5.7.28/storage/innobase/read/read0read.cc#L554..L611" target="_blank" rel="noopener noreferrer"&gt;MVCC::view_open&lt;/a&gt; one of the mutexes got expanded out, and the random number is used in the spinlock wait for the lock. &lt;a href="https://github.com/mysql/mysql-server/blob/mysql-5.7.28/storage/innobase/include/ib0mutex.h#L707..L717" target="_blank" rel="noopener noreferrer"&gt;PolicyMutex&lt;TTASEventMutex&lt;GenericPolicy&gt; &gt;::enter&lt;/a&gt; expanded to exactly the same code.&lt;/p&gt;
&lt;p&gt;We see here that the load (&lt;em&gt;ld&lt;/em&gt;) into &lt;em&gt;r8&lt;/em&gt; is the slowest part of this. In mysql-5.7.28, &lt;a href="https://github.com/mysql/mysql-server/blob/mysql-5.7.28/storage/innobase/ut/ut0rnd.cc#L48" target="_blank" rel="noopener noreferrer"&gt;ut_rnd_ulint_counter&lt;/a&gt; is an ordinary global variable, meaning it’s shared between threads. The simple line of code &lt;em&gt;ut_rnd_ulint_counter = UT_RND1 * ut_rnd_ulint_counter + UT_RND2&lt;/em&gt; stores the result back in the same variable. To understand why this didn’t scale, we need to understand cache lines.&lt;/p&gt;
&lt;p&gt;Note: &lt;em&gt;MVCC::view_open&lt;/em&gt; did show up in the x86 profile, at 0.23%, with the lock release as the highest CPU point. On x86, &lt;em&gt;PolicyMutex&lt;TTASEventMutex&lt;GenericPolicy&gt; &gt;::enter&lt;/em&gt; was at 0.32%.&lt;/p&gt;
&lt;h3 id="cache-lines"&gt;Cache Lines&lt;/h3&gt;
&lt;p&gt;All modern CPUs that are likely to run MySQL have some form of &lt;a href="https://en.wikipedia.org/wiki/Cache_hierarchy" target="_blank" rel="noopener noreferrer"&gt;cache hierarchy&lt;/a&gt;. The principle is that a frequently accessed memory location, like &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt;, can be copied into cache, and at some point the CPU will push it back to memory. To keep behavior consistent, cache lines are the unit at which memory is assigned to a particular CPU cache. Cache lines can be read-only or exclusive, and a protocol between CPU cores ensures that exclusive access belongs to one CPU only. When one CPU modifies a memory location it gains an exclusive cache line, and the cached values in other CPUs’ caches are flushed. The cache level at which this flushing occurs, and the extent to which caches are shared between CPUs, is quite architecture-specific. However, citing rough &lt;a href="http://brenocon.com/dean_perf.html" target="_blank" rel="noopener noreferrer"&gt;metrics&lt;/a&gt;, cache access is orders of magnitude faster than RAM.&lt;/p&gt;
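&lt;p&gt;The cache line is the unit this coherence protocol operates on, and its size is easy to check on Linux (a small sketch; the sysfs path may be absent in some VMs or containers):&lt;/p&gt;

```shell
# Cache line size in bytes: 64 on x86, commonly 128 on POWER/arm64.
getconf LEVEL1_DCACHE_LINESIZE
# Equivalent sysfs source, when available:
cat /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size 2>/dev/null || true
```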
&lt;p&gt;In the perf recording above, storing &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt; back invalidates the cached copies on the other CPUs, and this is why the load instruction is slow. MySQL did fix this in &lt;a href="https://github.com/mysql/mysql-server/commit/dedc8b3d567fbb92ce912f1559fe6a08b2857045" target="_blank" rel="noopener noreferrer"&gt;5.7.14&lt;/a&gt; but reverted the fix in &lt;a href="https://github.com/mysql/mysql-server/commit/dedc8b3d567fbb92ce912f1559fe6a08b2857045" target="_blank" rel="noopener noreferrer"&gt;5.7.20&lt;/a&gt; (assuming some performance degradation in thread-local storage). In MySQL &lt;a href="https://github.com/mysql/mysql-server/commit/ea4913b403db72f26565520f68686b385872e7d2#diff-5f582f65ca6be1efafb5e278e4bffc44R35" target="_blank" rel="noopener noreferrer"&gt;8.0+&lt;/a&gt;, &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt; is a C++11 &lt;a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2659.htm" target="_blank" rel="noopener noreferrer"&gt;thread_local&lt;/a&gt; variable, which has a faster implementation. &lt;a href="https://github.com/MariaDB/server/commit/ce04790" target="_blank" rel="noopener noreferrer"&gt;MariaDB-10.3.5&lt;/a&gt; avoided this by removing the random delay in InnoDB mutexes. Thread-local variables reduce contention because each thread has its own independent memory location. Because this is only a random number seed, there’s no need to synchronize the results between threads.&lt;/p&gt;
&lt;h3 id="cache-collisions"&gt;Cache collisions&lt;/h3&gt;
&lt;p&gt;The impact of &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt;, however, isn’t limited to the variable itself. Cache lines reserve blocks of memory according to the cache line size of the architecture (x86 - 64 bytes, arm64 and POWER - 128 bytes, s390 - 256 bytes). High in the CPU profile is the &lt;em&gt;btr_cur_search_to_nth_level&lt;/em&gt; function. This is part of InnoDB’s index scanning, and it would be easy to discount its high CPU usage. Looking at the disassembly, however, shows:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.01 │ ld r8,3464(r31)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ cursor-&gt;low_match = low_match;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.05 │ std r10,96(r25)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ cursor-&gt;up_bytes = up_bytes;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.00 │ ld r10,3456(r31)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ if (btr_search_enabled &amp;&amp; !index-&gt;disable_ahi) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 24.08 │ lbz r9,0(r9)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ cursor-&gt;low_bytes = low_bytes;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.01 │ std r7,104(r25)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ cursor-&gt;up_match = up_match;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.00 │ std r8,80(r25)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ cursor-&gt;up_bytes = up_bytes;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.01 │ std r10,88(r25)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; │ if (UNIV_LIKELY(btr_search_enabled) &amp;&amp; !index-&gt;disable_ahi) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.00 │ cmpwi cr7,r9,0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.00 │ ld r9,48(r29)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 0.01 │ ↓ beq cr7,10eb88a8 &lt;btr_cur_search_to_nth_level(dict_index_t*, 2348&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;em&gt;lbz&lt;/em&gt; is a load-byte instruction referring to &lt;em&gt;btr_search_enabled&lt;/em&gt;, the MySQL server variable associated with the SQL global &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_adaptive_hash_index" target="_blank" rel="noopener noreferrer"&gt;innodb_adaptive_hash_index&lt;/a&gt;. As a global system variable, this isn’t changed frequently, probably only once at startup. As such it should be able to rest comfortably in the cache of all CPUs in a read-only cache line.&lt;/p&gt;
&lt;p&gt;To find out why it doesn’t, the address of the variable in the mysqld executable is examined:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ readelf -a bin/mysqld | grep btr_search_enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 8522: 0000000011aa1b40 1 OBJECT GLOBAL DEFAULT 24 btr_search_enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 17719: 0000000011aa1b40 1 OBJECT GLOBAL DEFAULT 24 btr_search_enabled&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Taking the last two characters off the hexadecimal address &lt;em&gt;0000000011aa1b40&lt;/em&gt;, the other variables in the same 256-byte (0x100) address range can be examined.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ readelf -a bin/mysqld | grep 0000000011aa1b
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1312: 0000000011aa1be0 296 OBJECT GLOBAL DEFAULT 24 fts_default_stopword
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 8522: 0000000011aa1b40 1 OBJECT GLOBAL DEFAULT 24 btr_search_enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 9580: 0000000011aa1b98 16 OBJECT GLOBAL DEFAULT 24 fil_addr_null
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 11434: 0000000011aa1b60 8 OBJECT GLOBAL DEFAULT 24 zip_failure_threshold_pct
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 12665: 0000000011aa1b70 40 OBJECT GLOBAL DEFAULT 24 dot_ext
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 13042: 0000000011aa1b30 8 OBJECT GLOBAL DEFAULT 24 ut_rnd_ulint_counter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 13810: 0000000011aa1b48 8 OBJECT GLOBAL DEFAULT 24 srv_checksum_algorithm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 18831: 0000000011aa1bb0 48 OBJECT GLOBAL DEFAULT 24 fts_common_tables
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 27713: 0000000011aa1b38 8 OBJECT GLOBAL DEFAULT 24 btr_ahi_parts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 33183: 0000000011aa1b50 8 OBJECT GLOBAL DEFAULT 24 zip_pad_max
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 2386: 0000000011aa1b58 7 OBJECT LOCAL DEFAULT 24 _ZL9dict_ibfk
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 5961: 0000000011aa1b68 8 OBJECT LOCAL DEFAULT 24 _ZL8eval_rnd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 10509: 0000000011aa1be0 296 OBJECT GLOBAL DEFAULT 24 fts_default_stopword
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 17719: 0000000011aa1b40 1 OBJECT GLOBAL DEFAULT 24 btr_search_enabled
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 18777: 0000000011aa1b98 16 OBJECT GLOBAL DEFAULT 24 fil_addr_null
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 20631: 0000000011aa1b60 8 OBJECT GLOBAL DEFAULT 24 zip_failure_threshold_pct
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 21862: 0000000011aa1b70 40 OBJECT GLOBAL DEFAULT 24 dot_ext
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 22239: 0000000011aa1b30 8 OBJECT GLOBAL DEFAULT 24 ut_rnd_ulint_counter
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 23007: 0000000011aa1b48 8 OBJECT GLOBAL DEFAULT 24 srv_checksum_algorithm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 28028: 0000000011aa1bb0 48 OBJECT GLOBAL DEFAULT 24 fts_common_tables
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 36910: 0000000011aa1b38 8 OBJECT GLOBAL DEFAULT 24 btr_ahi_parts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 42380: 0000000011aa1b50 8 OBJECT GLOBAL DEFAULT 24 zip_pad_max&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt; is stored 16 bytes away from &lt;em&gt;btr_search_enabled&lt;/em&gt;. Because of this, every invalidation of the &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt; cache line also invalidates &lt;em&gt;btr_search_enabled&lt;/em&gt; on POWER, along with every other variable in the &lt;em&gt;0000000011aa1b00 to 0000000011aa1b40&lt;/em&gt; address range on x86_64 (or to &lt;em&gt;0000000011aa1b80&lt;/em&gt; for POWER and arm64, or to &lt;em&gt;0000000011aa1c00&lt;/em&gt; for s390). There are no rules governing the layout of these variables, so it was only luck that x86_64 was not affected here.&lt;/p&gt;
&lt;p&gt;While the contended management of &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt; remains an unsolved problem on MySQL-5.7, placing all the global variables into the same memory block, away from potentially frequently changed variables, is a way to prevent this unintended contention. Global variables are ideal candidates for this treatment as they are changed infrequently and are hot in many code paths. Pulling all the global variables into the same location also maximizes cache utilization, since they occupy fewer cache lines and those lines can remain in a read-only state.&lt;/p&gt;
&lt;p&gt;To achieve this co-location, MySQL borrows a mechanism from the Linux kernel: &lt;a href="https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-section-variable-attribute" target="_blank" rel="noopener noreferrer"&gt;section attributes on variables&lt;/a&gt; combined with a linker script to bind their location. This is described in MySQL &lt;a href="https://bugs.mysql.com/bug.php?id=97777" target="_blank" rel="noopener noreferrer"&gt;bug 97777&lt;/a&gt; and the MariaDB task &lt;a href="https://jira.mariadb.org/browse/MDEV-21145" target="_blank" rel="noopener noreferrer"&gt;MDEV-21145&lt;/a&gt;. Segmenting the system global variables using this mechanism resulted in a 5.29% increase in the transactions per minute of the TPCCRunner benchmark (using MUTEXTYPE=sys).&lt;/p&gt;
&lt;h2 id="mutex-implementations"&gt;Mutex Implementations&lt;/h2&gt;
&lt;p&gt;Having discovered what I thought was a smoking gun, with the &lt;em&gt;ut_rnd_ulint_counter&lt;/em&gt; contention as the source of the benchmark’s throughput problems, I back-ported the &lt;em&gt;thread_local&lt;/em&gt; implementation of MySQL-8.0 to MySQL-5.7.28. Disappointingly, the throughput was approximately the same. From a perf profile perspective, the CPU usage was no longer in the inlined &lt;em&gt;ut_rnd_gen_ulint&lt;/em&gt; function; instead it was in the &lt;a href="https://github.com/mysql/mysql-server/blob/mysql-5.7.28/storage/innobase/sync/sync0arr.cc#L451..L488" target="_blank" rel="noopener noreferrer"&gt;sync_array_wait_event&lt;/a&gt; and &lt;a href="https://github.com/mysql/mysql-server/blob/mysql-5.7.28/storage/innobase/sync/sync0arr.cc#L333..L400" target="_blank" rel="noopener noreferrer"&gt;sync_array_reserve_cell&lt;/a&gt; functions.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Samples: 394K of event 'cycles:ppp', Event count (approx.): 2348024370370315
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  Overhead Command Shared Object Symbol ◆
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;- 45.48% mysqld [kernel.vmlinux] [k] _raw_spin_lock ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; __clone ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 0x8b10 ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 45.48% pfs_spawn_thread ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; handle_connection ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - do_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 45.44% dispatch_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 45.38% mysql_parse ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 45.38% mysql_execute_command ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 44.75% execute_sqlcom_select ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - handle_query ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 38.28% JOIN::exec ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 25.34% sub_select ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 24.85% join_read_always_key ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; handler::ha_index_read_map ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - ha_innobase::index_read ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 24.85% row_search_mvcc ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 24.85% trx_assign_read_view ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - MVCC::view_open ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 12.00% sync_array_wait_event ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 5.44% os_event::wait_low ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; - 2.21% os_event::wait_low ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; __pthread_mutex_unlock ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; system_call ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sys_futex ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + do_futex ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 2.04% __pthread_mutex_lock ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 0.94% pthread_cond_wait ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 2.34% __pthread_mutex_lock ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 2.28% sync_array_free_cell ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 1.60% sync_array_wait_event ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 10.69% sync_array_reserve_cell ▒
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; + 1.32% os_event_set&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;These functions are largely wrappers around a pthread locking implementation. The version history shows they were imported from MySQL-5.0 with only minor modifications in 2013. Contrast this with the pthread implementation itself, which receives significant ongoing maintenance from the glibc community, including the major CPU architecture manufacturers.&lt;/p&gt;
&lt;p&gt;Thankfully, MySQL has a compile option &lt;em&gt;-DMUTEXTYPE=sys&lt;/em&gt; that results in &lt;a href="https://github.com/mysql/mysql-server/blob/mysql-5.7.28/storage/innobase/include/ib0mutex.h#L110..L123" target="_blank" rel="noopener noreferrer"&gt;pthreads being used directly&lt;/a&gt;. This increased x86 performance marginally, but much more significantly on POWER (understandable, since with POWER’s 128-byte cache lines multiple sync_array elements share the same line, compared to the 64-byte lines of x86_64). I’ll soon get to benchmarking these changes in more detail and file bug reports to get this default changed, at least in distro packages.&lt;/p&gt;
&lt;h2 id="encode---another-example"&gt;Encode - Another example&lt;/h2&gt;
&lt;p&gt;Even while carrying out this investigation, a &lt;a href="https://jira.mariadb.org/browse/MDEV-21285" target="_blank" rel="noopener noreferrer"&gt;MariaDB zulip chat&lt;/a&gt; exposed a benchmark of &lt;a href="https://mariadb.com/kb/en/library/encode/" target="_blank" rel="noopener noreferrer"&gt;ENCODE&lt;/a&gt; (notably deprecated in MySQL-5.7) having scaling problems. Using exactly the techniques described here, it was quick to generate and extract a perf profile (&lt;a href="https://jira.mariadb.org/browse/MDEV-21285" target="_blank" rel="noopener noreferrer"&gt;MDEV-21285&lt;/a&gt;) and stack trace showing that every initial guess at the source of the problem – including mine – was incorrect. With the perf profile, however, the nature of the problem is quite clear – unlike the solution. That requires more thought.&lt;/p&gt;
&lt;h2 id="reportshow-your-perf-recordings"&gt;Report/Show your perf recordings&lt;/h2&gt;
&lt;p&gt;Alongside its low overhead during recording, the useful aspect of perf from a DBA perspective is that perf stack traces show only the MySQL code being executed, and the frequency of its execution. No database data, SQL queries, or table names are exposed in the output. However, &lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html#overview" target="_blank" rel="noopener noreferrer"&gt;Perf Events and tool security (item 4)&lt;/a&gt; indicates that registers can be captured in a perf recording, so be careful about sharing raw perf data.&lt;/p&gt;
&lt;p&gt;Once the raw perf data is processed by &lt;em&gt;perf report&lt;/em&gt;, with correct debug info and kernel, there are no addresses, only mysqld and kernel function names, in its output. The most that is exposed by sharing a perf report is the frequency of use of MySQL code that can be obtained externally anyway. This should be enough to convince even strict and competent managers and security people to allow sharing of perf recordings.&lt;/p&gt;
&lt;p&gt;With some realistic expectations (code can’t execute in zero time, and the whole database can’t be in CPU cache), you should now be able to show the parts of MySQL that are limiting your queries.&lt;/p&gt;
&lt;h3 id="resulting-bug-reports"&gt;Resulting bug reports&lt;/h3&gt;
&lt;p&gt;MySQL:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bugs.mysql.com/bug.php?id=97777" target="_blank" rel="noopener noreferrer"&gt;bug 97777&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bugs.mysql.com/bug.php?id=97822" target="_blank" rel="noopener noreferrer"&gt;bug 97822&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;MariaDB:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://jira.mariadb.org/browse/MDEV-21145" target="_blank" rel="noopener noreferrer"&gt;MDEV-21145 &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jira.mariadb.org/browse/MDEV-21452" target="_blank" rel="noopener noreferrer"&gt;MDEV-21452&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jira.mariadb.org/browse/MDEV-21212" target="_blank" rel="noopener noreferrer"&gt;MDEV-21212&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;–&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: The postings on this site are the authors own and don’t necessarily represent IBM’s positions, strategies or opinions.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@ripato?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Ricardo Gomez Angel&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/perforations?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Daniel Black</author>
      <category>cacheline</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>perf</category>
      <category>performance</category>
      <category>POWER</category>
      <media:thumbnail url="https://percona.community/blog/2020/01/ricardo-gomez-angel-87vUJY3ntyI-unsplash_hu_78060d87e7ddad26.jpg"/>
      <media:content url="https://percona.community/blog/2020/01/ricardo-gomez-angel-87vUJY3ntyI-unsplash_hu_17197b9de458aa08.jpg" medium="image"/>
    </item>
    <item>
      <title>How To Contribute to PMM Documentation</title>
      <link>https://percona.community/blog/2020/01/28/how-to-contribute-to-pmm-documentation/</link>
      <guid>https://percona.community/blog/2020/01/28/how-to-contribute-to-pmm-documentation/</guid>
      <pubDate>Tue, 28 Jan 2020 16:43:38 UTC</pubDate>
      <description>We’d love to see more contributions towards the development and improvement of Percona Monitoring and Management (PMM), one of Percona’s most valued projects. Like all of Percona’s software, PMM is free and open-source. An area where we’d dearly love to see some community provided enhancement is in its documentation. In future blog posts, we’ll provide some insight on how to contribute to our software but… the beauty of documentation is that it’s straightforward to maintain, and you don’t even have to be a programmer to be able to provide valuable corrections and enhancements. So it’s a great place to start. In this post, we set out how you might be able to contribute to this to make PMM even better than it is already!</description>
      <content:encoded>&lt;p&gt;We’d love to see more contributions towards the development and improvement of &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management (PMM),&lt;/a&gt; one of Percona’s most valued projects. Like all of Percona’s software, PMM is free and open-source. An area where we’d dearly love to see some community provided enhancement is in its documentation. In future blog posts, we’ll provide some insight on how to contribute to our software but… the beauty of documentation is that it’s straightforward to maintain, and you don’t even have to be a programmer to be able to provide valuable corrections and enhancements. So it’s a great place to start. In this post, we set out how you might be able to contribute to this to make PMM even better than it is already!&lt;/p&gt;
&lt;h2 id="some-context"&gt;Some context&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/index.html" target="_blank" rel="noopener noreferrer"&gt;PMM documentation&lt;/a&gt; is available from the Percona website, and it is an essential part of PMM; all the tasks and functions of the developer need to be documented. There are a couple of things that might inspire you to contribute to enhancing the PMM documentation:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It’s something you can do without feeling you need stellar programming skills&lt;/li&gt;
&lt;li&gt;It is useful for a large number of users.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By the way, if you aren’t sure where to start, there are currently more than 50 PMM documentation improvement tasks listed in &lt;a href="https://perconadev.atlassian.net/browse/PMM-5333?jql=project%20%3D%20PMM%20AND%20resolution%20%3D%20Unresolved%20AND%20component%20%3D%20Documentation" target="_blank" rel="noopener noreferrer"&gt;JIRA,&lt;/a&gt; Percona’s issue tracking system. Once you have checked out a few of those and become familiar with the documentation structure and style, you’ll probably be able to find more issues to report… or think of your own improvements.&lt;/p&gt;
&lt;p&gt;Even enhancements that help only a few users are very welcome.&lt;/p&gt;
&lt;h2 id="a-simple-example"&gt;A simple example&lt;/h2&gt;
&lt;p&gt;This article provides a simple example that changes only a few lines of documentation, but these steps are all you need to contribute all manner of documentation improvements. Here, I focus just on the process and tools that are used to create the documentation. You’ll find more background information in the &lt;a href="https://www.percona.com/community/contributions/pmm" target="_blank" rel="noopener noreferrer"&gt;PMM Contributions Overview&lt;/a&gt;.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/01/PMM-Contribute-1_hu_1c1b623882062453.png 480w, https://percona.community/blog/2020/01/PMM-Contribute-1_hu_4df0ffc42a97dc10.png 768w, https://percona.community/blog/2020/01/PMM-Contribute-1_hu_2e97ee929d5662eb.png 1400w"
src="https://percona.community/blog/2020/01/PMM-Contribute-1.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="my-work-plan"&gt;My work plan…&lt;/h2&gt;
&lt;h3 id="or-a-summary"&gt;…or a summary&lt;/h3&gt;
&lt;p&gt;Having decided I was going to be a contributor, too, I created a simple outline of what I needed to do. Here it is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Find an existing task or create a new one. PMM is an excellent product, but it has known &lt;a href="https://perconadev.atlassian.net/issues/?jql=project+%3D+PMM+AND+component+%3D+Documentation" target="_blank" rel="noopener noreferrer"&gt;documentation issues&lt;/a&gt; that I can help with.&lt;/li&gt;
&lt;li&gt;Find the repository and install the PMM documentation on my computer so I can work out how to make changes.&lt;/li&gt;
&lt;li&gt;Make the changes and test them.&lt;/li&gt;
&lt;li&gt;Send the changes to the PMM repository.&lt;/li&gt;
&lt;li&gt;Go through a review and verification process so that my changes can be published.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In the process of exploring this, I’ve written and published a manual for you, which is available in the primary documentation repository at &lt;a href="https://github.com/percona/pmm-doc" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm-doc.&lt;/a&gt; This was my small contribution, and you are welcome to help improve that too!&lt;/p&gt;
&lt;p&gt;If you’re ready to jump in, though, let’s take a look step-by-step at what’s involved.&lt;/p&gt;
&lt;h3 id="1-find-an-existing-task-or-create-a-new-one"&gt;1. Find an existing task or create a new one&lt;/h3&gt;
&lt;p&gt;Percona has identified over &lt;a href="https://perconadev.atlassian.net/issues/?jql=project%20%3D%20PMM%20AND%20resolution%20%3D%20Unresolved%20AND%20component%20%3D%20Documentation" target="_blank" rel="noopener noreferrer"&gt;50 specific documentation needs for PMM&lt;/a&gt; as shown in &lt;a href="https://perconadev.atlassian.net/projects/PMM/issues/PMM-5075?filter=allopenissues" target="_blank" rel="noopener noreferrer"&gt;Percona’s JIRA repository of all PMM development tasks&lt;/a&gt;. Create an account and log in to JIRA; then you can choose an existing task, or create a new report, to start contributing to PMM.&lt;/p&gt;
&lt;p&gt;In fact, for the sake of this example, while I liked the look of quite a few of the existing tasks I wanted to take the first step quickly. So I identified an improvement for the main documentation page and created a new record in JIRA. Here it is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PMM-5012" target="_blank" rel="noopener noreferrer"&gt;https://perconadev.atlassian.net/browse/PMM-5012&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;It’s really important that you use JIRA as the starting point for any changes&lt;/strong&gt;. This is the only way for the PMM team to find out what your intentions are and to advise you of the best approach. Through JIRA, too, you can discuss the task before you start work. If you want to work on an existing report, then I recommend that you contact the author of the task through comments in JIRA.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/01/PMM-Contribute-2_hu_10d09c79448f3e8b.png 480w, https://percona.community/blog/2020/01/PMM-Contribute-2_hu_981f35364b7097dd.png 768w, https://percona.community/blog/2020/01/PMM-Contribute-2_hu_cac33e2989c14f49.png 1400w"
src="https://percona.community/blog/2020/01/PMM-Contribute-2.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="2-repository-and-installation"&gt;2. Repository and installation&lt;/h3&gt;
&lt;p&gt;All PMM documentation is written for the &lt;a href="https://www.sphinx-doc.org/" target="_blank" rel="noopener noreferrer"&gt;Sphinx documentation engine&lt;/a&gt;. We store the documentation as *.rst files inside the &lt;a href="https://github.com/percona/pmm-doc" target="_blank" rel="noopener noreferrer"&gt;PMM documentation repository&lt;/a&gt; on GitHub. Sphinx allows easy publishing of various output formats such as HTML, LaTeX (for PDF), ePub, Texinfo, etc. You’ll need a GitHub account. A simple overview:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The text is written in reStructuredText markup as .rst files. The syntax is similar to Markdown but has its own rules. All the rules are available on the &lt;a href="http://www.sphinx-doc.org/en/master/" target="_blank" rel="noopener noreferrer"&gt;official website&lt;/a&gt; or can be seen in use in the existing documentation.&lt;/li&gt;
&lt;li&gt;Source files are stored in the GitHub repository. Each version of PMM has its branch in the repository.&lt;/li&gt;
&lt;li&gt;The Sphinx engine builds the source files into HTML documentation. This works very quickly.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In fact, you don’t even need to install Sphinx to write or edit documentation; a standard text editor is enough.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/01/PMM-Contribute-3.png" alt=" " /&gt;&lt;/figure&gt; The PMM project team uses several separate repositories. See this &lt;a href="https://github.com/percona/pmm/tree/PMM-2.0" target="_blank" rel="noopener noreferrer"&gt;list of all PMM repositories in Github&lt;/a&gt;. One of them is the &lt;a href="https://github.com/percona/pmm-doc" target="_blank" rel="noopener noreferrer"&gt;PMM documentation repository&lt;/a&gt;. You’ll find a link to the documentation repository from the main PMM repository at &lt;a href="https://github.com/percona/pmm/tree/PMM-2.0" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm/tree/PMM-2.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;To begin, &lt;a href="https://help.github.com/en/github/getting-started-with-github/fork-a-repo" target="_blank" rel="noopener noreferrer"&gt;fork the PMM repository&lt;/a&gt; under your GitHub account. You can then edit this personal fork safely, without interfering with the main repository. Later on, Percona can pull your changes into its main repository.&lt;/p&gt;
&lt;h4 id="local-installation-of-the-documentation"&gt;Local installation of the documentation&lt;/h4&gt;
&lt;p&gt;Build the documentation locally on your computer. Here’s the process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Clone the fork repository to your environment.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.sphinx-doc.org/en/master/usage/installation.html" target="_blank" rel="noopener noreferrer"&gt;Install Sphinx-doc&lt;/a&gt; according to the instructions in the repository&lt;/li&gt;
&lt;li&gt;Build the documentation. Use the instruction from &lt;a href="https://github.com/percona/pmm-doc#install" target="_blank" rel="noopener noreferrer"&gt;pmm-doc repository&lt;/a&gt; (see p.3 in Install section)&lt;/li&gt;
&lt;li&gt;Check the result in your browser.  You may need the Apache webserver on your computer. For example, you can use a Docker image with Apache (&lt;a href="https://hub.docker.com/_/httpd" target="_blank" rel="noopener noreferrer"&gt;link&lt;/a&gt;). However, documentation may open in your browser without this.&lt;/li&gt;
&lt;li&gt;Edit some changes and rebuild.&lt;/li&gt;
&lt;li&gt;Check the changes in your browser.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;… and so on. It’s important not just to get the build working, but also to verify that your changes actually show up. If you’d like more instructions, please leave a message in the comments to this post or contact me &lt;a href="mailto:community-team@percona.com"&gt;by email&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="3-making-changes-and-testing-them"&gt;3. Making changes and testing them&lt;/h3&gt;
&lt;p&gt;Now you can make changes. Two important points:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;If you aren’t sure how to make changes correctly, look at how others have done it. The documentation history already contains plenty of examples of changes.&lt;/li&gt;
&lt;li&gt;It’s essential to make changes properly, otherwise your hard work will be wasted.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We have already selected or created a task in JIRA, and we will need its ID: the JIRA task ID is what ties the GitHub changes back to the JIRA task.&lt;/p&gt;
&lt;p&gt;Next, create a new git branch. Name it using the pattern JIRAID_USERNAME_SHORTTITLE. For example, my GitHub user is dbazhenov and the changes I’m making are related to the JIRA task PMM-5012, so here’s the command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git checkout -b PMM-5012_dbazhenov_introduction&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;So… you found the right page and made some changes. If you need to create a new page, there are examples in the existing documentation: create a new page file and include it in the appropriate toctree. If you need help with that, please just ask.&lt;/p&gt;
&lt;p&gt;Now save your changes to git and be sure to call the commit correctly. What do I mean by that? Well, be sure to use the task ID and describe in detail the change you’ve made. Here’s my example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git add .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git commit -m "PMM-5012 PostgreSQL and ProxySQL have been added to the home page"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now, build the documentation and check the result in your browser. If you get warnings during the build, they are most likely due to a different Sphinx version and are nothing to worry about.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/01/PMM-Contribute-7-warn.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;When you see the documentation, don’t worry that it’s pure HTML with no CSS or JavaScript. In due course, it will be built into the current percona.com website and will inherit its styling from there.&lt;/p&gt;
&lt;h3 id="4-saving-the-result-and-contributing"&gt;4. Saving the result and contributing&lt;/h3&gt;
&lt;p&gt;This is where you send your work to the PMM team. First, you have to send your branch to your own fork. That’s straightforward:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; git push origin PMM-5012_dbazhenov&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now, open your repository and check the results. In particular, make sure that your branch holds only the changed files; it’s possible that additional files were uploaded by mistake. To check, create &lt;a href="https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests" target="_blank" rel="noopener noreferrer"&gt;a pull request&lt;/a&gt; against the master branch of your own repository. This will give you a list of the changes that you’ve made.&lt;/p&gt;
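&lt;p&gt;The same check can be done from the command line before you push. A minimal sketch in a throwaway local repository, reusing the branch name from the example above (the file name is hypothetical):&lt;/p&gt;

```shell
# Throwaway repository standing in for your clone of the fork.
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "Demo User"
base=$(git symbolic-ref --short HEAD)   # master or main, per your git defaults
git commit -q --allow-empty -m "initial docs"

# The working branch from the article's example
git checkout -q -b PMM-5012_dbazhenov_introduction
echo "PostgreSQL and ProxySQL" > index.rst
git add index.rst
git commit -q -m "PMM-5012 PostgreSQL and ProxySQL have been added to the home page"

# Before opening a pull request, confirm the branch changes ONLY the intended files
git diff --stat "$base"...
```

&lt;p&gt;If the stat output lists files you didn’t mean to touch, remove them from the branch before pushing.&lt;/p&gt;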
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/01/PMM-Contribute-5_hu_c8205f9871ed0107.png 480w, https://percona.community/blog/2020/01/PMM-Contribute-5_hu_6214fa658463ed88.png 768w, https://percona.community/blog/2020/01/PMM-Contribute-5_hu_bf49d9cb7dc9b968.png 1400w"
src="https://percona.community/blog/2020/01/PMM-Contribute-5.png" alt=" " /&gt;&lt;/figure&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/01/PMM-Contribute-4.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Once you’ve checked that the pull request has only the intended changes, you can make a second pull request, but this time it’s to the Percona repository.&lt;/p&gt;
&lt;p&gt;Here’s my pull request: &lt;a href="https://github.com/percona/pmm-doc/pull/45" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm-doc/pull/45&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/01/PMM-Contribute-6_hu_a24e26e2ae910c77.png 480w, https://percona.community/blog/2020/01/PMM-Contribute-6_hu_129d0bcaabd8198c.png 768w, https://percona.community/blog/2020/01/PMM-Contribute-6_hu_bae6a0276ffc5110.png 1400w"
src="https://percona.community/blog/2020/01/PMM-Contribute-6.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="5-passing-a-review"&gt;5. Passing a review&lt;/h3&gt;
&lt;p&gt;All submissions are thoroughly reviewed before being released. This guarantees the quality and safety of PMM. Even if it’s “just” documentation, it has a very important role to play in the user experience.&lt;/p&gt;
&lt;p&gt;You will also need to confirm the Contributor License Agreement.&lt;/p&gt;
&lt;p&gt;Once I’d submitted my changes, I waited a little while, and then the Percona team checked my work and sent it back to me for improvement. I made the necessary changes and – this is an important point – I sent them to the &lt;strong&gt;same pull request&lt;/strong&gt;.
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/01/PMM-Contribute-8-lic_hu_889bfe8a10809cdd.png 480w, https://percona.community/blog/2020/01/PMM-Contribute-8-lic_hu_e8d54b73856ac009.png 768w, https://percona.community/blog/2020/01/PMM-Contribute-8-lic_hu_564bd6e3f781cf5f.png 1400w"
src="https://percona.community/blog/2020/01/PMM-Contribute-8-lic.png" alt=" " /&gt;&lt;/figure&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2020/01/PMM-Contribute-9_hu_cf73d2ff4193d390.png 480w, https://percona.community/blog/2020/01/PMM-Contribute-9_hu_e6f3ad12818c865a.png 768w, https://percona.community/blog/2020/01/PMM-Contribute-9_hu_257f945253406da2.png 1400w"
src="https://percona.community/blog/2020/01/PMM-Contribute-9.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h4 id="release"&gt;Release&lt;/h4&gt;
&lt;p&gt;There’s nothing for you to do here; the Percona team creates the software and documentation releases.&lt;/p&gt;
&lt;p&gt;After a few days, Percona published my changes to the PMM documentation site.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/index.html" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/doc/percona-monitoring-and-management/2.x/index.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s how I ended up on the list of pmm-doc contributors.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/01/PMM-Contribute-10.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Contributing to documentation is a great way to start your journey as an open source contributor, especially if you are not too familiar with git and GitHub. If you’d like to start contributing to open source, then I recommend you try contributing to the PMM documentation. Instructions here: &lt;a href="https://github.com/percona/pmm-doc" target="_blank" rel="noopener noreferrer"&gt;https://github.com/percona/pmm-doc&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;All the same, I realize that documentation is not for everyone, even as a means of introduction. So here are some ideas and options for contributing to PMM in other ways: &lt;a href="https://www.percona.com/community/contributions/pmm" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/community/contributions/pmm&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As mentioned above, I’m more than happy to help you out. Just send me an email at &lt;a href="mailto:community-team@percona.com"&gt;community-team@percona.com&lt;/a&gt; and add “PMM Community” to the subject line so that my colleagues know the email’s for me. Good luck!&lt;/p&gt;
      <author>Daniil Bazhenov</author>
      <category>daniil.bazhenov</category>
      <category>contributing</category>
      <category>contributions</category>
      <category>contributors</category>
      <category>documentation</category>
      <category>Open Source Databases</category>
      <category>Percona Monitoring and Management</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2020/01/PMM-Contribute-10_hu_7cafffa62fa0b179.jpg"/>
      <media:content url="https://percona.community/blog/2020/01/PMM-Contribute-10_hu_e8c3686c85efc412.jpg" medium="image"/>
    </item>
    <item>
      <title>Disk of Yesteryear Compared to Today’s SSD Drives</title>
      <link>https://percona.community/blog/2020/01/17/disk-of-yesteryear-compared-to-todays-ssd-drives/</link>
      <guid>https://percona.community/blog/2020/01/17/disk-of-yesteryear-compared-to-todays-ssd-drives/</guid>
      <pubDate>Fri, 17 Jan 2020 16:48:46 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2020/01/enrico-sottocorna-HOhR-t0yZIU-unsplash.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In my &lt;a href="https://www.percona.com/community-blog/2019/08/01/how-to-build-a-percona-server-stack-on-a-raspberry-pi-3/" target="_blank" rel="noopener noreferrer"&gt;last blog post&lt;/a&gt; I showed you how to get the entire Percona “Stack” up and running on a Raspberry Pi. This time around, I would like to show the impact on performance of using an SSD versus a standard hard disk.&lt;/p&gt;
&lt;p&gt;Disk performance is a key factor in &lt;a href="https://www.percona.com/software/mysql-database/percona-server" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt; (or any RDB platform) performance on a Raspberry Pi 4.&lt;/p&gt;
&lt;h2 id="test-set-up"&gt;Test set up&lt;/h2&gt;
&lt;p&gt;Each test below was run three times per hard disk, and I took the best of the three for comparison.&lt;/p&gt;
&lt;p&gt;Hardware:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Raspberry Pi 4 with 4GB RAM.&lt;/li&gt;
&lt;li&gt;Disk 1: USB3 Western Digital My Passport Ultra, 1TB&lt;/li&gt;
&lt;li&gt;Disk 2: USB3 KEXIN 240GB Portable External SSD Drive&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hardware stayed consistent during the tests, except for the hard disk, which was switched between the KEXIN and the Western Digital drive.&lt;/p&gt;
&lt;p&gt;Software:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Raspbian Buster&lt;/li&gt;
&lt;li&gt;Percona Server version 5.7.27-30, built from source. See the blog post above for install instructions.&lt;/li&gt;
&lt;li&gt;Sysbench 1.0.17&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Sample my.cnf&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;port = 3306
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;socket = /var/lib/mysql/mysql.sock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pid-file = /var/lib/mysql/mysqld.pid
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;basedir = /usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;datadir = /data0/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;tmpdir = /data0/mysql/tmp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;general_log_file = /var/log/mysql/mysql-general.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log-error = /var/log/mysql/mysqld.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log_file = /var/log/mysql/log/slow_query.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log = 0 # Slow query log off
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lc-messages-dir = /usr/local/mysql/share
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;plugin_dir = /usr/local/mysql/lib/mysql/plugin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log-bin = /data0/mysql/binlog/mysql-bin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sync_binlog = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;expire_logs_days = 5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;server-id = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;binlog_format = mixed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_allowed_packet = 64M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_connections = 50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;max_user_connections = 40
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_cache_size=0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;query_cache_type=0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_data_home_dir = /data0/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_group_home_dir = /data0/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_files_in_group = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_size = 1536M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_file_size = 64M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_buffer_size = 8M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_log_at_trx_commit = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;#innodb_flush_log_at_trx_commit = 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_lock_wait_timeout = 50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_method = O_DIRECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_file_per_table = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_instances = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;skip-name-resolve=0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;thread_pool_size=20
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_temp_data_file_path = ../tmp/ibtmp1:12M:autoextend:max:8G&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Sysbench MySQL test prep step:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sysbench --db-driver=mysql —mysql-db=sbtest --oltp-table-size=500000 --oltp-tables-count=10 --threads=8 --mysql-host= --mysql-port=3306 --mysql-user= --mysql-password=
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua run&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="test-1"&gt;Test 1&lt;/h2&gt;
&lt;p&gt;This was done using the: KEXIN 240GB Portable External SSD Drive. Sysbench command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sysbench --db-driver=mysql --mysql-db=sbtest --report-interval=2 --mysql-table-engine=innodb --oltp-table-size=500000 --oltp-tables-count=10 --oltp-test-mode=complex --threads=10 --time=150 —mysql-host= --mysql-port=3306 —mysql-user= —mysql-password= /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua run&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SQL statistics:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; queries performed:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; read: 486542
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; write: 139012
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; other: 69506
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total: 695060
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; transactions: 34753 (231.62 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; queries: 695060 (4632.45 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ignored errors: 0 (0.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; reconnects: 0 (0.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;General statistics:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total time: 150.0362s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total number of events: 34753
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Latency (ms):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; min: 20.28
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; avg: 43.16
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; max: 94.32
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 95th percentile: 57.87
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sum: 1500044.61
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Threads fairness:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; events (avg/stddev): 3475.3000/368.77
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; execution time (avg/stddev): 150.0045/0.01&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you can see, the performance with the KEXIN (SSD) drive was pretty good:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;transactions: 34753 (231.62 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;queries: 695060 (4632.45 per sec.)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="test-2"&gt;Test 2&lt;/h2&gt;
&lt;p&gt;This was done using the: Western Digital My Passport Ultra 1TB drive.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SQL statistics:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; queries performed:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; read: 60984
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; write: 17424
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; other: 8712
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total: 87120
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; transactions: 4356 (29.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; queries: 87120 (579.94 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ignored errors: 0 (0.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; reconnects: 0 (0.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;General statistics:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total time: 150.2160s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total number of events: 4356
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Latency (ms):
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; min: 23.26
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; avg: 344.75
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; max: 1932.12
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 95th percentile: 733.00
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sum: 1501739.03
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Threads fairness:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; events (avg/stddev): 435.6000/5.71
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; execution time (avg/stddev): 150.1739/0.05&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As you can see, the performance on the Western Digital drive was far worse:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;transactions: 4356 (29.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;queries: 87120 (579.94 per sec.)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
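&lt;p&gt;Putting the two OLTP runs side by side, the spinning disk sustained only about an eighth of the SSD’s throughput. A quick check with the numbers reported above:&lt;/p&gt;

```shell
# HDD throughput as a percentage of SSD throughput, using the sysbench figures above;
# both ratios come out at about 12.5%, i.e. roughly 8x slower
awk 'BEGIN { printf "tps: %.1f%% of SSD\n", 29.00 / 231.62 * 100 }'
awk 'BEGIN { printf "qps: %.1f%% of SSD\n", 579.94 / 4632.45 * 100 }'
```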
&lt;h3 id="disk-io-tests"&gt;Disk IO Tests&lt;/h3&gt;
&lt;p&gt;KEXIN:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Operations performed: 208123 Read, 138748 Write, 443904 Other = 790775 Total
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Read 3.1757Gb Written 2.1171Gb Total transferred 5.2928Gb (18.066Mb/sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 1156.24 Requests/sec executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Test execution summary:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total time: 300.0004s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total number of events: 346871
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total time taken by event execution: 113.1569
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; per-request statistics:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; min: 0.02ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; avg: 0.33ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; max: 31.07ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; approx. 95 percentile: 0.60ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Threads fairness:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; events (avg/stddev): 346871.0000/0.00
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; execution time (avg/stddev): 113.1569/0.00&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Western Digital:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Operations performed: 24570 Read, 16380 Write, 52352 Other = 93302 Total
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Read 383.91Mb Written 255.94Mb Total transferred 639.84Mb (2.1327Mb/sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; 136.50 Requests/sec executed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Test execution summary:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total time: 300.0103s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total number of events: 40950
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; total time taken by event execution: 230.0220
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; per-request statistics:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; min: 0.03ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; avg: 5.62ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; max: 692.52ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; approx. 95 percentile: 13.96ms
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Threads fairness:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; events (avg/stddev): 40950.0000/0.00
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; execution time (avg/stddev): 230.0220/0.00&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As you can see, the Western Digital drive managed only about 12.5% of the KEXIN drive’s transactions per second, and likewise only about 12.5% of its queries per second. Even the sysbench file I/O test showed an extreme difference between the two drives: there is a 13.36ms gap between their 95th percentile latencies. KEXIN:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;transactions: 34753 (231.62 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;queries: 695060 (4632.45 per sec.)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Western Digital:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;transactions: 4356 (29.00 per sec.)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;queries: 87120 (579.94 per sec.)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
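For illustration, the 12.5% figure and the latency gap can be checked directly from the numbers quoted above; a quick Python sketch (values copied from the sysbench output blocks):

```python
# Throughput and latency figures quoted from the sysbench runs above
kexin_tps, wd_tps = 231.62, 29.00        # transactions per second
kexin_qps, wd_qps = 4632.45, 579.94      # queries per second
kexin_p95_ms, wd_p95_ms = 0.60, 13.96    # approx. 95th percentile latency

# Fraction of the KEXIN drive's throughput that the Western Digital drive reaches
tps_ratio = wd_tps / kexin_tps
qps_ratio = wd_qps / kexin_qps
p95_gap_ms = wd_p95_ms - kexin_p95_ms

print(f"WD reaches {tps_ratio:.1%} of KEXIN tps, {qps_ratio:.1%} of qps; "
      f"p95 gap: {p95_gap_ms:.2f} ms")
```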
&lt;p&gt;With the cost of SSD drives dropping, we can see that the Raspberry Pi 4, 4GB with an SSD drive is a good choice for a small business (or anyone) that needs a good robust database at an affordable price.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@enricosottocorna?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Enrico Sottocorna&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/berries-spoons?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Percona Server for MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2020/01/enrico-sottocorna-HOhR-t0yZIU-unsplash_hu_bdcf89c101a6bdfa.jpg"/>
      <media:content url="https://percona.community/blog/2020/01/enrico-sottocorna-HOhR-t0yZIU-unsplash_hu_8188425365f5f8cf.jpg" medium="image"/>
    </item>
    <item>
      <title>A First Look at Amazon RDS Proxy</title>
      <link>https://percona.community/blog/2020/01/07/a-first-look-at-amazon-rds-proxy/</link>
      <guid>https://percona.community/blog/2020/01/07/a-first-look-at-amazon-rds-proxy/</guid>
      <pubDate>Tue, 07 Jan 2020 11:45:40 UTC</pubDate>
      <description>At re:Invent in Las Vegas in December 2019, AWS announced the public preview of RDS Proxy, a fully managed database proxy that sits between your application and RDS. The new service offers to “share established database connections, improving database efficiency and application scalability”.</description>
      <content:encoded>&lt;p&gt;At &lt;a href="https://reinvent.awsevents.com/" target="_blank" rel="noopener noreferrer"&gt;re:Invent&lt;/a&gt; in Las Vegas in December 2019, &lt;strong&gt;AWS announced the public preview of &lt;a href="https://aws.amazon.com/rds/proxy/" target="_blank" rel="noopener noreferrer"&gt;RDS Proxy&lt;/a&gt;&lt;/strong&gt;, a fully managed database proxy that sits between your application and RDS. The new service offers to “&lt;em&gt;share established database connections, improving database efficiency and application scalability”&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;But one of the benefits that caught my eye is the ability to reduce the downtime in case of an instance failure and a failover. As per the announcement:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2019/12/allie-smith-zp-0uEqBwpU-unsplash-50_hu_fbe2b632fd0d61a4.jpg 480w, https://percona.community/blog/2019/12/allie-smith-zp-0uEqBwpU-unsplash-50_hu_60b8af3720c9d5c0.jpg 768w, https://percona.community/blog/2019/12/allie-smith-zp-0uEqBwpU-unsplash-50_hu_83995111e92d9238.jpg 1400w"
src="https://percona.community/blog/2019/12/allie-smith-zp-0uEqBwpU-unsplash-50.jpg" alt="Photo by Allie Smith on Unsplash" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“In case of a failure, RDS Proxy automatically connects to a standby database instance while preserving connections from your application and reduces failover times for RDS and Aurora multi-AZ databases by up to 66%”&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;You can read more about the announcement and the new service on the AWS &lt;a href="https://aws.amazon.com/about-aws/whats-new/2019/12/amazon-rds-proxy-available-in-preview/" target="_blank" rel="noopener noreferrer"&gt;blog&lt;/a&gt; but as the service is already available in public preview, it is time to give it a try.&lt;/p&gt;
&lt;h2 id="what-does-reduces-failover-times-by-66-mean-and-how-can-we-test-it"&gt;What does “reduces failover times by 66%” mean and how can we test it?&lt;/h2&gt;
&lt;p&gt;According to the documentation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results.”&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;So I decided to perform a simple test, using only two terminals, a MySQL client and a while loop in Bash: I wanted to check what happens when I trigger &lt;strong&gt;a forced failover (reboot with failover)&lt;/strong&gt; on a Multi AZ RDS instance running MySQL 5.7.26 behind a RDS Proxy.&lt;/p&gt;
&lt;h3 id="the-simplest-test"&gt;The simplest test&lt;/h3&gt;
&lt;p&gt;I created a new proxy &lt;em&gt;“test-proxy”&lt;/em&gt; that pointed to a m5.large Multi AZ &lt;em&gt;“test-rds”&lt;/em&gt; instance. And I set the idle client connection timeout to 3 minutes, a value that should allow us to avoid dropping connections given the expected failover time on the RDS instance.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/12/Screenshot_2019-12-19-RDS-%C2%B7-AWS-Console.png" alt="Creating RDS Proxy" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;And after a few minutes I was ready to go. I started two while loops against the proxy and against the instance, each retrieving current time from MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ while true; do mysql -s -N -h test-proxy.proxy-cqz****wmlnh.us-east-1.rds.amazonaws.com -u testuser -e "select now()"; sleep 2; done
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ while true; do mysql -s -N -h test-rds.cqz****wmlnh.us-east-1.rds.amazonaws.com -u testuser -e "select now()"; sleep 2; done&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Admittedly, this is a pretty basic and limited approach, but one that can quickly give a feel for how RDS Proxy performs during a forced failover.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;test-rds instance&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:48
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:52
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:54
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;test-proxy proxy&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:48
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:52
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:45:54
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Which terminal was going to be the winner and have the smallest gap in the time &lt;strong&gt;once I triggered the reboot with failover?&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws rds reboot-db-instance --db-instance-identifier test-rds --force-failover
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s see the results. &lt;strong&gt;test-rds instance&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3b" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3b"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:47:31
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:47:33
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:49:44
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:49:46
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;test-proxy proxy&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:47:31
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:47:33
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:47:56
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-12-16 18:47:58
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;From a delay of 129 seconds for the “test-rds” instance to 21 seconds for the proxy&lt;/strong&gt;, it is quite a significant difference. Even better than the advertised 66%. I performed the test a couple more times to make sure the result was not a one-off, but the numbers are pretty consistent and &lt;strong&gt;the gap was always significant&lt;/strong&gt;.&lt;/p&gt;
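Those stall figures can be reproduced from the timestamps above; a small Python sketch, assuming (as the numbers suggest) that the 2-second sleep between polls is subtracted from the raw gap:

```python
from datetime import datetime

POLL_INTERVAL_S = 2  # the `sleep 2` between the mysql calls in the while loops

def stall_seconds(last_before: str, first_after: str) -> float:
    """Gap between the last reply before the failover and the first one after,
    minus the expected polling interval."""
    fmt = "%Y-%m-%d %H:%M:%S"
    gap = datetime.strptime(first_after, fmt) - datetime.strptime(last_before, fmt)
    return gap.total_seconds() - POLL_INTERVAL_S

instance_stall = stall_seconds("2019-12-16 18:47:33", "2019-12-16 18:49:44")  # direct: 129 s
proxy_stall = stall_seconds("2019-12-16 18:47:33", "2019-12-16 18:47:56")     # proxy: 21 s
reduction = 1 - proxy_stall / instance_stall  # ~84%, beating the advertised 66%
```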
&lt;h3 id="main-limitations-and-caveats"&gt;Main limitations and caveats&lt;/h3&gt;
&lt;p&gt;As of today, RDS Proxy is in public preview and available for RDS MySQL (MySQL 5.6 and MySQL 5.7) and Aurora MySQL. There is currently no support for RDS PostgreSQL or Aurora PostgreSQL. And it’s important to note: &lt;strong&gt;the proxy cannot yet cope with a change of instance size or class once it has been created. That means it cannot be used to reduce downtime during a vertical scaling of the instance, which would be one of the main scenarios for the product.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can still modify the Multi-AZ RDS instance, but the proxy will not be able to recover after the scaling operation. It will still be there, but will only return a “MySQL server has gone away” error.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 2006 (HY000) at line 1: MySQL server has gone away
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 1105 (HY000) at line 1: Unknown error
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 2006 (HY000) at line 1: MySQL server has gone away
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 2006 (HY000) at line 1: MySQL server has gone away
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 2006 (HY000) at line 1: MySQL server has gone away&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;That is actually expected. As per the documentation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Currently, proxies don’t track any changes to the set of DB instances within an Aurora DB cluster. Those changes include operations such as host replacements, instance renames, port changes, scaling instances up or down, or adding or removing DB instances.”&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;You can find all the current limitations &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html#rds-proxy.limitations" target="_blank" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="what-about-costs"&gt;What about costs?&lt;/h2&gt;
&lt;p&gt;Compared to other more convoluted AWS models, &lt;strong&gt;the pricing structure of RDS Proxy is actually &lt;a href="https://aws.amazon.com/rds/proxy/pricing/" target="_blank" rel="noopener noreferrer"&gt;simple&lt;/a&gt;&lt;/strong&gt;: you pay a fixed hourly amount ($0.015 in us-east-1) per vCPU of the underlying database instance, regardless of instance class or other configurations. The larger the instance running behind the Proxy, the higher the price.&lt;/p&gt;
&lt;h3 id="how-is-that-going-to-affect-your-overall-rds-costs"&gt;How is that going to affect your overall RDS costs?&lt;/h3&gt;
&lt;p&gt;Let’s take two popular instances t3.small (1vCPU) and m5.large (2 vCPU): the cost of the Proxy is about 12 USD and 24 USD per month. That is about 8% on top of the cost of the Multi AZ instance for the m5.large, and over 20% for the t3.small.&lt;/p&gt;
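As a back-of-the-envelope check of those monthly figures (using the $0.015 per vCPU-hour us-east-1 price and the vCPU counts quoted above, and an average month of roughly 730 hours):

```python
PRICE_PER_VCPU_HOUR = 0.015  # RDS Proxy, us-east-1, at the time of writing
HOURS_PER_MONTH = 730        # ~24 * 365 / 12

def monthly_proxy_cost(vcpus: int) -> float:
    """Fixed hourly rate per vCPU of the underlying instance, regardless of class."""
    return PRICE_PER_VCPU_HOUR * HOURS_PER_MONTH * vcpus

t3_small = monthly_proxy_cost(1)  # ~11 USD/month, in the ballpark of the ~12 quoted
m5_large = monthly_proxy_cost(2)  # ~22 USD/month, in the ballpark of the ~24 quoted
```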
&lt;p&gt;Of course, as you are likely preserving connections, you might be able to absorb the cost of the proxy itself by running a smaller instance, but that might not always be the case.&lt;/p&gt;
&lt;p&gt;Note that, as per the current documentation, the Amazon RDS Proxy preview was free only until the end of 2019.&lt;/p&gt;
&lt;p&gt;To recap, &lt;strong&gt;RDS Proxy is a new service by Amazon and still in preview, but the results in terms of reduced failover times are really promising&lt;/strong&gt;, on top of providing a simpler layer to handle database connections for serverless architectures.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Photo by Allie Smith on &lt;a href="https://unsplash.com/" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Renato Losio</author>
      <category>renato-losio</category>
      <category>Amazon RDS</category>
      <category>AWS</category>
      <category>aws</category>
      <category>DevOps</category>
      <category>MySQL</category>
      <category>proxy</category>
      <category>RDS</category>
      <category>RDS Proxy</category>
      <media:thumbnail url="https://percona.community/blog/2019/12/allie-smith-zp-0uEqBwpU-unsplash-50_hu_478971ef086a83c5.jpg"/>
      <media:content url="https://percona.community/blog/2019/12/allie-smith-zp-0uEqBwpU-unsplash-50_hu_fa2ec7087352b42.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Server for MySQL 8.0 – New Data Masking Feature</title>
      <link>https://percona.community/blog/2019/12/13/percona-server-for-mysql-8-0-new-data-masking-feature/</link>
      <guid>https://percona.community/blog/2019/12/13/percona-server-for-mysql-8-0-new-data-masking-feature/</guid>
      <pubDate>Fri, 13 Dec 2019 10:43:14 UTC</pubDate>
      <description>Database administrators are responsible for maintaining the privacy and integrity of data. When the data contains confidential information, your company has a legal obligation to ensure that privacy is maintained. Even so, being able to access the information contained in that dataset, for example for testing or reporting purposes, has great value so what to do? MySQL Enterprise Edition offers data masking and de-identification, so I decided to contribute similar functionality to Percona Server for MySQL. In this post, I provide some background context and information on how to use these new functions in practice.</description>
      <content:encoded>&lt;p&gt;Database administrators are responsible for maintaining the privacy and integrity of data. When the data contains confidential information, your company has a legal obligation to ensure that privacy is maintained. Even so, being able to access the information contained in that dataset, for example for testing or reporting purposes, has great value so what to do? &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/data-masking.html" target="_blank" rel="noopener noreferrer"&gt;MySQL Enterprise Edition&lt;/a&gt; offers data masking and de-identification, so I decided to contribute similar functionality to &lt;a href="https://www.percona.com/doc/percona-server/LATEST/security/data-masking.html" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt;. In this post, I provide some background context and information on how to use these new functions in practice.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/12/data-masking-Percona-Server-for-MySQL.jpg" alt="Data Masking in Percona Server for MySQL 8.0.17" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="some-context"&gt;Some context&lt;/h2&gt;
&lt;p&gt;One of the most important assets of any company is data. Having good data allows engineers to build better systems and user experiences.&lt;/p&gt;
&lt;p&gt;Even through our most trivial activities, we continuously generate and share great volumes of data. I’m walking down the street and if I take a look at my phone it’s quite straightforward to get recommendations for a place to have lunch. The platform knows that it’s almost lunch time and that I have visited this nearby restaurant, or a similar one, a few times in the past. Sounds cool, right?&lt;/p&gt;
&lt;p&gt;But this process could be more manual than we might think at first. Even if the system has implemented things like AI or Machine Learning, a human will have validated the results; they might have taken a peek to ensure that everything is fine; or perhaps they are developing some new cool feature that must be tested… And this means that someone, somewhere has the ability to access my data. Or your data.&lt;/p&gt;
&lt;p&gt;Now, that is not so great, is it?&lt;/p&gt;
&lt;p&gt;In the last decade or so, governments around the world have taken this challenge quite seriously. They have enforced a series of rules to guarantee that the data is not only safely stored, but also safely used. I’m sure you will have heard terms like PCI, GDPR or HIPAA. They contain mandatory guidelines for how our data can be used, for primary or secondary purposes, and if it can be used at all.&lt;/p&gt;
&lt;h2 id="data-masking-and-de-identification"&gt;Data masking and de-identification&lt;/h2&gt;
&lt;p&gt;One of the most basic safeguarding rules is that if the data is to be used for secondary purposes – such as for data analytics – it has to be de-identified in a way that makes it impossible to identify the original individual.&lt;/p&gt;
&lt;p&gt;Let’s say that the company ACME is storing employee data.&lt;/p&gt;
&lt;p&gt;We will use the &lt;a href="https://github.com/datacharmer/test_db" target="_blank" rel="noopener noreferrer"&gt;example database of employees&lt;/a&gt; that’s freely available.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Employee number
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;First name
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Last name
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Birth date
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Gender
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Hire date
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Gross salary
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Salary from date
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Salary to date&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We can clearly see that all those fields can be classified as private information. Some of these directly identify the original individual, like employee number or first + last name. Others could be used for indirect identification: I could ask my co-workers their birthday and guess the owner of that data using birth date.&lt;/p&gt;
&lt;p&gt;So, here is where de-identification and data-masking come into play. But what are the differences?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;De-identification&lt;/strong&gt; transforms the original data into something different that could look more or less real. For example, I could de-identify birth date and get a different date.&lt;/p&gt;
&lt;p&gt;However, this method would make that information unusable if I want to see the relationship between salary and employee’s age.&lt;/p&gt;
&lt;p&gt;On the other hand, &lt;strong&gt;data-masking&lt;/strong&gt; transforms the original data leaving some part untouched. I could mask birth date, replacing the month and day with January first. That way, the year would be retained and that would allow us to identify that salary–employee’s age relationship.&lt;/p&gt;
&lt;p&gt;Of course, if the dataset I’m working with is not big enough, certain methods of data-masking would be inappropriate as I could still deduce who the data belonged to.&lt;/p&gt;
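To make the distinction concrete, here is a minimal Python sketch of the birth-date example above (the helper names are made up for illustration, not part of any MySQL feature): de-identification replaces the value outright, while masking keeps the analytically useful part.

```python
import random
from datetime import date

def deidentify_birth_date(_original: date) -> date:
    """De-identification: replace with a random plausible date; the original is lost."""
    return date(random.randint(1950, 2000), random.randint(1, 12), random.randint(1, 28))

def mask_birth_date(original: date) -> date:
    """Masking: keep the year (so salary-vs-age analysis still works), blank month/day."""
    return date(original.year, 1, 1)

masked = mask_birth_date(date(1987, 6, 23))  # -> date(1987, 1, 1)
```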
&lt;h2 id="mysql-data-masking"&gt;MySQL data masking&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Oracle’s MySQL Enterprise Edition&lt;/strong&gt; offers a &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/data-masking.html" target="_blank" rel="noopener noreferrer"&gt;de-identification and data-masking solution for MySQL&lt;/a&gt;, using a flexible set of functions that cover most of our needs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Percona Server for MySQL 8.0.17&lt;/strong&gt; introduces that functionality as &lt;a href="https://www.percona.com/doc/percona-server/LATEST/security/data-masking.html" target="_blank" rel="noopener noreferrer"&gt;an open source plugin&lt;/a&gt;, and is compatible with Oracle’s implementation. You no longer need to code slow and complicated stored procedures to achieve data masking, and you can migrate the processes that were written for the MySQL Enterprise Edition to Percona Server for MySQL. Go grab a cup of coffee and contribute something cool to the community with all that time you have got back. ☺&lt;/p&gt;
&lt;h2 id="in-the-lab"&gt;In the lab&lt;/h2&gt;
&lt;p&gt;Put on your thinking cap and let’s see how it works.&lt;/p&gt;
&lt;p&gt;First we need an instance of Percona Server for MySQL 8.0.17 or newer. I think containers are the most flexible way to test new stuff, so I will be using that, but you could use a virtual server or just a traditional setup. Let’s download the latest version of Percona Server for MySQL in a ready-to-run container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker pull percona:8.0.17-1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Eventually that command should work but sadly, Percona had not yet built this version of the Docker image when this article was written. Building it yourself is quite simple, though, and by the time you read this it will likely already be there.&lt;/p&gt;
&lt;p&gt;Once the image is in place, running an instance of Percona Server for MySQL has never been so easy:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker run --name ps -e MYSQL_ROOT_PASSWORD=secret -d percona:8.0.17-8&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We’ll log on to the new container:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker exec -ti ps mysql -u root -p&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now is the time to download the test database employees from &lt;a href="https://github.com/datacharmer/test_db" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and load it into our Percona Server. You can follow the official instructions on the project page.&lt;/p&gt;
&lt;p&gt;The next step is to enable the data de-identification and masking feature. Installing the data masking plugin in Percona Server for MySQL is easier than in Oracle MySQL.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; INSTALL PLUGIN data_masking SONAME 'data_masking.so';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.06 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This automatically defines a set of global functions in our MySQL instance, so we don’t need to do anything else.&lt;/p&gt;
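&lt;p&gt;As a quick sanity check, we can call a couple of the new functions right away (the generated values are random, so your output will differ):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT mask_inner('555-86-7890', 4, 0) AS masked, gen_rnd_us_phone() AS phone;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;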
&lt;h3 id="a-new-concept-dictionaries"&gt;A new concept: Dictionaries&lt;/h3&gt;
&lt;p&gt;Sometimes we would like to generate new data by selecting values from a predefined collection. For example, we might want first name values that are really first names and not random alphanumeric strings. This makes our masked data look real, and it’s perfect for creating demo or QA environments.&lt;/p&gt;
&lt;p&gt;For this task we have &lt;strong&gt;dictionaries&lt;/strong&gt;. They are nothing more than text files, one value per line, that are loaded into MySQL memory. Be aware that the contents of the file are fully loaded into memory, and that the dictionary only exists while MySQL is running. Keep this in mind before loading any huge file, and remember that dictionaries must be reloaded after restarting the instance.&lt;/p&gt;
&lt;p&gt;For our lab we will load two dictionaries holding first and last names. You can use these files or create different ones: &lt;a href="https://raw.githubusercontent.com/philipperemy/name-dataset/master/names_dataset/first_names.all.txt" target="_blank" rel="noopener noreferrer"&gt;first names&lt;/a&gt; and &lt;a href="https://raw.githubusercontent.com/philipperemy/name-dataset/master/names_dataset/last_names.all.txt" target="_blank" rel="noopener noreferrer"&gt;last names&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Store the files in a folder on your database server (or container) that is readable by the mysqld process.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://raw.githubusercontent.com/philipperemy/name-dataset/master/names_dataset/first_names.all.txt
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker cp first_names.all.txt ps:/tmp/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://raw.githubusercontent.com/philipperemy/name-dataset/master/names_dataset/last_names.all.txt
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker cp last_names.all.txt ps:/tmp/&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Once the files are in our server we can map them as MySQL dictionaries.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; select gen_dictionary_load('/tmp/first_names.all.txt', 'first_names');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| gen_dictionary_load('/tmp/first_names.all.txt', 'first_names') |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Dictionary load success                                        |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.04 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; select gen_dictionary_load('/tmp/last_names.all.txt', 'last_names');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| gen_dictionary_load('/tmp/last_names.all.txt', 'last_names') |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Dictionary load success                                      |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.03 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
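&lt;p&gt;We can pull a random entry from each dictionary to confirm the load worked (again, your values will differ):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT gen_dictionary('first_names') AS fn, gen_dictionary('last_names') AS ln;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;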
&lt;h3 id="masking-some-data"&gt;Masking some data&lt;/h3&gt;
&lt;p&gt;Now let’s take another look at our employees table:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; show columns from employees;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------+---------------+------+-----+---------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Field      | Type          | Null | Key | Default | Extra |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------+---------------+------+-----+---------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| emp_no     | int(11)       | NO   | PRI | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| birth_date | date          | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| first_name | varchar(14)   | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| last_name  | varchar(16)   | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| gender     | enum('M','F') | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| hire_date  | date          | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------+---------------+------+-----+---------+-------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;OK, it’s very likely we will want to de-identify everything in this table. You can apply different methods to achieve your security requirements, but I will create a view with the following transformations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;emp_no&lt;/strong&gt;: get a random value from 900,000,000 to 999,999,999&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;birth_date&lt;/strong&gt;: set it to January 1st of the original year&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;first_name&lt;/strong&gt;: set a random first name from a list of names that we have in a text file&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;last_name&lt;/strong&gt;: set a random last name from a list of names that we have in a text file&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;gender&lt;/strong&gt;: no transformation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;hire_date&lt;/strong&gt;: set it to January 1st of the original year&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE VIEW deidentified_employees
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;AS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  gen_range(900000000, 999999999) as emp_no,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  makedate(year(birth_date), 1) as birth_date,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  gen_dictionary('first_names') as first_name,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  gen_dictionary('last_names') as last_name,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  gender,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  makedate(year(hire_date), 1) as hire_date
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM employees;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Let’s check how the data looks in our de-identified view.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM employees LIMIT 10;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+------------+------------+-----------+--------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| emp_no | birth_date | first_name | last_name | gender | hire_date  |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+------------+------------+-----------+--------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 | 1953-09-02 | Georgi     | Facello   | M      | 1986-06-26 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10002 | 1964-06-02 | Bezalel    | Simmel    | F      | 1985-11-21 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10003 | 1959-12-03 | Parto      | Bamford   | M      | 1986-08-28 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10004 | 1954-05-01 | Chirstian  | Koblick   | M      | 1986-12-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10005 | 1955-01-21 | Kyoichi    | Maliniak  | M      | 1989-09-12 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10006 | 1953-04-20 | Anneke     | Preusig   | F      | 1989-06-02 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10007 | 1957-05-23 | Tzvetan    | Zielinski | F      | 1989-02-10 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10008 | 1958-02-19 | Saniya     | Kalloufi  | M      | 1994-09-15 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10009 | 1952-04-19 | Sumant     | Peac      | F      | 1985-02-18 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10010 | 1963-06-01 | Duangkaew  | Piveteau  | F      | 1989-08-24 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+------------+------------+-----------+--------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;10 rows in set (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM deidentified_employees LIMIT 10;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+------------+------------+---------------+--------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| emp_no    | birth_date | first_name | last_name     | gender | hire_date  |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+------------+------------+---------------+--------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 930277580 | 1953-01-01 | skaidrīte  | molash        | M      | 1986-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 999241458 | 1964-01-01 | grasen     | cessna        | F      | 1985-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 951699030 | 1959-01-01 | imelda     | josephpauline | M      | 1986-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 985905688 | 1954-01-01 | dunc       | burkhardt     | M      | 1986-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 923987335 | 1955-01-01 | karel      | wanamaker     | M      | 1989-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 917751275 | 1953-01-01 | mikrut     | allee         | F      | 1989-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 992344830 | 1957-01-01 | troyvon    | muma          | F      | 1989-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 980277046 | 1958-01-01 | aliziah    | tiwnkal       | M      | 1994-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 964622691 | 1952-01-01 | dominiq    | legnon        | F      | 1985-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 948247243 | 1963-01-01 | sedale     | tunby         | F      | 1989-01-01 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+------------+------------+---------------+--------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;10 rows in set (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The data looks quite different, but it remains good enough to apply some analytics and get meaningful results. Let’s de-identify the salaries table this time.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; show columns from salaries;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+---------+------+-----+---------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Field     | Type    | Null | Key | Default | Extra |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+---------+------+-----+---------+-------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| emp_no    | int(11) | NO   | PRI | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| salary    | int(11) | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| from_date | date    | NO   | PRI | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| to_date   | date    | NO   |     | NULL    |       |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+---------+------+-----+---------+-------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We could use something like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE VIEW deidentified_salaries
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;AS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gen_range(900000000, 999999999) as emp_no,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gen_range(40000, 80000) as salary,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mask_inner(date_format(from_date, '%Y-%m-%d'), 4, 0) as from_date,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mask_outer(date_format(to_date, '%Y-%m-%d'), 4, 2, '0') as to_date
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM salaries;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We are again using the gen_range function. For the dates, this time we use the very flexible mask_inner and mask_outer functions, which replace some of the characters in the original string. Let’s see how the data looks now.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In a real-life exercise we would want the same emp_no values across all the tables to keep referential integrity. This is where I think the original MySQL data-masking plugin falls short, as we don’t have deterministic functions that use the original value as a seed.&lt;/p&gt;&lt;/blockquote&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM salaries LIMIT 10;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+--------+------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| emp_no | salary | from_date  | to_date    |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+--------+------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  60117 | 1986-06-26 | 1987-06-26 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  62102 | 1987-06-26 | 1988-06-25 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  66074 | 1988-06-25 | 1989-06-25 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  66596 | 1989-06-25 | 1990-06-25 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  66961 | 1990-06-25 | 1991-06-25 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  71046 | 1991-06-25 | 1992-06-24 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  74333 | 1992-06-24 | 1993-06-24 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  75286 | 1993-06-24 | 1994-06-24 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  75994 | 1994-06-24 | 1995-06-24 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;|  10001 |  76884 | 1995-06-24 | 1996-06-23 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------+--------+------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;10 rows in set (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM deidentified_salaries LIMIT 10;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+--------+------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| emp_no    | salary | from_date  | to_date    |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+--------+------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 929824695 | 61543  | 1986XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 954275265 | 63138  | 1987XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 948145700 | 53448  | 1988XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 937927997 | 54704  | 1989XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 978459605 | 78179  | 1990XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 993464164 | 75526  | 1991XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 946692434 | 51788  | 1992XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 979870243 | 54807  | 1993XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 958708118 | 70647  | 1994XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 945701146 | 76056  | 1995XXXXXX | 0000-06-00 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------+--------+------------+------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;10 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
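&lt;p&gt;Coming back to the referential-integrity concern in the note above: until deterministic functions exist, one crude workaround (my own sketch, not part of the plugin) is to derive the substitute emp_no from a hash of the original value, which keeps it consistent across tables. Be aware that with a small ID range this mapping is guessable, so it is weaker than true random masking:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT 900000000 + (crc32(emp_no) % 100000000) AS emp_no, salary FROM salaries;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;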
&lt;h3 id="clean-up"&gt;Clean-up&lt;/h3&gt;
&lt;p&gt;Remember that when you’re done, you can free up memory by removing the dictionaries. Restarting the instance will also remove the dictionaries.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT gen_dictionary_drop('first_names');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| gen_dictionary_drop('first_names') |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Dictionary removed                 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.01 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT gen_dictionary_drop('last_names');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| gen_dictionary_drop('last_names') |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Dictionary removed                |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If you use the MySQL data-masking plugin to define different levels of access to the data, remember that you will need to load the dictionaries each time the instance is restarted. With this usage, for example, you could control the data that someone in support has access to, very much like a bargain-basement virtual private database solution. (I’m not proposing this for production systems!)&lt;/p&gt;
&lt;h2 id="other-de-identification-and-masking-functions"&gt;Other de-identification and masking functions&lt;/h2&gt;
&lt;p&gt;Percona Server for MySQL data masking includes more functions than the ones we’ve seen here. There are specialized functions for Primary Account Numbers (PAN), Social Security Numbers (SSN), phone numbers, and email addresses, as well as generic functions that allow us to de-identify types without a specialized method. Being an open source plugin, it should be quite easy to implement additional methods and contribute them to the broader community.&lt;/p&gt;
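&lt;p&gt;For example, a quick session generating and masking a PAN, an email address, and an SSN might look like this (all values are randomly generated):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT mask_pan(gen_rnd_pan()) AS pan, gen_rnd_email() AS email, gen_rnd_ssn() AS ssn;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;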
&lt;h2 id="next-steps"&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;Using these functions we can de-identify and mask any existing dataset. But if you are populating a lower-level environment with production data, you will want to store only the transformed data. To achieve this you can choose between various options.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Small volumes of data&lt;/strong&gt;: use the “de-identified” views to export the data with mysqldump or mysqlpump and load it into a new database.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Medium volumes of data&lt;/strong&gt;: clone the original database and de-identify the data locally using UPDATE statements.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Large volumes of data, option one&lt;/strong&gt;: using replication, create a master -&gt; slave chain with STATEMENT binlog format and define triggers that de-identify the data on the slave. That intermediate master can itself be a slave of your primary master (using log_slave_updates), so you don’t need to run your primary master in STATEMENT mode.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Large volumes of data, option two&lt;/strong&gt;: using multiplexing in &lt;a href="https://www.proxysql.com/" target="_blank" rel="noopener noreferrer"&gt;ProxySQL&lt;/a&gt;, configure ProxySQL to also send writes to a clone server where you have defined triggers that de-identify the data.&lt;/li&gt;
&lt;/ul&gt;
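&lt;p&gt;For the medium-volume option, the in-place de-identification on the clone could be as simple as this sketch (run it on the clone, never on production):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE employees
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SET first_name = gen_dictionary('first_names'),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    last_name  = gen_dictionary('last_names'),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    birth_date = makedate(year(birth_date), 1),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    hire_date  = makedate(year(hire_date), 1);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;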
&lt;h2 id="future-developments"&gt;Future developments&lt;/h2&gt;
&lt;p&gt;While de-identifying complex schemas we could find that, for example, the name of a person is stored in multiple tables (de-normalized data). In this case, these functions would generate a different name in each table and the resulting data would look broken. This could be solved with a variant of the dictionary functions that derives the value from the original value passed as a parameter:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gen_dictionary_deterministic('Francisco', 'first_names')&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This not-yet-available function would always return the same substitute for a given input and dictionary, but in such a way that the de-identification cannot be reversed. Oracle doesn’t currently support this, so we plan to expand the Percona data masking plugin to introduce it as a unique feature. That will be in another contribution, though, so stay tuned for more exciting changes to Percona Server for MySQL data masking.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;–&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Image: Photo by &lt;a href="https://unsplash.com/@finan?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Finan Akbar&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/mask?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content (although in this case, of course, we have tested the data masking feature incorporated into Percona Server for MySQL 8.0.17, just not the examples in this blog). Views expressed are the authors’ own. When using the advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup.&lt;/p&gt;</content:encoded>
      <author>Francisco Miguel Biete Banon</author>
      <category>data obfuscation</category>
      <category>data privacy</category>
      <category>identity protection</category>
      <category>Intermediate Level</category>
      <category>MySQL</category>
      <category>Percona Server for MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2019/12/data-masking-Percona-Server-for-MySQL_hu_579bb5525b9e0b33.jpg"/>
      <media:content url="https://percona.community/blog/2019/12/data-masking-Percona-Server-for-MySQL_hu_3d1019fef5fce818.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Presents: Test Like a Boss</title>
      <link>https://percona.community/blog/2019/09/25/percona-live-europe-presents-test-like-a-boss/</link>
      <guid>https://percona.community/blog/2019/09/25/percona-live-europe-presents-test-like-a-boss/</guid>
      <pubDate>Wed, 25 Sep 2019 06:31:58 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/09/dbdeployer.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;My first talk is a tutorial &lt;em&gt;Testing like a boss: Deploy and Test Complex Topologies With a Single Command&lt;/em&gt;, scheduled at &lt;a href="https://www.percona.com/live-agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe in Amsterdam&lt;/a&gt; on September 30th at 13:30.&lt;/p&gt;
&lt;p&gt;My second talk is &lt;em&gt;Amazing sandboxes with dbdeployer&lt;/em&gt;, scheduled on October 1st at 11:00. It covers the same topic as the tutorial, but a narrower set of features, all in the &lt;em&gt;amazing&lt;/em&gt; category.&lt;/p&gt;
&lt;p&gt;The tutorial introduces a challenging topic, because when people hear &lt;em&gt;testing&lt;/em&gt;, they imagine a troop of monkeys fiddling with a keyboard and a mouse, endlessly repeating a boring task. What I want to show is that testing is a creative activity and, with the right tools and mindset, it can be exciting and rewarding. During my work as a quality assurance engineer, I have always seen a boring task as an opportunity to automate. &lt;a href="https://github.com/datacharmer/dbdeployer" target="_blank" rel="noopener noreferrer"&gt;dbdeployer&lt;/a&gt;, the tool at the heart of my talk, was born from one such challenge. While working as a MySQL consultant, I realized that every customer was using a different version of MySQL. When they had a problem, I couldn’t just use the latest and greatest version and recommend they upgrade: almost nobody wanted to even consider that, and I can see the point. Sometimes, upgrading is a huge task that should be planned appropriately, and not done as a troubleshooting measure. If I wanted to assist my customers, I had to install their version, reproduce the problem, and propose a solution. After installing and reinstalling several versions of MySQL manually, and juggling dozens of options to use the right version for the right task, I decided to make a tool for that purpose. That was in 2006, and since then the tool has evolved to handle the newest features of MySQL, was rewritten almost two years ago, and is now being adopted by several categories of database professionals: developers, DBAs, support engineers, and quality assurance engineers.&lt;/p&gt;
&lt;p&gt;Looking at the user base of dbdeployer, it’s easy to reconsider the concept of &lt;em&gt;testing&lt;/em&gt;: it could be exploring the latest MySQL or Percona Server release, or building a sample Group Replication or Percona XtraDB Cluster, or comparing a given setup across different versions of MySQL. Still unconvinced? Read on!&lt;/p&gt;
&lt;h3 id="whats-the-catch-what-do-attendees-get-from-attending"&gt;What’s the catch? What do attendees get from attending?&lt;/h3&gt;
&lt;p&gt;In addition to opening their eyes to the beauty of testing, this tutorial will show several activities that a normal user would consider difficult to perform, time consuming, and error prone.&lt;/p&gt;
&lt;p&gt;The key message of this presentation is that users should focus on &lt;strong&gt;what&lt;/strong&gt; to do, and leave the details of &lt;strong&gt;how&lt;/strong&gt; to perform the task to the tools at their disposal. The examples will show that you can deploy complicated scenarios with just a few commands, usually in less than one minute, sometimes in less than ten seconds, and then spend your time with the real task, which is exploring, trying a particular feature, proving a point, and not doing manually and with errors what the tool can do for you quickly and precisely.&lt;/p&gt;
&lt;p&gt;Some examples to whet your appetite: you can deploy group replication in less than 30 seconds. And what about deploying two groups and running asynchronous replication between them? Even if you have done this before, it is a task that takes quite a while by hand. dbdeployer can run the whole setup (two clusters in group replication + asynchronous replication on top of it) in less than one minute. How about testing the new &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/clone-plugin.html" target="_blank" rel="noopener noreferrer"&gt;clone plugin?&lt;/a&gt; You can do it in a snap using dbdeployer, as &lt;a href="http://blog.wl0.org/2019/09/mysql-8-0-17-cloning-is-now-much-easier/" target="_blank" rel="noopener noreferrer"&gt;demonstrated recently by Simon Mudd&lt;/a&gt;, which proves the point that having the right tools makes your experiments easier.&lt;/p&gt;
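&lt;p&gt;To give a flavour of the commands involved (a sketch following dbdeployer’s documented syntax; it assumes the 8.0.17 binaries have already been unpacked with &lt;code&gt;dbdeployer unpack&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# a single sandboxed server
dbdeployer deploy single 8.0.17

# a three-node group replication cluster
dbdeployer deploy replication --topology=group 8.0.17
&lt;/code&gt;&lt;/pre&gt;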
&lt;p&gt;Another example? MySQL upgrade: dbdeployer can run a server upgrade for you faster than you can say “blueberry muffin” or maybe not that fast, but surely faster than reading the manual and following the instructions.&lt;/p&gt;
&lt;h3 id="what-else-is-in-store-at-perconalive-what-will-i-do-apart-from-charming-the-attendees"&gt;What else is in store at PerconaLive? What will I do apart from charming the attendees?&lt;/h3&gt;
&lt;p&gt;Percona Live Amsterdam is chock-full of good talks. I know because I was part of the review committee that has examined hundreds of proposals, and painfully approved only a portion of them. Things that I look forward to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;em&gt;InnoDB Cluster tutorial&lt;/em&gt; on Monday. Although I have seen this talk several times, the cluster has been improved continuously, and it is useful to see it in action. Besides, &lt;a href="https://lefred.be/" target="_blank" rel="noopener noreferrer"&gt;Lefred’s&lt;/a&gt; style of presentation is so engaging that I enjoy it every time.&lt;/li&gt;
&lt;li&gt;Jeremy Cole’s take on Google Cloud, on Tuesday afternoon. Jeremy has been at the top of the database game for a long time, and his views are always stimulating.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Backing up Wikipedia&lt;/em&gt;, with Jaime Crespo and Manuel Arostegui. Seeing how big deployments are dealt with is a sobering experience, which I highly recommend to newcomers and experts alike.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;ClickHouse materialized views&lt;/em&gt;, with Robert Hodges of Altinity. You may not be thrilled about the topic, but the speaker is a guarantee. Robert has been working with databases for several decades, and he knows his way around big data and difficult problems to solve. Looking forward to learning something new here.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are many more talks that I encourage you to peruse in &lt;a href="https://www.percona.com/live-agenda" target="_blank" rel="noopener noreferrer"&gt;the agenda&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As usual, the best part of the conference is networking in the intervals and around the venue before and after the event. This is where the best morsels of knowledge land serendipitously on my plate. See you soon!&lt;/p&gt;
&lt;p&gt;If you haven’t yet &lt;a href="https://www.percona.com/live-registration" target="_blank" rel="noopener noreferrer"&gt;registered&lt;/a&gt;, then you are invited to use the code &lt;strong&gt;CMESPEAK-GIUSEPPE&lt;/strong&gt; for a 20% discount.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/09/giuseppe-maxia-two-talks.jpg" alt=" " /&gt;&lt;/figure&gt; &lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Giuseppe Maxia</author>
      <category>conferences</category>
      <category>dbdeployer</category>
      <category>Events</category>
      <category>MySQL</category>
      <category>testing</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2019/09/dbdeployer_hu_e5464b33ecd5e5bd.jpg"/>
      <media:content url="https://percona.community/blog/2019/09/dbdeployer_hu_1d2451229773b4bd.jpg" medium="image"/>
    </item>
    <item>
      <title>Are your Database Backups Good Enough?</title>
      <link>https://percona.community/blog/2019/09/20/are-your-database-backups-good-enough/</link>
      <guid>https://percona.community/blog/2019/09/20/are-your-database-backups-good-enough/</guid>
      <pubDate>Fri, 20 Sep 2019 15:32:00 UTC</pubDate>
      <description>In the last few years there have been several examples of major service problems affecting businesses data: outages causing data inconsistencies; unavailability or data loss, and worldwide cyberattacks encrypting your files and asking for a ransom.</description>
      <content:encoded>&lt;p&gt;In the last few years, there have been several examples of major service problems affecting businesses’ data: outages causing data inconsistencies, unavailability, or data loss, and &lt;a href="https://en.wikipedia.org/wiki/WannaCry_ransomware_attack" target="_blank" rel="noopener noreferrer"&gt;worldwide cyberattacks encrypting your files and asking for a ransom&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/08/this_is_fine.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Database-related incidents are a very common industry issue, even if the root cause is not the database system itself. No matter whether your main relational system is MySQL, MariaDB, PostgreSQL, or AWS Aurora, there will come a time when you need to use backups to recover to a previous state. And when that happens, it is the worst possible moment to realize that your backup system hasn’t been working for months, or to test a cluster-wide recovery for the first time.&lt;/p&gt;
&lt;h2 id="forget-about-the-backups-it-is-all-about-recovery"&gt;Forget about the backups, it is all about recovery!&lt;/h2&gt;
&lt;p&gt;Let me be 100% clear: the question is not &lt;strong&gt;IF&lt;/strong&gt; data incidents like these can happen to you, but &lt;strong&gt;WHEN&lt;/strong&gt; they will happen and &lt;strong&gt;HOW&lt;/strong&gt; prepared you are to respond. It could be a bad application deploy, an external breach, a disgruntled employee, a hardware failure, a provider problem, a ransomware infection, or a network failure… Your relational data will eventually get lost, corrupted, or left in an inconsistent state, and “I have backups” will not be good enough. Recovery plans and tools have to be in place and in a healthy state.&lt;/p&gt;
&lt;p&gt;As the only two Site Reliability Engineers in charge of the Database Layer of &lt;a href="https://www.wikipedia.org/" target="_blank" rel="noopener noreferrer"&gt;Wikipedia&lt;/a&gt; and other projects at the &lt;a href="https://wikimediafoundation.org/" target="_blank" rel="noopener noreferrer"&gt;Wikimedia Foundation&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/manuel-arostegui-b977141/" target="_blank" rel="noopener noreferrer"&gt;Manuel&lt;/a&gt; and I grew worried about how to improve both our existing data recovery strategy and our provisioning systems. We have the responsibility to make sure that free knowledge contributed by &lt;a href="https://stats.wikimedia.org/v2/#/all-projects" target="_blank" rel="noopener noreferrer"&gt;millions of volunteers around the world&lt;/a&gt; remains available for future generations. As a colleague of ours once said: no worries, we are “only” in charge of maintaining &lt;a href="https://en.wikipedia.org/wiki/Encyclopedia_Galactica" target="_blank" rel="noopener noreferrer"&gt;the (probably) most valuable collaborative database ever created&lt;/a&gt; in the history of mankind! :-D&lt;/p&gt;
&lt;p&gt;Among the two of us we handle over &lt;strong&gt;half a petabyte of relational data&lt;/strong&gt; &lt;a href="https://grafana.wikimedia.org/d/000000278/mysql-aggregated?orgId=1&amp;var-dc=eqiad%20prometheus%2Fops&amp;var-group=All&amp;var-shard=All&amp;var-role=All" target="_blank" rel="noopener noreferrer"&gt;over hundreds of instances and servers&lt;/a&gt;, and manual work is off-limits to be efficient. Unlike other popular Internet services, we not only store metadata in MariaDB databases, &lt;strong&gt;we also store all content&lt;/strong&gt; (Wikitext).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We needed a system that was incredibly &lt;strong&gt;flexible&lt;/strong&gt;: it had to work both for large wiki databases (like the many terabytes of the &lt;a href="https://en.wikipedia.org/wiki/Special:Statistics" target="_blank" rel="noopener noreferrer"&gt;English Wikipedia&lt;/a&gt;) and for small but important internal database services, such as our &lt;a href="https://phabricator.wikimedia.org/" target="_blank" rel="noopener noreferrer"&gt;bug tracker&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fast&lt;/strong&gt;: able to recover data while saturating our &lt;a href="https://wikitech.wikimedia.org/wiki/Network_design" target="_blank" rel="noopener noreferrer"&gt;10Gbit network&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Granular&lt;/strong&gt;: able to recover a single row or an entire instance, to one server or an entire cluster, at any arbitrary point in the past.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reliable&lt;/strong&gt;: a low rate of failure, but when it did fail, it &lt;a href="https://docs.honeycomb.io/learning-about-observability/intro-to-observability/" target="_blank" rel="noopener noreferrer"&gt;should be detected immediately&lt;/a&gt;, not when it is too late.&lt;/li&gt;
&lt;li&gt;The system had to use exclusively &lt;strong&gt;free (open source) software&lt;/strong&gt; and be published itself under a &lt;a href="https://en.wikipedia.org/wiki/Free_software_license" target="_blank" rel="noopener noreferrer"&gt;free license&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We ended up with something like this (simplified view :-P):&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2019/08/Database_backups_overview.svg__hu_ff40939f6450b98e.jpg 480w, https://percona.community/blog/2019/08/Database_backups_overview.svg__hu_ad0bd2abf140e58.jpg 768w, https://percona.community/blog/2019/08/Database_backups_overview.svg__hu_7194b7b9a7d3c6c7.jpg 1400w"
src="https://percona.community/blog/2019/08/Database_backups_overview.svg_.jpg" alt="Workflow of backups and recovery at the Wikimedia Foundation" /&gt;&lt;/figure&gt;
&lt;em&gt;A wonderful example of “programmer art”&lt;/em&gt;
Like any application, a recovery system is never complete. However, after a year of planning, developing, and deploying our solution, we are ready to share what we have built so far with people outside our organization.&lt;/p&gt;
&lt;h2 id="our-presentation-at-percona-live-europe-2019"&gt;Our Presentation at Percona Live Europe 2019&lt;/h2&gt;
&lt;p&gt;A single blog post is not enough to tell the whole story of how we reached the current state; that is why &lt;strong&gt;we are going to present the work at the &lt;a href="https://www.percona.com/live-info" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe 2019 conference&lt;/a&gt;&lt;/strong&gt;, which will take place 29 September–2 October in Amsterdam. We will introduce the problem we wanted to solve, our design philosophy, the existing tooling and backup methods we use, backup checking, recovery verification, and general automation. You will be able to compare it all with your own setup and ask questions about why we chose certain paths, based on our experience.&lt;/p&gt;
&lt;p&gt;What we have set up may not be perfect, and may not work for you: your needs, and your environment, will be different. However, I expect our presentation will inspire you to design and set up better recovery systems in the future.&lt;/p&gt;
&lt;p&gt;See you in Amsterdam! And if you haven’t yet registered, then you are invited to use the code CMESPEAK-JAIME for a 20% discount.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Jaime Crespo</author>
      <category>amsterdam</category>
      <category>automation</category>
      <category>backups</category>
      <category>database</category>
      <category>Events</category>
      <category>InnoDB</category>
      <category>mariabackup</category>
      <category>mydumper</category>
      <category>MySQL</category>
      <category>Percona Live 2019</category>
      <category>perconalive</category>
      <category>recovery</category>
      <category>wikimedia</category>
      <category>wikipedia</category>
      <category>xtrabackup</category>
      <media:thumbnail url="https://percona.community/blog/2019/08/this_is_fine_hu_8363d78ea07c7bbb.jpg"/>
      <media:content url="https://percona.community/blog/2019/08/this_is_fine_hu_b454851bd219dc71.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe '19: MongoDB 4.2</title>
      <link>https://percona.community/blog/2019/09/18/percona-live-europe-19-mongodb-4-2/</link>
      <guid>https://percona.community/blog/2019/09/18/percona-live-europe-19-mongodb-4-2/</guid>
      <pubDate>Wed, 18 Sep 2019 15:22:25 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/09/percona-live-europe2019.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It’s all about MongoDB® 4.2 this time. MongoDB 4.2 was released about a month ago (still a newborn), and I am going to cover what’s new in three different areas: sharding, indexing, and the aggregation framework. I can promise you this: there are a lot of new features and improvements in MongoDB 4.2, and I am thrilled to present them to you. Join me at &lt;a href="https://www.percona.com/live-agenda" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe&lt;/a&gt;, and discover how distributed transactions, wildcard indexes, and materialized views (plus many other new features) actually work and fit your workload.&lt;/p&gt;
&lt;h2 id="this-talk-is-for-you-if"&gt;This talk is for you if…&lt;/h2&gt;
&lt;p&gt;… you are actively working with MongoDB, either as a DBA/SRE or a developer. I am confident that you will love the new features and will want to adopt them straight away after the presentation.&lt;/p&gt;
&lt;p&gt;If you are not working with MongoDB or you have never heard about MongoDB before, come join us and check if the 4.2 new features fit your needs. Maybe MongoDB 4.2 has the answer to a challenge you are currently facing with your existing datastore.&lt;/p&gt;
&lt;h2 id="other-presentations-im-looking-forward-to"&gt;Other presentations I’m looking forward to…&lt;/h2&gt;
&lt;p&gt;I wish I could be James Arthur Madrox (the Multiple Man) and attend all talks. I am going to attend all MongoDB-related talks, as all the Mongo topics are great this year. I will also try to attend as many Postgres talks as I can; I am very curious to find out how the Percona distribution for Postgres will make my DBA life easier. Keynotes and tutorials are also a must.&lt;/p&gt;
&lt;p&gt;And last but not least, Percona Europe returns to &lt;a href="https://www.percona.com/live-info" target="_blank" rel="noopener noreferrer"&gt;Amsterdam&lt;/a&gt;!!!&lt;/p&gt;
&lt;h4 id="more-about-percona-live-europe-2019"&gt;More about Percona Live Europe 2019&lt;/h4&gt;
&lt;p&gt;Antonios is presenting two talks at Percona Live Europe 2019: &lt;em&gt;New Indexing and Aggregation Pipeline Capabilities in MongoDB 4.2&lt;/em&gt; and &lt;em&gt;What’s New on Sharding in MongoDB 4.2&lt;/em&gt;. He was also an active member of the community paper selection committee (thank you!).&lt;/p&gt;
&lt;p&gt;You can &lt;a href="https://www.percona.com/live-agenda" target="_blank" rel="noopener noreferrer"&gt;download a full schedule from the agenda page&lt;/a&gt; and if you’d like to hear these talks, &lt;a href="https://www.percona.com/live-registration" target="_blank" rel="noopener noreferrer"&gt;register&lt;/a&gt; with CMESPEAK-ANTONIOS for a 20% discount!&lt;/p&gt;</content:encoded>
      <author>Antonios Giannopoulos</author>
      <category>Events</category>
      <category>MongoDB</category>
      <category>Percona Live Europe 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/09/percona-live-europe2019_hu_fe2c0816585a7259.jpg"/>
      <media:content url="https://percona.community/blog/2019/09/percona-live-europe2019_hu_9b866fc779a2a97a.jpg" medium="image"/>
    </item>
    <item>
      <title>Minimalist Tooling for MySQL/MariaDB DBAs</title>
      <link>https://percona.community/blog/2019/08/14/minimalist-tooling-for-mysql-mariadb-dbas/</link>
      <guid>https://percona.community/blog/2019/08/14/minimalist-tooling-for-mysql-mariadb-dbas/</guid>
      <pubDate>Wed, 14 Aug 2019 14:21:10 UTC</pubDate>
      <description>In my roles as a DBA at various companies, I generally found the tooling to be quite lacking. Everything from metrics collection, alerting, backup management; they were either missing, incomplete or implemented poorly. DBA-Tools was born from a desire to build backup tools that supported my needs in smaller/non-cloud environments. As BASH is easily the most common shell available out there on systems running MySQL® or MariaDB®, it was an easy choice.</description>
      <content:encoded>&lt;p&gt;In my roles as a DBA at various companies, I generally found the tooling to be quite lacking: everything from metrics collection and alerting to backup management was either missing, incomplete, or implemented poorly. &lt;a href="http://gitlab.com/gwinans/dba-tools" target="_blank" rel="noopener noreferrer"&gt;DBA-Tools&lt;/a&gt; was born from a desire to build backup tools that supported my needs in smaller, non-cloud environments. As BASH is easily the most common shell available on systems running MySQL® or MariaDB®, it was an easy choice.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/08/dba-tools-minimalist-mysql-tooling.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="how-dba-tools-came-to-be"&gt;How DBA-Tools came to be&lt;/h2&gt;
&lt;p&gt;While rebuilding my home-lab two years ago, I decided I wanted some simple tools for my database environment. Being a fan of NOT re-inventing the wheel, I thought I would peruse GitHub and Gitlab to see what others have put together. Nothing I saw looked quite like what I wanted. They all hit one or more of the checkboxes I wanted, but never all of them.&lt;/p&gt;
&lt;p&gt;My checklist when searching for tools included the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Extendable&lt;/li&gt;
&lt;li&gt;Configurable&lt;/li&gt;
&lt;li&gt;User Friendly&lt;/li&gt;
&lt;li&gt;Easy-to-Read&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The majority of scripts I found were contained within a single file and not easy to extend. They were universally easy to use, but my subjective requirement for code quality simply was not met. When I weighed the kits already available to me against the goal I had in mind, I came to the only reasonable conclusion I could muster:&lt;/p&gt;
&lt;p&gt;I would build my own tools!&lt;/p&gt;
&lt;h2 id="a-trip-down-release-lane-and-publicity"&gt;A trip down release lane and publicity&lt;/h2&gt;
&lt;p&gt;DBA-Tools was designed to be simple, extendable, and configurable. I wanted my kit to have very few external dependencies. BASH was the shell I chose for implementation, and I grew my vision from there. At the most fundamental level, I enjoy simplicity. I consider procedural programming to be just that: simple. This, thus far, remains my guiding philosophy for these tools.&lt;/p&gt;
&lt;p&gt;My first public release was on July 7th, 2019. The scripts only did single full backups and most of the secondary scripts only worked with MariaDB. I posted about it in one of the MySQL Slack groups. The tools were written for my lab use and, while I hoped others would find my offering useful, the lack of noticeable response did not bother me.&lt;/p&gt;
&lt;p&gt;The second release, 22 days later, marked full incremental support and ensured all the secondary scripts supported MySQL and MariaDB. I decided to call this one 2.0.0 and posted it again. I received my first “support” email that day, which spurred me to create better documentation.&lt;/p&gt;
&lt;p&gt;Later, I found out that Peter Zaitsev posted about the tools I wrote on his Twitter and LinkedIn pages on August 11th 2019. I can’t say thank you enough – I didn’t expect these tools to be used much beyond a small niche of home-lab engineers that might stumble across them.&lt;/p&gt;
&lt;h2 id="whats-next"&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;As of this writing, I’m working on adding extensible, easy-to-use alerting facilities to these tools. I’m always ready to accept PRs and help from anyone that would like to add their own features.&lt;/p&gt;
&lt;p&gt;Now, I just need to get significantly better with git.&lt;/p&gt;
&lt;p&gt;In any case, check them out at &lt;a href="http://gitlab.com/gwinans/dba-tools" target="_blank" rel="noopener noreferrer"&gt;http://gitlab.com/gwinans/dba-tools&lt;/a&gt; or read the Wiki at &lt;a href="https://gitlab.com/gwinans/dba-tools/wikis/home" target="_blank" rel="noopener noreferrer"&gt;https://gitlab.com/gwinans/dba-tools/wikis/home&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;–
&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@iurte?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Iker Urteaga&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/tools?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Geoff Winans</author>
      <category>DBA Tools</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2019/08/dba-tools-minimalist-mysql-tooling_hu_c1630e839fb0625.jpg"/>
      <media:content url="https://percona.community/blog/2019/08/dba-tools-minimalist-mysql-tooling_hu_615ce1dd4f0852a4.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL 5.6/Maria 10.1 : How we got from 30k qps to 101k qps.....</title>
      <link>https://percona.community/blog/2019/08/07/mysql-how-we-got-from-30k-qps-to-101k-qps/</link>
      <guid>https://percona.community/blog/2019/08/07/mysql-how-we-got-from-30k-qps-to-101k-qps/</guid>
      <pubDate>Wed, 07 Aug 2019 07:52:45 UTC</pubDate>
      <description>Late one evening, I was staring at one of our large MySQL installations and noticed the database was hovering around 7-10 run queue length (48 cores, ~500 gigs memory, fusionIO cards). I had been scratching my head on how to get more throughput from the database. This blog records the changes I made to tune performance in order to achieve a 300% better throughput in MySQL. I tested my theories on MySQL 5.6/Maria 10.1. While with 5.7 DBAs would turn to performance_schema for the supporting metrics, I hope that you find the process interesting nevertheless.</description>
      <content:encoded>&lt;p&gt;Late one evening, I was staring at one of our large MySQL installations and noticed the database was hovering around 7-10 run queue length (48 cores, ~500 gigs memory, fusionIO cards). I had been scratching my head on how to get more throughput from the database. This blog records the changes I made to tune performance in order to achieve a 300% better throughput in MySQL. I tested my theories on MySQL 5.6/Maria 10.1. While with 5.7 DBAs would turn to &lt;em&gt;performance_schema&lt;/em&gt; for the supporting metrics, I hope that you find the process interesting nevertheless.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/08/tuning-mysql-for-throughput.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="view-from-an-oracle-rdbms-dba"&gt;View from an Oracle RDBMS DBA…&lt;/h2&gt;
&lt;p&gt;For context, I came to MySQL from a background as an Oracle RDBMS DBA, and this informs my expectations. For this exercise, unlike with Oracle RDBMS, I had no access to view &lt;em&gt;wait events&lt;/em&gt; so that I could see where my database was struggling. At least, no access in MySQL 5.6/Maria 10.1 without taking a performance hit by using &lt;em&gt;performance_schema&lt;/em&gt;, which was less efficient in these earlier versions.&lt;/p&gt;
&lt;p&gt;In fact, overall, I find that MySQL has far fewer bells and whistles than Oracle at the database level. I constantly whine to my teammates about how MySQL provides fewer knobs than Oracle, even for something as simple as creating an index. Without counting, I can confidently say there are over 50 permutations and combinations I could use in Oracle: for example initrans, pctfree, pct, reverse, function-based, with or without gathering statistics… Admittedly, some may be obsolete and discarded in recent versions, but you get my point. :)&lt;/p&gt;
&lt;p&gt;Oracle allows DBAs to tune blocks in an index or a table, along with their physical characteristics… all the way to pinning tables in the buffer pool, or tuning specific latches used for the buffer cache so one can get rid of cache buffer chains waits with the help of a hidden parameter. :)&lt;/p&gt;
&lt;p&gt;Anyway, I digress. Back to the challenges of MySQL!&lt;/p&gt;
&lt;h2 id="tuning-mysql-a-process"&gt;Tuning MySQL… a process&lt;/h2&gt;
&lt;p&gt;Given the version of MySQL that posed this challenge, one of the few tools you have access to is the output of &lt;code&gt;show engine innodb status&lt;/code&gt;. While that holds a wealth of information, I have yet to find a single source of good documentation for each of the metrics shown in the report. I repeatedly saw these &lt;em&gt;waits&lt;/em&gt; in the SEMAPHORES section:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;buf0buf.c
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;row0rel.cc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;btr0btr.c&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
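&lt;p&gt;On these versions, the SEMAPHORES section can be fished out of the status output with a quick filter. A sketch (the report below is a fabricated fragment for illustration; against a live server you would pipe in &lt;code&gt;mysql -e 'SHOW ENGINE INNODB STATUS\G'&lt;/code&gt; instead of the here-document):&lt;/p&gt;

```shell
# Sketch: print only the SEMAPHORES section of the InnoDB status report.
# The here-document stands in for real output; the section runs from the
# SEMAPHORES header up to the next section header (TRANSACTIONS).
sed -n '/^SEMAPHORES$/,/^TRANSACTIONS$/p' <<'EOF'
BACKGROUND THREAD
srv_master_thread loops: 1 srv_active
SEMAPHORES
OS WAIT ARRAY INFO: reservation count 42
--Thread 1 has waited at buf0buf.c line 2529 for 0.00 seconds the semaphore:
TRANSACTIONS
Trx id counter 7312
EOF
```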
&lt;p&gt;Naturally, I started with the reference books available on MySQL’s website, traversed countless blogs, and sniffed through the code. Only after I had looked at multiple sources did I begin to get the gist of the metrics available in the status report. My research over the next few nights led me to a few different parameters, which ultimately helped me find the answers I needed.&lt;/p&gt;
&lt;h2 id="making-the-changes-that-mattered"&gt;Making the changes that mattered&lt;/h2&gt;
&lt;p&gt;Here is a quick snippet of the settings I changed from the default (or lower) values set by a previous DBA.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_instances=32
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;table_open_cache_instances=12
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;table_open_cache=8000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;table_definition_cache=12000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_change_buffer_size=5&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Some other parameters that I changed are shown next. Although these are very scenario-specific, each helped address one or another of the performance problems I was encountering:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_purge_batch_size=5000 
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;optimizer_search_depth=0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_file_size=32g
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_buffer_size=1G&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Plus, I set &lt;code&gt;innodb_adaptive_hash_index_parts&lt;/code&gt; to 32. &lt;em&gt;Note:&lt;/em&gt; this parameter is called &lt;code&gt;innodb_adaptive_hash_index_partitions&lt;/code&gt; in some db versions.&lt;/p&gt;
&lt;p&gt;I will try and explain them to the best of my knowledge and understanding.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;innodb_buffer_pool_instances&lt;/code&gt; had to be increased to allow a greater number of latches to access the buffer pool. Ideally we want to keep this parameter either equal to or a little lower than the number of cores. In this case we set this at half of the number of cores. We have other boxes in the farm with fewer cores and prefer to keep to standard configs and not have snowflakes!&lt;/p&gt;
&lt;p&gt;&lt;code&gt;table_open_cache_instances&lt;/code&gt; provided a similar performance improvement for all queries accessing table metadata. If you are a heavy user of the adaptive hash index, splitting &lt;code&gt;innodb_adaptive_hash_index_parts&lt;/code&gt;/&lt;code&gt;innodb_adaptive_hash_index_partitions&lt;/code&gt; (depending on your db version) into a higher number of partitions helps a lot with concurrency. It splits the hash index into different partitions and removes contention when accessing hot tables.&lt;/p&gt;
&lt;p&gt;We reduced &lt;code&gt;innodb_change_buffer_max_size&lt;/code&gt; to 5% from its default of 25% because the change buffer never used more than ~400MB; at the default setting it had ~90GB allocated.&lt;/p&gt;
&lt;p&gt;This freed room for a lot more data and indexes to fit into the buffer pool.&lt;/p&gt;
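&lt;p&gt;As a quick sketch of the arithmetic (the ~360GB buffer pool size here is my assumption, chosen to be consistent with the ~90GB figure above):&lt;/p&gt;

```shell
# Sketch: change buffer ceiling at the default 25% vs the new 5%.
# POOL_GB is an assumed buffer pool size, not a number from the post.
POOL_GB=360
for PCT in 25 5; do
  echo "${PCT}% of ${POOL_GB}GB pool -> $(( POOL_GB * PCT / 100 ))GB change buffer ceiling"
done
```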
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Overall, this set of parameter changes worked for us and for our workload, and we saw a great performance benefit. It was the first time we ever surpassed 100k qps without changing the code or the hardware. Please make sure you understand what each parameter does, and test your workload against any change.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;—&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href="https://unsplash.com/search/photos/raspberry?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Photo by &lt;/a&gt;&lt;a href="https://unsplash.com/@joaosilas?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;João Silas&lt;/a&gt;&lt;a href="https://unsplash.com/search/photos/raspberry?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt; on &lt;/a&gt;&lt;a href="https://unsplash.com/search/photos/mystery?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. Percona has not edited or tested the technical content. Views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Gurnish Anand</author>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>performance</category>
      <media:thumbnail url="https://percona.community/blog/2019/08/tuning-mysql-for-throughput_hu_771cfd6c2edbf6b8.jpg"/>
      <media:content url="https://percona.community/blog/2019/08/tuning-mysql-for-throughput_hu_222ac1377e0a2344.jpg" medium="image"/>
    </item>
    <item>
      <title>How to Build a Percona Server "Stack" on a Raspberry Pi 3+</title>
      <link>https://percona.community/blog/2019/08/01/how-to-build-a-percona-server-stack-on-a-raspberry-pi-3/</link>
      <guid>https://percona.community/blog/2019/08/01/how-to-build-a-percona-server-stack-on-a-raspberry-pi-3/</guid>
      <pubDate>Thu, 01 Aug 2019 12:50:36 UTC</pubDate>
      <description>The blog post How to Compile Percona Server for MySQL 5.7 in Raspberry Pi 3 by Walter Garcia, inspired me to create an updated install of Percona Server for the Raspberry Pi 3+.</description>
      <content:encoded>&lt;p&gt;The blog post &lt;a href="https://www.percona.com/blog/2018/08/22/how-to-compile-percona-server-for-mysql-5-7-in-raspberry-pi-3/" target="_blank" rel="noopener noreferrer"&gt;&lt;em&gt;How to Compile Percona Server for MySQL 5.7 in Raspberry Pi 3&lt;/em&gt;&lt;/a&gt; by Walter Garcia, inspired me to create an updated install of Percona Server for the &lt;a href="https://www.raspberrypi.org/products/" target="_blank" rel="noopener noreferrer"&gt;Raspberry Pi 3+&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/07/Percona-installation-on-Raspberry-Pi-3.jpg" alt="Percona installation on Raspberry Pi 3+" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This how-to post covers installing from source and being able to use &lt;a href="https://www.percona.com/software/mysql-database" target="_blank" rel="noopener noreferrer"&gt;Percona Server for MySQL&lt;/a&gt; in any of your maker projects. I have included everything you need to have a complete Percona Server, ready to store data collection for your weather station, your GPS data, or any other project you can think of that would require data collection in a database.&lt;/p&gt;
&lt;p&gt;My years of hands-on support of Percona Server enable me to customize the install a bit. I wanted to build a full Percona “Stack” including XtraBackup, and Percona Toolkit.&lt;/p&gt;
&lt;h2 id="hardware-and-software"&gt;Hardware and Software&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Tested on a Raspberry PI 3B and 3B+&lt;/li&gt;
&lt;li&gt;OS is Raspbian Buster. You can download it here: &lt;a href="https://www.raspberrypi.org/downloads/raspbian/" target="_blank" rel="noopener noreferrer"&gt;https://www.raspberrypi.org/downloads/raspbian/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;I chose the option: Raspbian Buster with Desktop.&lt;/li&gt;
&lt;li&gt;64GB SD card. Not strictly required, but I would not suggest less than 32GB. For best performance, use an SD card rated at 90-100MB per second.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="the-step-by-step-guide"&gt;The Step-by-Step Guide&lt;/h2&gt;
&lt;p&gt;Let’s get on and build!&lt;/p&gt;
&lt;h3 id="1-prep-your-raspberry-pi"&gt;1. Prep Your Raspberry PI&lt;/h3&gt;
&lt;p&gt;You will notice I use sudo rather often, even during the make and cmake steps. I found that running the install as the default pi user gave me issues. Using sudo for root-based commands is a best practice that I always try to follow.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get upgrade
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get install screen cmake debhelper autotools-dev libaio-dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;automake libtool bison bzr libgcrypt20-dev flex autoconf libtool libncurses5-dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mariadb-client-10.0 libboost-dev libreadline-dev libcurl4-openssl-dev libtirpc-dev&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create a swap file; it is very much needed for these two compiles.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo dd if=/dev/zero of=/swapfile2GB bs=1M count=2048
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkswap /swapfile2GB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo swapon /swapfile2GB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chmod 0600 /swapfile2GB&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="2-build-percona-server-for-mysql"&gt;2. Build Percona Server for MySQL&lt;/h3&gt;
&lt;p&gt;This will take about 3.5 to 4 hours to run. First, download the Percona Server 5.7.26 source tarball:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.26-29/source/tarball/percona-server-5.7.26-29.tar.gz&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Extract to /home/pi&lt;/p&gt;
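&lt;p&gt;The extraction itself is a single tar command. A self-contained sketch (a dummy archive is built in a temp directory so these commands can be run anywhere; for the real source only the &lt;code&gt;tar -xzf&lt;/code&gt; step matters):&lt;/p&gt;

```shell
# Sketch: unpack a source tarball. A dummy archive stands in for the real
# download; for the real build you would run only the extraction step,
# e.g. tar -xzf percona-server-5.7.26-29.tar.gz -C /home/pi
WORK=$(mktemp -d) && cd "$WORK"
mkdir -p percona-server-5.7.26-29
echo demo > percona-server-5.7.26-29/README
tar -czf percona-server-5.7.26-29.tar.gz percona-server-5.7.26-29
rm -r percona-server-5.7.26-29

tar -xzf percona-server-5.7.26-29.tar.gz   # the actual extraction step
ls percona-server-5.7.26-29
```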
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd percona-server-5.7.26-29
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo cmake -DDOWNLOAD_BOOST=ON -DWITH_BOOST=$HOME/boost .
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo make -j3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo make install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
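&lt;p&gt;The &lt;code&gt;-j3&lt;/code&gt; above roughly matches the Pi 3's four cores while leaving one free. If you prefer to derive the job count rather than hard-code it, a sketch (assuming coreutils' &lt;code&gt;nproc&lt;/code&gt; is available, as it is on Raspbian):&lt;/p&gt;

```shell
# Sketch: pick a parallel-make job count of core count minus one,
# falling back to 1 on single-core machines.
JOBS=$(( $(nproc) - 1 ))
if [ "$JOBS" -lt 1 ]; then JOBS=1; fi
echo "would run: make -j${JOBS}"
```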
&lt;h3 id="3-build-percona-xtrabackup"&gt;3. Build Percona XtraBackup&lt;/h3&gt;
&lt;p&gt;This will take about 3 hours.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get install libcurl4-gnutls-dev libev-dev libev4&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note: installing the package libcurl4-gnutls-dev will remove the package libcurl4-openssl-dev. I had compile failures for XtraBackup when libcurl4-openssl-dev was installed. Next, download XtraBackup 2.4.14:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://www.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.14/source/tarball/percona-xtrabackup-2.4.14.tar.gz&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Extract to /home/pi&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd percona-xtrabackup-2.4.14
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo cmake -DWITH_BOOST=$HOME/boost -DBUILD_CONFIG=xtrabackup_release -DWITH_MAN_PAGES=OFF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo make -j3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo make install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="4-build-percona-toolkit"&gt;4. Build Percona Toolkit&lt;/h3&gt;
&lt;p&gt;Done in a few minutes.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;wget https://www.percona.com/downloads/percona-toolkit/3.0.13/source/tarball/percona-toolkit-3.0.13.tar.gz&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Extract to /home/pi&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd percona-toolkit-3.0.13
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perl Makefile.PL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;make test
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo make install&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="5-create-the-mysqsl-user"&gt;5. Create the mysqsl user&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo useradd mysql -d /var/lib/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create directories for mysql to use.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir -p /var/lib/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir /var/lib/mysql/binlog
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir /var/lib/mysql/tmp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo mkdir /var/log/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Change ownership of directories to mysql user.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown -R mysql:mysql /var/lib/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown mysql:mysql /var/log/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown -R mysql:mysql /usr/local/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="6-prep-mycnf"&gt;6. Prep my.cnf&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo rm -fR /etc/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I like to remove any leftover mysql directories or files in /etc before I create my file in the next step.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo vi /etc/my.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Add these lines, below, to your new my.cnf file.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;port = 3306
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;socket = /var/lib/mysql/mysql.sock
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pid-file = /var/lib/mysql/mysqld.pid
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;basedir = /usr/local/mysql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;datadir = /var/lib/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;general_log_file = /var/log/mysql/mysql-general.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log-error = /var/log/mysql/mysqld.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log_file = /var/log/mysql/log/slow_query.log
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slow_query_log = 0 # Slow query log off
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lc-messages-dir = /usr/local/mysql/share
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;plugin_dir = /usr/local/mysql/lib/mysql/plugin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;skip-external-locking
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;log-bin = /var/lib/mysql/binlog/mysql-bin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sync_binlog = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;expire_logs_days = 5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;server-id = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;binlog_format = mixed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_data_home_dir = /var/lib/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_group_home_dir = /var/lib/mysql/data
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_files_in_group = 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_size = 128M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_file_size = 16M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_log_buffer_size = 8M
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_log_at_trx_commit = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_lock_wait_timeout = 50
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_flush_method = O_DIRECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_file_per_table = 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;innodb_buffer_pool_instances = 1&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Save the my.cnf file.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo chown mysql:mysql /etc/my.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
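&lt;p&gt;Before the first start, it is worth confirming that every directory the my.cnf points at actually exists. A small sketch (the paths are the ones created in step 5):&lt;/p&gt;

```shell
# Sketch: verify the directories referenced in my.cnf before initializing.
for DIR in /var/lib/mysql/data /var/lib/mysql/binlog /var/lib/mysql/tmp /var/log/mysql; do
  if [ -d "$DIR" ]; then echo "ok: $DIR"; else echo "MISSING: $DIR"; fi
done
```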
&lt;h3 id="7-initialize-the-database-files"&gt;7. Initialize the database files&lt;/h3&gt;
&lt;p&gt;At this point, you can initialize the database files:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/var/lib/mysql/data&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="8-start-percona-server"&gt;8. Start Percona Server&lt;/h3&gt;
&lt;p&gt;This is the exciting part: we are going to start Percona Server.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &amp;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If everything went well, you should see the following lines in /var/log/mysql/mysqld.log:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2019-06-24T19:56:52.071765Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2019-06-24T19:56:52.072251Z 0 [Note] IPv6 is available.
2019-06-24T19:56:52.072385Z 0 [Note]   - '::' resolves to '::';
2019-06-24T19:56:52.072770Z 0 [Note] Server socket created on IP: '::'.
2019-06-24T19:56:52.132587Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190624 15:56:52
2019-06-24T19:56:52.136886Z 0 [Note] Failed to start slave threads for channel ''
2019-06-24T19:56:52.178087Z 0 [Note] Event Scheduler: Loaded 0 events
2019-06-24T19:56:52.179153Z 0 [Note] /usr/local/mysql/bin/mysqld: ready for connections.
Version: '5.7.26-29-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306 Source distribution&lt;/code&gt;&lt;/pre&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;### 9. Test login to Percona Server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;mysql -u root --socket=/var/lib/mysql/mysql.sock&lt;/code&gt;&lt;/p&gt;
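&lt;p&gt;Since the server was initialized with --initialize-insecure, the root account has no password at this point. If you intend to keep this server, you can create your own privileged account while logged in; a minimal sketch, where the user name, host, and password are placeholders and not part of the original instructions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-- create a new administrative account (choose your own name and password)
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
-- only after verifying the new account can log in:
DROP USER 'root'@'localhost';&lt;/code&gt;&lt;/pre&gt;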
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;If you plan on keeping this as an active Percona Server I **strongly advise** you to remove the root user and create your own privileged user.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;First, stop Percona Server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;/usr/local/mysql/bin/mysqladmin -u root --socket=/var/lib/mysql/mysql.sock shutdown&lt;/code&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Create the mysqld.server and enable it.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;sudo vi /etc/systemd/system/mysqld.service&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[Unit]
Description=Percona Server Version 5.7.x
After=syslog.target
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf
TimeoutSec=300
WorkingDirectory=/usr/local/mysql/bin
#Restart=on-failure
#RestartPreventExitStatus=1
PrivateTmp=true&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;sudo systemctl enable mysqld.service&lt;/code&gt;&lt;/p&gt;
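&lt;p&gt;Before rebooting, you can verify the unit with standard systemctl commands (a sanity-check sketch, not part of the original post):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# make systemd pick up the new unit file
sudo systemctl daemon-reload
# start the service now and confirm it is active
sudo systemctl start mysqld.service
sudo systemctl status mysqld.service&lt;/code&gt;&lt;/pre&gt;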
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Now if everything was done correctly you should be able to reboot your Pi and Percona Server will auto start on OS Boot.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;This is it, you now have an entire Percona Server for MySQL up and running, with XtraBackup for your daily backups and Percona Toolkit to assist you with daily and complicated tasks. If you try this out, I'd love to hear about the uses you make of your Percona Server on a Raspberry Pi.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;_—_
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;_Image based on Photo by [Hector Bermudez](https://unsplash.com/@hectorbermudez?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/raspberry?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText)_
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;_The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up._&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</content:encoded>
      <author>Wayne Leutwyler</author>
      <category>MySQL</category>
      <category>Percona Server for MySQL</category>
      <category>Raspberry Pi</category>
      <category>Toolkit</category>
      <media:thumbnail url="https://percona.community/blog/2019/07/Percona-installation-on-Raspberry-Pi-3_hu_bac5ba8285ff105a.jpg"/>
      <media:content url="https://percona.community/blog/2019/07/Percona-installation-on-Raspberry-Pi-3_hu_c834587c85969757.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL Optimizer: Naughty Aberrations on Queries Combining WHERE, ORDER BY and LIMIT</title>
      <link>https://percona.community/blog/2019/07/29/mysql-optimizer-naughty-aberrations-on-queries-combining-where-order-by-and-limit/</link>
      <guid>https://percona.community/blog/2019/07/29/mysql-optimizer-naughty-aberrations-on-queries-combining-where-order-by-and-limit/</guid>
      <pubDate>Mon, 29 Jul 2019 11:50:51 UTC</pubDate>
      <description>Sometimes, the MySQL Optimizer chooses a wrong plan, and a query that should execute in less than 0.1 second ends up running for 12 minutes! This is not a new problem: bugs about this can be traced back to 2014, and a blog post on this subject was published in 2015. But even if this is old news, because this problem recently came yet again to my attention, and because this is still not fixed in MySQL 5.7 and 8.0, this is a subject worth writing about.</description>
      <content:encoded>&lt;p&gt;Sometimes, the MySQL Optimizer chooses a wrong plan, and a query that should execute in less than 0.1 second ends up running for 12 minutes! This is not a new problem: bugs about this can be traced back to 2014, and a blog post on this subject was published in 2015. But even if this is old news, because this problem recently came yet again to my attention, and because this is still not fixed in MySQL 5.7 and 8.0, this is a subject worth writing about.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/07/mysql-optimizer-choose-wrong-path.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="the-mysql-optimizer"&gt;The MySQL Optimizer&lt;/h2&gt;
&lt;p&gt;Before looking at the problematic query, we have to say a few words about the optimizer. The &lt;a href="https://dev.mysql.com/doc/internals/en/optimizer.html" target="_blank" rel="noopener noreferrer"&gt;Query Optimizer&lt;/a&gt; is the part of query execution that chooses the query plan. A &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/execution-plan-information.html" target="_blank" rel="noopener noreferrer"&gt;Query Execution Plan&lt;/a&gt; is the way MySQL chooses to execute a specific query. It includes index choices, join types, table query order, temporary table usage, sorting type … You can get the execution plan for a specific query using the &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/explain.html" target="_blank" rel="noopener noreferrer"&gt;EXPLAIN command&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="a-case-in-question"&gt;A Case in Question&lt;/h2&gt;
&lt;p&gt;Now that we know what the Query Optimizer and a Query Execution Plan are, I can introduce you to the table we are querying. The SHOW CREATE TABLE output for our table is below.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SHOW CREATE TABLE _test_jfg_201907G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Table: _test_jfg_201907
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Create Table: CREATE TABLE `_test_jfg_201907` (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`str1` varchar(150) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`id1` int(10) unsigned NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`id2` bigint(20) unsigned DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`str2` varchar(255) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[...many more id and str fields...]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`create_datetime` datetime NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`update_datetime` datetime DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (`id`),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `key1` (`id1`,`id2`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) ENGINE=InnoDB AUTO_INCREMENT=_a_big_number_ DEFAULT CHARSET=utf8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And this is not a small table (it is not very big either though…):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# ls -lh _test_jfg_201907.ibd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-rw-r----- 1 mysql mysql 11G Jul 23 13:21 _test_jfg_201907.ibd&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now we are ready for the problematic query (I ran &lt;code&gt;PAGER cat &gt; /dev/null&lt;/code&gt; beforehand to skip printing the result):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM _test_jfg_201907
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE id1 = @v AND id2 IS NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER BY id DESC LIMIT 20;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;20 rows in set (27.22 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Hum, this query takes a long time (27.22 sec) considering that the table has an index on id1 and id2. Let’s check the query execution plan:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; EXPLAIN SELECT * FROM _test_jfg_201907
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE id1 = @v AND id2 IS NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER BY id DESC LIMIT 20\G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;id: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;select_type: SIMPLE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;table: _test_jfg_201907
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;partitions: NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;type: index
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;possible_keys: key1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;key: PRIMARY
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;key_len: 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ref: NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rows: 13000
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;filtered: 0.15
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Extra: Using where
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set, 1 warning (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What? The query is not using the index key1, but is scanning the whole table (key: PRIMARY in the above EXPLAIN)! How can this be? The short explanation is that the optimizer thinks — or should I say hopes — that scanning the whole table (which is already sorted by the id field) will find the limited rows quickly enough, and that this will avoid a sort operation. So by trying to avoid a sort, the optimizer ends up losing time scanning the table.&lt;/p&gt;
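&lt;p&gt;To see what the optimizer considered, you can enable the optimizer trace, a standard MySQL 5.6+ feature (a sketch added here for illustration, not part of the original post), although the trace does not always expose the cost calculation behind the switch to an ordering index:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mysql&gt; SET optimizer_trace="enabled=on";
mysql&gt; SELECT * FROM _test_jfg_201907
    -&gt;   WHERE id1 = @v AND id2 IS NOT NULL
    -&gt;   ORDER BY id DESC LIMIT 20;
mysql&gt; SELECT trace FROM information_schema.OPTIMIZER_TRACE\G
mysql&gt; SET optimizer_trace="enabled=off";&lt;/code&gt;&lt;/pre&gt;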
&lt;h2 id="some-solutions"&gt;Some Solutions&lt;/h2&gt;
&lt;p&gt;How can we solve this? The first solution is to hint MySQL to use key1, as shown below. Now the query is almost instant, but this is not my favourite solution because if we drop the index, or if we change its name, the query will fail.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM _test_jfg_201907 USE INDEX (key1)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE id1 = @v AND id2 IS NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER BY id DESC LIMIT 20;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;20 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A more elegant, but still very hack-ish, solution is to prevent the optimizer from using an index for the ORDER BY. This can be achieved with the modified ORDER BY clause below (thanks to &lt;a href="http://code.openark.org/blog/" target="_blank" rel="noopener noreferrer"&gt;Shlomi Noach&lt;/a&gt; for suggesting this solution on a MySQL Community Chat). This is the solution I prefer so far, even if it is still somewhat of a hack.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT * FROM _test_jfg_201907
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE id1 = @v AND id2 IS NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER BY (id+0) DESC LIMIT 20;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;20 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;A third solution is to use the &lt;a href="https://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/" target="_blank" rel="noopener noreferrer"&gt;Late Row Lookups&lt;/a&gt; trick. Even if the post about this trick is 10 years old, it is still useful — thanks to my colleague Michal Skrzypecki for bringing it to my attention. This trick basically forces the optimizer to choose the right plan because the query is modified with the intention of making the plan explicit. This is an elegant hack, but as it makes the query more complicated to understand, I prefer not to use it.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT y.* FROM (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT id FROM _test_jfg_201907
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;WHERE id1 = @v AND id2 IS NOT NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER BY id DESC LIMIT 20) x
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;JOIN _test_jfg_201907 y ON x.id = y.id
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ORDER by y.id DESC;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;20 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="the-ideal-solution"&gt;The ideal solution…&lt;/h2&gt;
&lt;p&gt;Well, the best solution would be to fix the bugs below. I claim Bug#74602 is not fixed even if it is marked as such in the bug system, but I will not make too much noise about this as Bug#78612 also draws attention to this problem:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bugs.mysql.com/bug.php?id=74602" target="_blank" rel="noopener noreferrer"&gt;Bug#74602: Optimizer prefers wrong index because of low_limit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bugs.mysql.com/bug.php?id=78612" target="_blank" rel="noopener noreferrer"&gt;Bug#78612: Optimizer chooses wrong index for ORDER BY&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-1653" target="_blank" rel="noopener noreferrer"&gt;PS-1653: Optimizer chooses wrong index for ORDER BY DESC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://perconadev.atlassian.net/browse/PS-4935" target="_blank" rel="noopener noreferrer"&gt;PS-4935: Optimizer choosing full table scan (instead of index range scan) on query order by primary key with limit.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;PS-4935 is a duplicate of PS-1653, which I opened a few months ago. In that report, I mention a query that takes 12 minutes because of a bad choice by the optimizer (with the good plan, the query takes less than 0.1 second).&lt;/p&gt;
&lt;p&gt;One last thing before ending this post: I wrote above that I would give a longer explanation about the reason for this bad choice by the optimizer. Well, this longer explanation has already been written by Domas Mituzas in 2015, so I am referring you to his &lt;a href="https://dom.as/2015/07/30/on-order-by-optimization/" target="_blank" rel="noopener noreferrer"&gt;on ORDER BY optimization&lt;/a&gt; post for more details.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;–&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@jamie452?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Jamie Street&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/wrong?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.&lt;/em&gt;&lt;/p&gt;
&lt;div class="comments"&gt;
&lt;h2 id="6-comments"&gt;6 Comments&lt;/h2&gt;
&lt;div class="comment"&gt;
&lt;div class="info"&gt;
&lt;a href="http://oysteing.blogspot.com/"&gt;Øystein Grøvlen&lt;/a&gt;
&lt;span&gt;July 29, 2019 at 9:43 am&lt;/span&gt;
&lt;/div&gt;
Hi JF,
&lt;p&gt;I think this behavior may be expected if there is a correlation between the columns. For example, id2 may be more likely to be NULL for high (recent?) values of id. The MySQL optimizer does not have any statistics on how columns are correlated. Hence, it is not able to effectively determine how many rows it needs to read to find the first 20 rows that satisfy the WHERE clause.&lt;/p&gt;
&lt;p&gt;Bug#74602 describes a scenario where column values were not correlated. This particular problem was fixed in 5.7.
Bug#78612 seems to be caused by the use of a prefix index, which does not seem to be relevant here.&lt;/p&gt;
&lt;p&gt;However, there are probably other bug reports that describe the problem you are facing. In order to address this problem, I think MySQL needs to add statistics on correlation between columns.&lt;/p&gt;
&lt;p&gt;Unfortunately, as Domas describes, the optimizer trace does not contain any information on the cost calculations made when it decides to switch to an index that provides sorting. Hence, it is not straightforward to verify why this choice was made.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="comment"&gt;
&lt;div class="info"&gt;
&lt;a href="https://jfg-mysql.blogspot.com/"&gt;Jean-François Gagné&lt;/a&gt;
&lt;span&gt;July 29, 2019 at 9:59 am&lt;/span&gt;
&lt;/div&gt;
Hi Øystein, thanks for the details about Bug#74602 and Bug#78612.
&lt;blockquote&gt;
&lt;p&gt;In order to address this problem, I think MySQL needs to add statistics on correlation between columns.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This might be a solution, but I am sure there are others. Tracking correlations might be very complicated. A simpler solution might be to identify plans that are “probabilistic” (like the worst case I show in this post) and to not let queries using those plans run for too long before trying an alternative plan. Also, for plans that have a very bad worst case (like the one in this post), running both plans in parallel and killing the other when one completes might be another way to avoid this problem.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="comment"&gt;
&lt;div class="info"&gt;
&lt;a href="http://oysteing.blogspot.com/"&gt;Øystein Grøvlen&lt;/a&gt;
&lt;span&gt;July 30, 2019 at 5:13 am&lt;/span&gt;
&lt;/div&gt;
Hi,
&lt;p&gt;I think it is an interesting idea to let the optimizer have a fallback plan, in case its original estimates are off. The challenge is how to detect in time that the estimates are off. Maybe it would be easier to just switch to the safer plan if the execution takes longer than the estimate for the safe plan. (Unfortunately, it is not straightforward to translate query cost to execution time in MySQL.) Another aspect is diagnostics: there must be a way for the user to determine which plan was actually used.&lt;/p&gt;
&lt;p&gt;Maybe the optimizer could be a bit more cautious, and choose a safe plan over a riskier but potentially quicker plan. In your case, there will be a pretty accurate estimate for the number of rows that need to be read when using the secondary index, while how many rows need to be read using the primary index depends on how the interesting rows are distributed.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="comment"&gt;
&lt;div class="info"&gt;
&lt;a href=""&gt;Jeremy&lt;/a&gt;
&lt;span&gt;July 31, 2019 at 4:08 pm&lt;/span&gt;
&lt;/div&gt;
While in an ideal world I would like to see this fixed, my solution is to turn towards the application. It is easier to grow application servers than database servers. After all, with solutions like Nginx, one can easily have a farm of whatever application servers (PHP, Python, Java) and just keep adding more.
&lt;p&gt;I generally keep queries super simple and let the application server(s) do the heavy lifting, for example sorting. I almost never ask the database server(s) to sort in my own applications; that is wasting DB cycles on something the application layer can do quite easily and faster. So I just dump the raw data from the DB to the app.&lt;/p&gt;
&lt;p&gt;Thus keeping in mind: “The fastest query is the query you do NOT run”. I prefer to dump as much heavy lifting onto the application and let the database layer handle as little as possible. As stated I would rather spin up another app server than a DB server.&lt;/p&gt;
&lt;p&gt;Of course I understand in some limited cases this isn’t always possible. Still in the vast majority of deployments there is an application layer. Also one could turn toward solutions like ProxySQL to cache bad queries although that doesn’t address the bug.&lt;/p&gt;
&lt;p&gt;Finally, as stated, I would like to see this bug fixed. However I still wouldn’t ask the DB to sort in most cases.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="comment"&gt;
&lt;div class="info"&gt;
&lt;a href=""&gt;s&lt;/a&gt;
&lt;span&gt;August 3, 2019 at 4:27 am&lt;/span&gt;
&lt;/div&gt;
also see &lt;a href="https://bugs.mysql.com/bug.php?id=95543"&gt;https://bugs.mysql.com/bug.php?id=95543&lt;/a&gt; (optimizer prefers index for order by rather than filtering – (70x slower))
&lt;/div&gt;
&lt;div class="comment"&gt;
&lt;div class="info"&gt;
&lt;a href="https://jfg-mysql.blogspot.com/"&gt;Jean-François Gagné&lt;/a&gt;
&lt;span&gt;July 28, 2021 at 5:48 pm&lt;/span&gt;
&lt;/div&gt;
Another blog post on the same subject (with a patch that was merged in 5.7 and 8.0):
&lt;a href="https://blog.jcole.us/2019/09/30/reconsidering-access-paths-for-index-ordering-a-dangerous-optimization-and-a-fix/"&gt;https://blog.jcole.us/2019/09/30/reconsidering-access-paths-for-index-ordering-a-dangerous-optimization-and-a-fix/&lt;/a&gt;
&lt;/div&gt;
&lt;/div&gt;</content:encoded>
      <author>Jean-François Gagné</author>
      <category>bugs</category>
      <category>MySQL</category>
      <category>optimizer</category>
      <category>performance</category>
      <media:thumbnail url="https://percona.community/blog/2019/07/mysql-optimizer-choose-wrong-path_hu_e5b755342fc0fd9a.jpg"/>
      <media:content url="https://percona.community/blog/2019/07/mysql-optimizer-choose-wrong-path_hu_75a34687f5d0d9cd.jpg" medium="image"/>
    </item>
    <item>
      <title>Impact of innodb_file_per_table Option On Crash Recovery Time</title>
      <link>https://percona.community/blog/2019/07/23/impact-of-innodb_file_per_table-option-on-crash-recovery-time/</link>
      <guid>https://percona.community/blog/2019/07/23/impact-of-innodb_file_per_table-option-on-crash-recovery-time/</guid>
      <pubDate>Tue, 23 Jul 2019 13:34:14 UTC</pubDate>
      <description>Starting with MySQL 5.6, innodb_file_per_table is enabled by default and all data is stored in separate tablespaces. This provides some advantages.</description>
      <content:encoded>&lt;p&gt;Starting with MySQL 5.6, innodb_file_per_table is enabled by default and all data is stored in separate tablespaces. This provides some &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-multiple-tablespaces.html" target="_blank" rel="noopener noreferrer"&gt;advantages&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/07/logo-mysql-170x115.png" alt="MySQL Logo" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I will highlight some of them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can reclaim disk space when truncating or dropping a table stored in a file-per-table tablespace. Truncating or dropping tables stored in the shared &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_system_tablespace" title="system tablespace" target="_blank" rel="noopener noreferrer"&gt;system tablespace&lt;/a&gt; creates free space internally in the system tablespace data files (&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_ibdata_file" title="ibdata file" target="_blank" rel="noopener noreferrer"&gt;ibdata files&lt;/a&gt;) which can only be used for new InnoDB data.&lt;/li&gt;
&lt;li&gt;You can store specific tables on separate storage devices, for I/O optimization, space management, or backup purposes.&lt;/li&gt;
&lt;li&gt;You can monitor table size at a file system level without accessing MySQL.&lt;/li&gt;
&lt;li&gt;Backups taken with &lt;a href="https://www.percona.com/software/mysql-database/percona-xtrabackup" target="_blank" rel="noopener noreferrer"&gt;Percona XtraBackup&lt;/a&gt; take less space (compared with a physical backup of the ibdata files).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="problem"&gt;Problem&lt;/h3&gt;
&lt;p&gt;There are disadvantages &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-multiple-tablespaces.html" target="_blank" rel="noopener noreferrer"&gt;described&lt;/a&gt; in the MySQL manual, but I found another one that is not mentioned: if you have a huge number of tables, the crash recovery process may take a long time. During crash recovery, the MySQL daemon scans the .ibd files:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-07-16 21:00:04 6766 [Note] InnoDB: Starting crash recovery.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-07-16 21:00:04 6766 [Note] InnoDB: Reading tablespace information from the .ibd files...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Started at Jul 16 23:46:52:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Version: '5.6.39-83.1-log' socket: ......&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;While watching MySQL during startup, I found that it opens the .ibd files one by one. In my test case there were 1,400,000+ tables, and it took 02:46:48 just to scan them.&lt;/p&gt;
&lt;p&gt;To prevent such a long downtime we decided to move all the tables to shared tablespaces.&lt;/p&gt;
&lt;h3 id="solution--moving-tables-to-shared-tablespaces"&gt;Solution – moving tables to shared tablespaces&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Make sure that you have enough space on disk.&lt;/li&gt;
&lt;li&gt;Modify my.cnf: set innodb_file_per_table = 0 and define the shared tablespace data files via innodb_data_file_path.&lt;/li&gt;
&lt;li&gt;Restart MySQL and wait until it creates the data files.&lt;/li&gt;
&lt;li&gt;Move your InnoDB tables to shared tablespaces.&lt;/li&gt;
&lt;/ol&gt;
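&lt;p&gt;For step 2, a minimal my.cnf fragment might look like this (the file names and sizes below are illustrative only; size the shared files to fit your data set):&lt;/p&gt;

```ini
# my.cnf -- illustrative values only
[mysqld]
innodb_file_per_table = 0
# two shared data files; the last one autoextends as the data grows
innodb_data_file_path = ibdata1:10G;ibdata2:10G:autoextend
```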
&lt;p&gt;You can use this script:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Get table list that stored in own tablespace (SPACE&gt;0)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -NB information_schema -e "select NAME from INNODB_SYS_TABLES WHERE name not like 'SYS_%' AND name not like 'mysql/%' AND SPACE &gt; 0" | split -l 30000 - tables;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Generate SQL script
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;for file in `ls tables*`;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;do
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;perl -e '$curdb=""; while(&lt;STDIN&gt;) {chomp; ($db,$table) = split(///); if ($curdb ne $db ) { print "USE $db;n"; $curdb=$db; } print "ALTER TABLE $table engine=innodb;n"; }' &lt; $file &gt; $file.SQL;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;done
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Apply files $file.SQL ( you can use parallel execution ) :
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat&lt;&lt;EOF&gt;convert.sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;file=$1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql &lt; ${file}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Do not forget to fix my.cnf
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -e "set global innodb_file_per_table = 0"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;chmod +x convert.sh
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# run 10 parallel threads
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ls tables*.SQL | xargs -n1 -P10 ./convert.sh&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What the script does:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Retrieves all tables that occupy their own tablespace.&lt;/li&gt;
&lt;li&gt;Generates SQL following the pattern: USE DB_X; ALTER TABLE TBL_Y engine=innodb;&lt;/li&gt;
&lt;li&gt;Applies the SQL scripts in parallel.&lt;/li&gt;
&lt;/ol&gt;
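&lt;p&gt;The SQL-generation step above can also be sketched in awk instead of Perl (a hypothetical equivalent, assuming one db/table name per line, as produced by the INNODB_SYS_TABLES query):&lt;/p&gt;

```shell
# Turn "db/table" lines into USE/ALTER statements, emitting USE only when the db changes
printf 'shop/orders\nshop/users\ncrm/leads\n' |
awk -F/ '{ if ($1 != db) { print "USE " $1 ";"; db = $1 } print "ALTER TABLE " $2 " engine=innodb;" }'
```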
&lt;p&gt;After changing innodb_file_per_table to 0 and moving the InnoDB tables:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-07-17 22:16:47 976 [Note] InnoDB: Reading tablespace information from the .ibd files...
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2019-07-17 22:25:45 976 [Note] mysqld: ready for connections.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="conclusion"&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Using the default value of innodb_file_per_table (ON) is not always a good choice. In my test case (4,000+ databases, 1,400,000+ tables) I reduced the crash recovery time from 02:46:48 to 00:08:58. That’s roughly 18 times faster! Remember, there is no “golden my.cnf config”, and each case is special. Optimize your MySQL configuration according to your needs.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;–&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, please &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Cartoon source &lt;a href="https://imgur.com/" target="_blank" rel="noopener noreferrer"&gt;https://imgur.com/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Timur Solodovnikov</author>
      <category>MySQL</category>
      <category>MySQL Crash Recovery</category>
      <category>Open Source Databases</category>
      <media:thumbnail url="https://percona.community/blog/2019/07/logo-mysql-170x115_hu_be9204a60de7f14a.jpg"/>
      <media:content url="https://percona.community/blog/2019/07/logo-mysql-170x115_hu_1f5d5e082bdcb177.jpg" medium="image"/>
    </item>
    <item>
      <title>Tame Kubernetes with These Open-Source Tools</title>
      <link>https://percona.community/blog/2019/07/08/tame-kubernetes-with-open-source-tools/</link>
      <guid>https://percona.community/blog/2019/07/08/tame-kubernetes-with-open-source-tools/</guid>
      <pubDate>Mon, 08 Jul 2019 13:03:15 UTC</pubDate>
      <description>Kubernetes’ popularity as the most-preferred open-source container-orchestration system has skyrocketed in the recent past. The overall container market is expected to cross USD 2.7 billion by 2020 with a CAGR of 40 percent. Three orchestrators spearhead this upward trend, namely Kubernetes, Mesos, and Docker Swarm. However, referring to the graph below, Kubernetes clearly leads the pack.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://kubernetes.io/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;’ popularity as the most-preferred open-source container-orchestration system has skyrocketed in the recent past. The &lt;a href="https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats" target="_blank" rel="noopener noreferrer"&gt;overall container market&lt;/a&gt; is expected to cross USD 2.7 billion by 2020 with a CAGR of 40 percent. Three orchestrators spearhead this upward trend, namely Kubernetes, &lt;a href="http://mesos.apache.org/" target="_blank" rel="noopener noreferrer"&gt;Mesos&lt;/a&gt;, and &lt;a href="https://docs.docker.com/engine/swarm/" target="_blank" rel="noopener noreferrer"&gt;Docker Swarm&lt;/a&gt;. However, referring to the graph below, Kubernetes clearly leads the pack.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/07/kubernetes-growth.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;source: &lt;a href="https://medium.com/@rdodev/saved-you-an-analyst-read-on-kubernetes-growth-2018-edition-810367876981" target="_blank" rel="noopener noreferrer"&gt;https://medium.com/@rdodev/saved-you-an-analyst-read-on-kubernetes-growth-2018-edition-810367876981 &lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The automation and infrastructure capabilities of Kubernetes are transforming the DevOps space, enhancing the value a business delivers through software. With Kubernetes you can deploy, scale, and &lt;a href="https://www.percona.com/live/19/sites/default/files/digital_rack_aws.pdf" target="_blank" rel="noopener noreferrer"&gt;manage cloud-native databases&lt;/a&gt; and applications from anywhere. No wonder data scientists and &lt;a href="https://www.manipalprolearn.com/data-science/post-graduate-certificate-program-in-data-science-and-machine-learning-manipal-academy-higher-education" target="_blank" rel="noopener noreferrer"&gt;machine learning engineers&lt;/a&gt; love Kubernetes and apply it to improve their productivity. As Kubernetes continues to evolve and grow in complexity, we need solutions that simplify it and enhance your development work. Here is a comprehensive list of Kubernetes tools, many of them open source, that can help you tame this orchestrator. I have divided them into five functional categories.&lt;/p&gt;
&lt;h2 id="1-tools-for-automating-cluster-deployments"&gt;1. Tools for Automating Cluster Deployments&lt;/h2&gt;
&lt;p&gt;Automated Kubernetes cluster services are a hot topic today because they eliminate much of the deployment and management hassles. An ideal application should consume declarative manifests, bootstrap fully-functioning clusters, and ensure that the K8 clusters are highly available.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/kubespray" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;KubeSpray&lt;/strong&gt;&lt;/a&gt; is a great choice for individuals who know Ansible. You can deploy this Ansible-driven cluster deployment tool on AWS, GCE, Azure, OpenStack, Baremetal, and Oracle Cloud Infrastructure.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://conjure-up.io/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Conjure-Up&lt;/strong&gt;&lt;/a&gt; can deploy the Canonical distribution of Kubernetes across several cloud providers using simple commands. The tool has native AWS integration, yet supports other cloud providers like GCE, Azure, Joyent, and OpenStack.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/kubernetes/kops" target="_blank" rel="noopener noreferrer"&gt;Kops&lt;/a&gt;&lt;/strong&gt; or &lt;strong&gt;Kubernetes Operations&lt;/strong&gt; can automate the provisioning of K8 clusters in Amazon Web Services (officially supported) and GCE (beta support). The tool allows you to take full control of the cluster lifecycle, from infrastructure provisioning to cluster deletion.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-incubator/kube-aws" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Kube-AWS&lt;/strong&gt;&lt;/a&gt; is a command-line tool that creates/updates/destroys fully-functional clusters using Amazon Web Services, namely CloudFormation, Auto Scaling, Spot Fleet, and KMS among others.&lt;/li&gt;
&lt;li&gt;You might also like to check out the &lt;a href="https://www.percona.com/software/percona-kubernetes-operators" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Percona Kubernetes operators&lt;/strong&gt;&lt;/a&gt; for Percona XtraDB Cluster and Percona Server for MongoDB.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2-cluster-monitoring-tools"&gt;2. Cluster Monitoring Tools&lt;/h2&gt;
&lt;p&gt;Monitoring Kubernetes clusters is critical in a microservice architecture. The following graph shows the top cluster monitoring tools available today.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/07/tools-services-to-monitor-kubernetes-clusters.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://thenewstack.io/5-tools-monitoring-kubernetes-scale-production/" target="_blank" rel="noopener noreferrer"&gt;https://thenewstack.io/5-tools-monitoring-kubernetes-scale-production/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Here are our recommendations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Prometheus&lt;/strong&gt;&lt;/a&gt; is an open-source Cloud Native Computing Foundation (CNCF) tool that offers enhanced querying, visualization, and alerting features.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/google/cadvisor" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;CAdvisor&lt;/strong&gt;&lt;/a&gt; or &lt;strong&gt;Container Advisor&lt;/strong&gt; comes embedded into the kubelet, the primary node agent that runs on each node in the cluster. The tool focuses on container-level performance and provides an understanding of the resource usage and performance characteristics of the running containers.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datadoghq.com/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Datadog&lt;/strong&gt;&lt;/a&gt; is a good monitoring tool for those who prefer working with a fully-managed SaaS solution. It has a simple user interface to monitor containers. Further, it hosts metrics, such as the CPU and RAM. Its open source projects can be accessed in &lt;a href="https://github.com/DataDog" target="_blank" rel="noopener noreferrer"&gt;github&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/kubernetes-retired/heapster" target="_blank" rel="noopener noreferrer"&gt;Heapster&lt;/a&gt;&lt;/strong&gt; was a native supporter of Kubernetes and is installed as a pod inside Kubernetes. Thus, it can effectively gather data from the containers and pods inside the cluster. Unfortunately developers have retired the project, but you can still access the open source code.&lt;/li&gt;
&lt;/ul&gt;
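&lt;p&gt;As a taste of how little configuration Prometheus needs to start watching a cluster, its built-in Kubernetes service discovery is just a few lines (a minimal sketch; real deployments add relabeling rules and scrape annotations):&lt;/p&gt;

```yaml
# prometheus.yml -- minimal sketch: discover and scrape every pod in the cluster
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
```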
&lt;h2 id="3-security-tools"&gt;3. Security Tools&lt;/h2&gt;
&lt;p&gt;Since Kubernetes effectively automates the provisioning and configuration of containers and provides IP-based security to each pod in the cluster, it has become the de facto container orchestrator. However, Kubernetes cannot offer advanced security monitoring and compliance enforcement, making it important for you to rely on the below-mentioned tools to secure your container stack and in turn &lt;a href="https://www.manipalprolearn.com/blog/decoding-devops-security-three-best-practices" target="_blank" rel="noopener noreferrer"&gt;bolster DevOps security&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/aporeto-inc" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Aporeto&lt;/strong&gt;&lt;/a&gt; offers runtime protection to containers, microservices, and cloud and legacy applications, thereby securing Kubernetes workloads. It provides a cloud-network firewall system to secure apps running in distributed environments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://www.twistlock.com/" target="_blank" rel="noopener noreferrer"&gt;Twistlock&lt;/a&gt;&lt;/strong&gt; is designed to monitor applications deployed on Kubernetes for vulnerability, compliance issues, whitelisting, firewalling, and offer runtime protection to containers. In fact, it had compliance controls for enforcing HIPAA and PCI regulations on the K8 containers. The latest version adds forensic analysis that can reduce runtime overhead.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://neuvector.com/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;NeuVector&lt;/strong&gt;&lt;/a&gt; was designed to safeguard the entire K8 cluster. The container security product can protect applications at all stages of deployment.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sysdig.com/products/secure/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Sysdig Secure&lt;/strong&gt;&lt;/a&gt; offers a set of tools for monitoring container runtime environments. Sysdig designed this tool for deep integrations with container orchestration tools and to run along with other tools, such as Sysdig Monitor.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="4-development-tools"&gt;4. Development Tools&lt;/h2&gt;
&lt;p&gt;Kubernetes applications consist of multiple services, each running in its own container. Developing and debugging them on a remote Kubernetes cluster can be a cumbersome undertaking. Here are a few development tools that can ease the process of developing and debugging the services locally.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://www.telepresence.io/" target="_blank" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt;&lt;/strong&gt; is a development tool that allows you to use custom tools, namely debugger and IDE to simplify the developing and &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/" target="_blank" rel="noopener noreferrer"&gt;local debugging process&lt;/a&gt;. It provides full access to ConfigMap and other services running on the remote cluster.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://keel.sh/" target="_blank" rel="noopener noreferrer"&gt;Keel&lt;/a&gt;&lt;/strong&gt; automates Kubernetes deployment updates as soon as the new version is available in the repository. It is stateless and robust and deploys Kubernetes services through labels, annotations, and charts.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/helm" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Helm&lt;/strong&gt;&lt;/a&gt; is an application package manager for Kubernetes that allows the description of the application structure using helm-charts and simple commands.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/logzio/apollo/wiki/Getting-Started-with-Apollo" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Apollo&lt;/strong&gt;&lt;/a&gt; is an open-source application that helps operators create and deploy their services to Kubernetes. It also allows the user to view logs and revert deployments at any time.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="5-kubernetes-based-serverless-frameworks"&gt;5. Kubernetes-Based Serverless Frameworks&lt;/h2&gt;
&lt;p&gt;Due to Kubernetes’ ability to orchestrate containers across clusters of hosts, serverless FaaS frameworks rely on Kubernetes for orchestration and management. Here are a few Kubernetes-based serverless frameworks that can help build a serverless environment.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://kubeless.io/" target="_blank" rel="noopener noreferrer"&gt;Kubeless&lt;/a&gt;&lt;/strong&gt; is a Kubernetes-native open-source serverless framework that allows developers to deploy bits of code without worrying about the underlying infrastructure. It uses Kubernetes resources to offer auto-scaling, API routing, monitoring, and troubleshooting&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform9.com/fission/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Fission&lt;/strong&gt;&lt;/a&gt; is an open-source serverless framework released by Platform9, a software company that manages hybrid cloud infrastructure with Kubernetes cloud solutions. The framework helps developers manage their applications without bothering about the plumbing related to containers.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/knative" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;KNative&lt;/strong&gt;&lt;/a&gt; is a platform used by operators to build serverless solutions on top of Kubernetes. It isn’t an outright serverless solution. KNative acts as a layer between Kubernetes and the serverless framework, enabling developers to run the application anywhere that Kubernetes runs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="time-for-action"&gt;Time for Action&lt;/h2&gt;
&lt;p&gt;Open-source container-orchestration systems like Kubernetes have helped users overcome several challenges in the DevOps space. However, as Kubernetes continues to evolve, your development, monitoring, and security strategies need to change. Use the Kubernetes tools and frameworks shared in this post to simplify cluster orchestration and deployment, making it easy to deploy this popular orchestrator.&lt;/p&gt;
&lt;p&gt;–&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, please &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Featured image photograph &lt;a href="https://pixabay.com/photos/boat-wheel-ship-sea-nautical-2387790/" target="_blank" rel="noopener noreferrer"&gt;AnnaD on Pixabay&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Gaurav Belani</author>
      <category>DevOps</category>
      <category>Kubernetes</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2019/07/kubernetes-management-tools_hu_130ec8379081eda6.jpg"/>
      <media:content url="https://percona.community/blog/2019/07/kubernetes-management-tools_hu_e815360ef9f99cd8.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: Globalizing Player Accounts with MySQL at Riot Games</title>
      <link>https://percona.community/blog/2019/05/28/percona-live-presents-globalizing-player-accounts-mysql-riot-games/</link>
      <guid>https://percona.community/blog/2019/05/28/percona-live-presents-globalizing-player-accounts-mysql-riot-games/</guid>
      <pubDate>Tue, 28 May 2019 16:48:15 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/05/riot-games.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;During my presentation at &lt;a href="https://www.percona.com/live/19/sessions/globalizing-player-accounts-with-mysql-at-riot-games" target="_blank" rel="noopener noreferrer"&gt;Percona Live 2019&lt;/a&gt;, I’ll be talking about how &lt;a href="https://www.riotgames.com/en" target="_blank" rel="noopener noreferrer"&gt;Riot Games&lt;/a&gt;, the company behind League of Legends, migrated hundreds of millions of player accounts to unlock opportunities for us to delight players. This meant moving ten geographically distributed databases into a single global database replicated into four AWS regions. I’ll talk about some of the technical decisions we made, the expected vs actual outcomes, and lessons we learned along the way.&lt;/p&gt;
&lt;p&gt;Migrating hundreds of millions of player records without impacting a player’s ability to manage their account and log in was a daunting task. I’ll shed some light on how we managed to handle this data migration while modifying the database schema. I’ll also go into detail on the backend architecture of our accounts service, such as how we use Continuent Tungsten, which we’re leveraging to manage our globally replicated database.&lt;/p&gt;
&lt;p&gt;I gave a &lt;a href="https://www.youtube.com/watch?v=MJpZZm62ZKw" target="_blank" rel="noopener noreferrer"&gt;similar version of this talk&lt;/a&gt; at AWS re:Invent last year, and wrote the article “&lt;a href="https://technology.riotgames.com/news/globalizing-player-accounts" target="_blank" rel="noopener noreferrer"&gt;Globalizing Player Accounts&lt;/a&gt;” on the &lt;a href="http://technology.riotgames.com" target="_blank" rel="noopener noreferrer"&gt;Riot Games Tech Blog&lt;/a&gt;—check out these resources for more deep tech details and context on our accounts solution.&lt;/p&gt;
&lt;h2 id="whod-get-the-most-from-this-presentation"&gt;Who’d get the most from this presentation?&lt;/h2&gt;
&lt;p&gt;The presentation will be most helpful for folks who want to learn about strategies for deploying globally replicated databases, especially developers and DBA/DBEs who are building global services. I’ll also discuss how we think about deploying applications that will talk to these types of databases.&lt;/p&gt;
&lt;h2 id="whose-presentations-are-you-most-looking-forward-to"&gt;Whose presentations are you most looking forward to?&lt;/h2&gt;
&lt;p&gt;In particular, I’m really looking forward to VividCortex’s talk on &lt;a href="https://www.percona.com/live/19/sessions/optimizing-database-performance-and-efficiency" target="_blank" rel="noopener noreferrer"&gt;optimizing performance and efficiency&lt;/a&gt; because I’d like to see their perspectives on performance issues. I’m excited to learn more by comparing their solutions to the ones I’ve seen at my own company.&lt;/p&gt;
&lt;p&gt;I’m also looking forward to the Facebook talks (&lt;a href="https://www.percona.com/live/19/sessions/mysql-replication-and-ha-at-facebook-part-1" target="_blank" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt; &amp; &lt;a href="https://www.percona.com/live/19/sessions/mysql-replication-and-ha-at-facebook-part-2" target="_blank" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;) on HA MySQL because I’m interested in this problem space and I’m curious about their solutions for managing data at scale.&lt;/p&gt;</content:encoded>
      <author>Tyler Turk</author>
      <category>Events</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Percona Live 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/05/riot-games_hu_93c2e4143b4a2238.jpg"/>
      <media:content url="https://percona.community/blog/2019/05/riot-games_hu_77470f9fb67cf964.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: Gonymizer, A Tool to Anonymize Sensitive PostgreSQL Data Tables for Use in QA and Testing</title>
      <link>https://percona.community/blog/2019/05/17/percona-live-gonymizer-tool-anonymize-sensitive-postgresql-data-testing/</link>
      <guid>https://percona.community/blog/2019/05/17/percona-live-gonymizer-tool-anonymize-sensitive-postgresql-data-testing/</guid>
      <pubDate>Fri, 17 May 2019 11:11:58 UTC</pubDate>
      <description>SmithRX is a next-generation pharmacy benefit platform that is using the latest technology to radically reshape the prescription benefit management industry. To move quickly, we require the ability to iterate on and test new versions of our software using production-like data without violating Health Insurance Portability and Accountability Act (HIPAA) regulations.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://smithrx.com/" target="_blank" rel="noopener noreferrer"&gt;SmithRX&lt;/a&gt;&lt;a href="https://smithrx.com/" target="_blank" rel="noopener noreferrer"&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/05/gonymizer-postgres-data-anonymizer.jpg" alt="gonymizer postgres data anonymizer" /&gt;&lt;/figure&gt;&lt;/a&gt; is a next generation pharmacy benefit platform that is using the latest technology to radically reshape the prescription benefit management industry. To move quickly, we require the ability to iterate and test new versions of our software using production like data without violating Health Information Portability and Accountability Act (HIPAA) regulations.&lt;/p&gt;
&lt;p&gt;At Percona Live 2019, we are introducing a project we open sourced to anonymize our sensitive production data for use in rapid QA and testing of our software. The talk will cover:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An introduction to HIPAA and Protected Health Information (PHI)&lt;/li&gt;
&lt;li&gt;Deciding which parts of your data need to be anonymized&lt;/li&gt;
&lt;li&gt;Column mapping and how to represent relations that need to be anonymized&lt;/li&gt;
&lt;li&gt;An introduction to the design of the software and how it works&lt;/li&gt;
&lt;li&gt;Dumping data from a sensitive source&lt;/li&gt;
&lt;li&gt;Processing the sensitive data to create an anonymized data set&lt;/li&gt;
&lt;li&gt;Loading of the anonymized data set to a QA environment&lt;/li&gt;
&lt;li&gt;How SmithRx is using multiple Kubernetes CronJobs to reload our QA and development environments daily&lt;/li&gt;
&lt;li&gt;Other examples of how Gonymizer can be used in other scheduling systems, such as AWS Lambda&lt;/li&gt;
&lt;li&gt;What this means for you and how you can contribute&lt;/li&gt;
&lt;/ul&gt;
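&lt;p&gt;The daily QA reload mentioned above can be driven by a Kubernetes CronJob. Here is a hypothetical sketch; the image name, arguments, and schedule are illustrative, not Gonymizer’s actual interface:&lt;/p&gt;

```yaml
# Reload the QA database with anonymized data every night at 02:00
apiVersion: batch/v1
kind: CronJob
metadata:
  name: qa-db-reload
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: gonymizer
              image: example/gonymizer:latest                 # hypothetical image
              args: ["load", "--conf=/etc/gonymizer/qa.json"] # hypothetical flags
```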
&lt;h3 id="whod-get-the-most-from-the-presentation"&gt;Who’d get the most from the presentation?&lt;/h3&gt;
&lt;p&gt;This presentation is intended for software engineers who need a quick and easy way to anonymize their data, particularly those working on database infrastructure (DevOps) and continuous integration systems. It is also appropriate for Go developers looking to contribute to an open source project that is database related. Currently Gonymizer only supports PostgreSQL, but the software has been designed to handle multiple RDBMSs in the future, so anyone with HIPAA, DISA (Defense Information Systems Agency), or PCI (Payment Card Industry) experience in other RDBMSs may find this presentation useful for getting started on porting Gonymizer to their RDBMS.&lt;/p&gt;
&lt;h3 id="whose-presentations-are-you-most-looking-forward-to"&gt;Whose presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;At SmithRx we are currently growing our infrastructure size, automation management, and monitoring systems for our PostgreSQL database tier. There are many presentations we look forward to attending, but the following four talks will be a focus for SmithRx:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/ha-postgresql-on-kubernetes-by-demo" target="_blank" rel="noopener noreferrer"&gt;HA PostgreSQL on Kubernetes&lt;/a&gt; by Josh Berkus&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/automated-database-monitoring-at-uber-with-m3-and-prometheus" target="_blank" rel="noopener noreferrer"&gt;Automated Database Monitoring at Uber With M3 and Prometheus&lt;/a&gt; by Richard Artoul&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/monitoring-postgresql-with-percona-monitoring-and-management-pmm" target="_blank" rel="noopener noreferrer"&gt;Monitoring PostgreSQL with Percona Monitoring and Management (PMM)&lt;/a&gt; by Avinash Vallarapu&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/future-of-postgres" target="_blank" rel="noopener noreferrer"&gt;Future of Postgres&lt;/a&gt; by Ken Rugg&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;–&lt;/p&gt;
&lt;p&gt;Photo by &lt;a href="https://unsplash.com/photos/bhoj9tHlsiY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Viktor Talashuk&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/mannequin?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Levi Junkert</author>
      <category>Events</category>
      <category>Kubernetes</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2019/05/gonymizer-postgres-data-anonymizer_hu_cae16e3cef5c69e9.jpg"/>
      <media:content url="https://percona.community/blog/2019/05/gonymizer-postgres-data-anonymizer_hu_dc455ea2d68231b2.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: An Open-Source, Cloud Native Database</title>
      <link>https://percona.community/blog/2019/05/14/percona-live-presents-open-source-cloud-native-database/</link>
      <guid>https://percona.community/blog/2019/05/14/percona-live-presents-open-source-cloud-native-database/</guid>
      <pubDate>Tue, 14 May 2019 17:38:16 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/05/cloud-native-database.jpg" alt="an open source cloud native database" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;During our presentation at &lt;a href="https://www.percona.com/live/19/sessions/an-open-source-cloud-native-database-cndb" target="_blank" rel="noopener noreferrer"&gt;Percona Live 2019&lt;/a&gt;, Intel and its software partners will introduce the audience to the work we’re doing to enable an open-source framework we call Cloud Native Database. This is a collaborative effort between &lt;a href="https://intel.com/" target="_blank" rel="noopener noreferrer"&gt;Intel&lt;/a&gt;, &lt;a href="https://rockset.com/" target="_blank" rel="noopener noreferrer"&gt;Rockset&lt;/a&gt;, &lt;a href="https://planetscale.com/" target="_blank" rel="noopener noreferrer"&gt;PlanetScale&lt;/a&gt;, &lt;a href="https://mariadb.org/" target="_blank" rel="noopener noreferrer"&gt;MariaDB&lt;/a&gt;, and &lt;a href="https://www.percona.com/" target="_blank" rel="noopener noreferrer"&gt;Percona&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Throughout the presentation, the audience will be introduced to a set of principles and architectural elements that define what we mean by Cloud Native Database. We will discuss Rockset’s RocksDB-Cloud library and how it works with Facebook’s MyRocks storage engine. We will also cover PlanetScale’s Vitess project and its use of Kubernetes for deploying our Database-as-a-Service (DBaaS) mechanisms. Lastly, we will share data on the performance and scale characteristics of the architecture and components that we have developed.&lt;/p&gt;
&lt;h3 id="whod-get-the-most-from-the-presentation"&gt;Who’d get the most from the presentation?&lt;/h3&gt;
&lt;p&gt;Developers, DBAs, database practitioners in general, and folks interested in building, deploying, and operating stateful, cloud native microservices on Kubernetes will all benefit from this presentation.&lt;/p&gt;
&lt;h3 id="whose-presentations-are-you-most-looking-forward-to"&gt;Whose presentations are you most looking forward to?&lt;/h3&gt;
&lt;p&gt;I’m really looking forward to &lt;a href="https://www.percona.com/live/19/sessions/vitess-running-sharded-mysql-on-kubernetes" target="_blank" rel="noopener noreferrer"&gt;Vitess: Running Sharded MySQL on Kubernetes&lt;/a&gt; by Sugu Sougoumarane and Dan Kozlowski. The folks at PlanetScale are doing some amazing stuff with the Vitess project. I’m also super excited to get audience feedback on our second presentation, &lt;a href="https://www.percona.com/live/19/sessions/a-discussion-on-the-advantages-afforded-mysql-dbaas-offerings-hosted-on-intels-next-gen-platform" target="_blank" rel="noopener noreferrer"&gt;A Discussion on the Advantages Afforded MySQL DBaaS offerings hosted on Intel’s Next Gen Platform&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/04/Percona-Live-2019.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;–&lt;/p&gt;
&lt;p&gt;Photo by &lt;a href="https://unsplash.com/photos/h-rP5KSC2W0?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Michael Weidner&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/cloud?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Dave Cohen</author>
      <category>david.cohen</category>
      <category>Events</category>
      <category>MySQL</category>
      <category>Percona Live 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/05/cloud-native-database_hu_154194f8fc206df5.jpg"/>
      <media:content url="https://percona.community/blog/2019/05/cloud-native-database_hu_6730cf7c29fb35ed.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: The State of Databases in 2019</title>
      <link>https://percona.community/blog/2019/05/09/percona-live-presents-state-databases-2019/</link>
      <guid>https://percona.community/blog/2019/05/09/percona-live-presents-state-databases-2019/</guid>
      <pubDate>Thu, 09 May 2019 10:20:27 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/05/state-of-databases-2019.jpg" alt="state of databases 2019" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;At this year’s Percona Live I am talking about &lt;a href="https://www.percona.com/live/19/sessions/the-state-of-databases-in-2019" target="_blank" rel="noopener noreferrer"&gt;The State of Databases in 2019&lt;/a&gt;. As a Software Engineer in the thick of the Database landscape, there are two problems that I see repeatedly.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Due to the massive explosion of database solutions, it has become very difficult to evaluate which database solution will serve one’s use case best, and&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The constant tug of war between database operators and users: users (software developers) want the database solution best suited to their use case, while operators have to find a way to deploy databases that serve the entire organization’s needs.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I feel these are both real-world problems that will be experienced by any organization using a database at any scale. In my talk, I will touch upon both areas and share my experience working with various database engines over my career.&lt;/p&gt;
&lt;h3 id="whod-get-the-most-from-the-presentation"&gt;Who’d get the most from the presentation?&lt;/h3&gt;
&lt;p&gt;My talk is oriented primarily towards software developers and database operators. However, it is of general interest to all stakeholders, including people in the C-suite. The database landscape is complex and hard to understand, so anybody who would like to understand the landscape in 2019 could benefit from my talk.&lt;/p&gt;
&lt;h3 id="what-im-looking-forward-to-the-most"&gt;What I’m looking forward to the most…&lt;/h3&gt;
&lt;p&gt;While I am looking forward to a significant number of talks spread over both days of the conference, a few stand out.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/live/19/sessions/mysql-technology-evolutions-at-facebook" target="_blank" rel="noopener noreferrer"&gt;MySQL Technology Evolutions at Facebook&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I am a big fan of how things actually run in production. I believe code &amp; software engineering practices mature when your code runs in production. As many people know, Facebook has a huge MySQL installation. It is exciting to learn how they have productionized MySQL to serve over a billion users.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/live/19/sessions/databases-at-scale-at-square" target="_blank" rel="noopener noreferrer"&gt;Databases at Scale, at Square&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Again, I love to know how different organizations productionize their databases. With Square being bang in the middle of enterprise and retail financial ecosystems, I am very interested in listening to this talk on how they balance the various requirements and deliver a great database product.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/live/19/sessions/storing-time-series-in-2019-modern-database-performance-scalability-and-reliability-comparison" target="_blank" rel="noopener noreferrer"&gt;Storing Time Series in 2019: Modern Database Performance, Scalability, and Reliability Comparison&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I am a committer on the Apache Cassandra project – a database that is frequently used for storing time series data – so this comparison is of direct interest to me. I also want to hear about the specifics of the monitoring system that they have built leveraging Cassandra. It is always interesting to hear first-hand experiences from our users.&lt;/p&gt;
&lt;p&gt;–&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/photos/Q1p7bh3SHj8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;NASA&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/data?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Dave Cohen</author>
      <category>Events</category>
      <category>Open Source Databases</category>
      <category>Percona Live 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/05/state-of-databases-2019_hu_22a71a457a8ad3d7.jpg"/>
      <media:content url="https://percona.community/blog/2019/05/state-of-databases-2019_hu_a4b61c990581d86.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: The First Ever TiDB Track</title>
      <link>https://percona.community/blog/2019/05/06/percona-live-presents-first-ever-tidb-track/</link>
      <guid>https://percona.community/blog/2019/05/06/percona-live-presents-first-ever-tidb-track/</guid>
      <pubDate>Mon, 06 May 2019 20:45:41 UTC</pubDate>
<description>The PingCAP team has always been a strong supporter of Percona and the wider open source database community. As the people who work day in and day out on TiDB, an open source NewSQL database with MySQL compatibility, open source databases are what get us up in the morning, and there’s no better place to share that passion than Percona Live.</description>
      <content:encoded>&lt;p&gt;The PingCAP team has always been a strong supporter of Percona and the wider open source database community. As the people who work day in and day out on &lt;a href="https://github.com/pingcap/tidb" target="_blank" rel="noopener noreferrer"&gt;TiDB&lt;/a&gt;, an open source NewSQL database with MySQL compatibility, open source database is what gets us in the morning, and there’s no better place to share that passion than Percona Live.&lt;/p&gt;
&lt;p&gt;At this year’s &lt;a href="https://www.percona.com/live/19/" target="_blank" rel="noopener noreferrer"&gt;Percona Live Open Source Database Conference&lt;/a&gt; in Austin, Texas, we are particularly excited to bring you a full track of talks and demos on the latest developments in TiDB during Day 1 of the conference.&lt;/p&gt;
&lt;h2 id="who-would-benefit-from-the-tidb-track"&gt;Who would benefit from the TiDB track&lt;/h2&gt;
&lt;p&gt;The TiDB track is designed to share technical know-how, reproducible benchmarks (no benchmark-eting), and best practices on how TiDB can solve real problems with developers, DBAs, and practitioners in general. There are seven talks in total by folks from PingCAP and Intel that cover the full gamut of how you can test, migrate, and use TiDB in the cloud to solve technical problems and deliver business value. Here’s a rundown of the talk topics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/tidb-30-whats-new-and-whats-next" target="_blank" rel="noopener noreferrer"&gt;How to benchmark TiDB 3.0, the newest version&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/using-chaos-engineering-to-build-a-reliable-tidb" target="_blank" rel="noopener noreferrer"&gt;Using chaos engineering to ensure system reliability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/leveraging-optane-to-tackle-your-io-challenges-with-tidb" target="_blank" rel="noopener noreferrer"&gt;Leveraging Intel Optane to tackle IO challenges&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/deep-dive-into-tidb-sql-layer" target="_blank" rel="noopener noreferrer"&gt;A deep look at TiDB’s SQL processing layer, optimized for a distributed system&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/making-htap-real-with-tiflash-a-tidb-native-columnar-extension" target="_blank" rel="noopener noreferrer"&gt;Introducing a new columnar storage engine (TiFlash) that makes hybrid OLTP/OLAP a reality&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/making-an-aas-out-of-tidb-building-dbaas-on-a-kubernetes-operator" target="_blank" rel="noopener noreferrer"&gt;Building TiDB as a managed service (aka DBaaS) on a Kubernetes Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/19/sessions/from-mysql-to-tidb-and-back-again" target="_blank" rel="noopener noreferrer"&gt;Migration best practices in and out of TiDB from MySQL and MariaDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Phew! That’s a lot. I hope you are excited to join us for this track. As Peter Zaitsev and Morgan Tocker (one of the TiDB track speakers) noted in a recent &lt;a href="https://www.percona.com/resources/webinars/how-horizontally-scale-mysql-tidb-while-avoiding-sharding-issues" target="_blank" rel="noopener noreferrer"&gt;Percona webinar&lt;/a&gt;, there’s a lot TiDB can do to help scale MySQL while avoiding common manual sharding issues. This track will peel the onion to show you all the fun stuff under the hood.&lt;/p&gt;
&lt;h2 id="whose-presentations-do-you-look-forward-to"&gt;Whose presentations do you look forward to?&lt;/h2&gt;
&lt;p&gt;Besides the TiDB track, there are many other presentations we are excited about. In particular, I look forward to attending Stacy Yuan and Yashada Jadhav of PayPal’s talk on &lt;a href="https://www.percona.com/live/19/sessions/mysql-security-and-standardization-at-paypal" target="_blank" rel="noopener noreferrer"&gt;MySQL Security and Standardization&lt;/a&gt;, and Vinicius Grippa of Percona’s presentation on &lt;a href="https://www.percona.com/live/19/sessions/enhancing-mysql-security" target="_blank" rel="noopener noreferrer"&gt;enhancing MySQL Security&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;See you soon in Austin!&lt;/p&gt;</content:encoded>
      <author>PingCAP</author>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Percona Live 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/05/state-of-databases-2019_hu_22a71a457a8ad3d7.jpg"/>
      <media:content url="https://percona.community/blog/2019/05/state-of-databases-2019_hu_a4b61c990581d86.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: The MySQL Query Optimizer Explained Through Optimizer Trace</title>
      <link>https://percona.community/blog/2019/04/24/mysql-query-optimizer-explained-optimizer-trace/</link>
      <guid>https://percona.community/blog/2019/04/24/mysql-query-optimizer-explained-optimizer-trace/</guid>
      <pubDate>Wed, 24 Apr 2019 16:16:09 UTC</pubDate>
      <description>During my presentation at Percona Live 2019 I will show how using Optimizer Trace can give insight into the inner workings of the MySQL Query Optimizer. Through the presentation, the audience will both be introduced to optimizer trace, learn more about the decisions the query optimizer makes, and learn about the query execution strategies the query optimizer has at its disposal. I’ll be covering the main phases of the MySQL optimizer and its optimization strategies, including query transformations, data access strategies, the range optimizer, the join optimizer, and subquery optimization.</description>
      <content:encoded>&lt;p&gt;During my presentation at &lt;a href="https://www.percona.com/live/19/sessions/the-mysql-query-optimizer-explained-through-optimizer-trace" target="_blank" rel="noopener noreferrer"&gt;Percona Live 2019&lt;/a&gt; I will show how using Optimizer Trace can give insight into the inner workings of the MySQL Query Optimizer. Through the presentation, the audience will both be introduced to optimizer trace, learn more about the decisions the query optimizer makes, and learn about the query execution strategies the query optimizer has at its disposal. I’ll be covering the main phases of the MySQL optimizer and its optimization strategies, including query transformations, data access strategies, the range optimizer, the join optimizer, and subquery optimization.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/04/oysteing3.jpg" alt="Øystein Grøvlen" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="whod-benefit-most-from-the-presentation"&gt;Who’d benefit most from the presentation?&lt;/h2&gt;
&lt;p&gt;DBAs, developers, support engineers, and other people who are concerned about MySQL query performance will benefit from this presentation. Knowing the optimizer trace will enable them to understand why the query optimizer selected a particular query plan. This will be very helpful in understanding how to tune their queries for better performance.&lt;/p&gt;
&lt;h2 id="whose-presentations-are-you-most-looking-forward-to"&gt;Whose presentations are you most looking forward to?&lt;/h2&gt;
&lt;p&gt;I’m definitely looking forward to &lt;a href="https://www.percona.com/live/19/sessions/a-proactive-approach-to-monitoring-slow-queries" target="_blank" rel="noopener noreferrer"&gt;A Proactive Approach to Monitoring Slow Queries&lt;/a&gt; by Shashank Sahni of &lt;a href="https://www.thousandeyes.com/" target="_blank" rel="noopener noreferrer"&gt;ThousandEyes Inc&lt;/a&gt;. It is always interesting to learn how users of MySQL monitor their systems to detect and improve slow queries.&lt;/p&gt;</content:encoded>
      <author>Øystein Grøvlen</author>
      <category>Events</category>
      <category>MySQL</category>
      <category>Percona Live 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/04/oysteing3_hu_6fbb8ac78c811860.jpg"/>
      <media:content url="https://percona.community/blog/2019/04/oysteing3_hu_f3e25d50fc98dc57.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Presents: Vitess – Running Sharded MySQL on Kubernetes</title>
      <link>https://percona.community/blog/2019/04/18/percona-live-presents-vitess-running-sharded-mysql-kubernetes/</link>
      <guid>https://percona.community/blog/2019/04/18/percona-live-presents-vitess-running-sharded-mysql-kubernetes/</guid>
      <pubDate>Thu, 18 Apr 2019 17:19:49 UTC</pubDate>
      <description>The topic I’m presenting addresses a growing and unfulfilled need: the ability to run stateful workloads in Kubernetes. Running stateless applications is now considered a solved problem. However, it’s currently not practical to put databases like MySQL in containers, give them to Kubernetes, and expect it to manage their life cycles.</description>
      <content:encoded>&lt;p&gt;The topic I’m presenting addresses a growing and unfulfilled need: the ability to run stateful workloads in Kubernetes. Running stateless applications is now considered a solved problem. However, it’s currently not practical to put databases like MySQL in containers, give them to Kubernetes, and expect it to manage their life cycles.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/04/sugu_sougoumarane.jpg" alt="Sugu Sougoumarane" /&gt;&lt;/figure&gt;
Sugu Sougoumarane, CTO of Planetscale and creator of &lt;a href="https://vitess.io/" target="_blank" rel="noopener noreferrer"&gt;Vitess&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Vitess addresses this need by providing all the necessary orchestration and safety, and it has multiple years of mileage to show for it. Storage is the last piece of the puzzle that needs to be solved in Kubernetes, and it’s exciting to see people look towards Vitess to fill this gap.&lt;/p&gt;
&lt;h2 id="whod-benefit-most-from-the-presentation"&gt;Who’d benefit most from the presentation?&lt;/h2&gt;
&lt;p&gt;Anybody who’s looking to move to Kubernetes and is wondering what to do about their data is the perfect audience. Needless to say, Vitess also addresses problems of scalability, so those who are looking to scale MySQL will also benefit from our talk.&lt;/p&gt;
&lt;h2 id="whose-presentations-are-you-most-looking-forward-to"&gt;Whose presentations are you most looking forward to?&lt;/h2&gt;
&lt;p&gt;I’m looking forward to &lt;em&gt;&lt;a href="https://www.percona.com/live/19/sessions/an-open-source-cloud-native-database-cndb" target="_blank" rel="noopener noreferrer"&gt;An Open-Source, Cloud Native Database (CNDB)&lt;/a&gt;&lt;/em&gt; by David Cohen of Intel, and others. They are doing something unique by bridging the gap between legacy systems and the cloud-based architectures emerging today, using all open source technology.&lt;/p&gt;
&lt;p&gt;I’ll be presenting my talk &lt;em&gt;&lt;a href="https://www.percona.com/live/19/sessions/vitess-running-sharded-mysql-on-kubernetes" target="_blank" rel="noopener noreferrer"&gt;Vitess: Running Sharded MySQL on Kubernetes&lt;/a&gt;&lt;/em&gt; at Percona Live 2019 on Wednesday, May 29 alongside Dan Kozlowski, also of &lt;a href="https://planetscale.com/" target="_blank" rel="noopener noreferrer"&gt;PlanetScale&lt;/a&gt;. If you’d like to &lt;a href="https://www.percona.com/live/19/register" target="_blank" rel="noopener noreferrer"&gt;register for the conference&lt;/a&gt;, use the code SEEMESPEAK for a 20% discount on your ticket.&lt;/p&gt;
&lt;p&gt;Percona Live 2019 takes place in Austin Texas from May 28 – May 30, &lt;a href="https://www.percona.com/live/19/" target="_blank" rel="noopener noreferrer"&gt;view the full programme here&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <author>Sugu Sougoumarane</author>
      <category>Events</category>
      <category>MySQL</category>
      <category>Percona Live 2019</category>
      <media:thumbnail url="https://percona.community/blog/2019/04/sugu_sougoumarane_hu_20c445e194a57b97.jpg"/>
      <media:content url="https://percona.community/blog/2019/04/sugu_sougoumarane_hu_2634aeccf4aaf5bc.jpg" medium="image"/>
    </item>
    <item>
      <title>London Open Source Database Community Meetup</title>
      <link>https://percona.community/blog/2019/03/15/london-open-source-database-community-meetup/</link>
      <guid>https://percona.community/blog/2019/03/15/london-open-source-database-community-meetup/</guid>
      <pubDate>Fri, 15 Mar 2019 09:24:30 UTC</pubDate>
      <description>I strongly believe in the community.</description>
      <content:encoded>&lt;p&gt;I strongly believe in the community.&lt;/p&gt;
&lt;p&gt;Communities are the real strength of open source. Not just the theoretical ability to study, modify, and share code – but the fact that other people out there are doing these things, creating a base of knowledge and a network of relations. These can become work relationships, valuable discussions, open source tools, or even friendships.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.meetup.com/London-Open-Source-Database-Meetup/events/259662862/" target="_blank" rel="noopener noreferrer"&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2019/03/london-meetup_hu_18b35a1a448c4c27.jpg 480w, https://percona.community/blog/2019/03/london-meetup_hu_a58d2673e15cc612.jpg 768w, https://percona.community/blog/2019/03/london-meetup_hu_1a968764ca50d9bc.jpg 1400w"
src="https://percona.community/blog/2019/03/london-meetup.jpg" alt="London Open Source Database Meetup" /&gt;&lt;/figure&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That is why, when I heard that several people from the Percona support team will soon be in London, I badly wanted to organise an event.&lt;/p&gt;
&lt;p&gt;Actually, there was an interesting coincidence. When I asked &lt;a href="https://www.percona.com/blog/author/sveta-smirnova/" target="_blank" rel="noopener noreferrer"&gt;Sveta Smirnova&lt;/a&gt; if anyone from Percona lives in London, I already knew I wanted to organise an event with this new meetup group I’ve started: &lt;a href="https://www.meetup.com/London-Open-Source-Database-Meetup/" target="_blank" rel="noopener noreferrer"&gt;London Open Source Database meetup&lt;/a&gt;. But when Sveta told me that a whole team of Perconians would soon come to London? Well, trying to organise something big was natural! I asked them to speak about a broad range of technologies. And they came up with some brilliant talk descriptions.&lt;/p&gt;
&lt;p&gt;This is the list of talks (the order may change a bit):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MongoDB ReplicaSet and Sharding&lt;/strong&gt; – Vinodh Krishnaswamy, Support Engineer&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MySQL 8.0 architecture and Enhancements&lt;/strong&gt; – Lalit Choudhary, Bug Reproduction Analyst&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Optimizer Histograms: When they Help and When Do Not?&lt;/strong&gt; – Sveta Smirnova, Principal Bug Escalation Specialist&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;New and Maturing Built-in Features in PostgreSQL to Help Build Simple Shards&lt;/strong&gt; – Jobin Augustine, Senior Support Engineer&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Brothers in Arms: Using ProxySQL + PXC to Ensure Transparent High Availability for your Application&lt;/strong&gt; – Vinicius Grippa, Support Engineer&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="details"&gt;Details&lt;/h2&gt;
&lt;p&gt;If you will be in or near London on &lt;strong&gt;Wednesday March 27&lt;/strong&gt;, between 7pm and 10pm, please sign up on the &lt;a href="https://www.meetup.com/London-Open-Source-Database-Meetup/events/259662862/" target="_blank" rel="noopener noreferrer"&gt;event page&lt;/a&gt; as soon as possible, meet the Percona experts, enjoy a few snacks courtesy of Percona, and be a part of this new idea. The event is being held at Innovation Warehouse in the Farringdon area – it’s above Smithfield Market.&lt;/p&gt;
&lt;p&gt;And I’d like to thank the Percona team for helping me get this new project off the ground. See you there!&lt;/p&gt;</content:encoded>
      <author>Federico Razzoli</author>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>PostgreSQL</category>
      <media:thumbnail url="https://percona.community/blog/2019/03/london-meetup_hu_b745b423e8cba352.jpg"/>
      <media:content url="https://percona.community/blog/2019/03/london-meetup_hu_bd036228c3dce9a7.jpg" medium="image"/>
    </item>
    <item>
      <title>#ilovefs Valentine's Day Celebration (I Love Free Software)</title>
      <link>https://percona.community/blog/2019/02/14/ilovefs-valentines-day/</link>
      <guid>https://percona.community/blog/2019/02/14/ilovefs-valentines-day/</guid>
      <pubDate>Thu, 14 Feb 2019 10:21:04 UTC</pubDate>
      <description>Free Software Foundation Europe (FSFE) is celebrating the creators of free software with their I Love Free Software campaign #ilovefs, a social campaign for Valentine’s Day.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://fsfe.org/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Free Software Foundation Europe&lt;/strong&gt;&lt;/a&gt; (FSFE) is celebrating the creators of free software with their I Love Free Software campaign #ilovefs, a social campaign for Valentine’s Day.&lt;/p&gt;
&lt;p&gt;The idea is to show some appreciation to the makers of free software. Most of our communications with free software creators are about bugs and feature requests and maybe we just forget to say “Thanks”. So FSFE are trying to provide some balance.&lt;/p&gt;
&lt;p&gt;FSFE were promoting the campaign at FOSDEM at the beginning of this month. Unfortunately my swag parcel arrived a little late to get our widely distributed remote colleagues the balloons and material to pose with, so you just get me!&lt;/p&gt;
&lt;p&gt;Since getting the swag distributed to my colleagues in time for Valentine’s day was a challenge, I headed to my local University in Aberystwyth, Wales to share the goodies, and encourage final year computer science students to celebrate free software.&lt;/p&gt;
&lt;h4 id="share-the-love"&gt;Share the love…&lt;/h4&gt;
&lt;p&gt;If you’d like to share your appreciation for free software too:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Read more about the campaign at &lt;a href="https://fsfe.org/campaigns/ilovefs/index.en.html" target="_blank" rel="noopener noreferrer"&gt;https://fsfe.org/campaigns/ilovefs/index.en.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Share on social using the hashtag #ilovefs - banners and images can be downloaded from here &lt;a href="https://fsfe.org/campaigns/ilovefs/artwork/artwork.en.html" target="_blank" rel="noopener noreferrer"&gt;https://fsfe.org/campaigns/ilovefs/artwork/artwork.en.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Put the date in your diary for next year to remember to show some love for the work of free software creators across the globe&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And of course, twice a year at &lt;a href="https://www.percona.com/live/19/" target="_blank" rel="noopener noreferrer"&gt;Percona Live Open Source Database Conferences&lt;/a&gt; EVERYONE loves free software… come and share the love! &lt;a href="https://www.percona.com/live/19/" target="_blank" rel="noopener noreferrer"&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/02/percona_ilovefs.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/a&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2019/02/ilovefs-1_hu_9bf1b3d83bc3026b.jpg 480w, https://percona.community/blog/2019/02/ilovefs-1_hu_78609dfc409ac234.jpg 768w, https://percona.community/blog/2019/02/ilovefs-1_hu_e1d6c0f3df1092a7.jpg 1400w"
src="https://percona.community/blog/2019/02/ilovefs-1.jpg" alt=" " /&gt;&lt;/figure&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/02/ilovefs-5.jpg" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;</content:encoded>
      <author>Lorraine Pocklington</author>
      <category>lorraine.pocklington</category>
      <category>Events</category>
      <media:thumbnail url="https://percona.community/blog/2019/02/ilovefs_postcard_hu_40cb9fe7921eecb7.jpg"/>
      <media:content url="https://percona.community/blog/2019/02/ilovefs_postcard_hu_68b3beaa6144164e.jpg" medium="image"/>
    </item>
    <item>
      <title>Writing a Killer Conference Proposal</title>
      <link>https://percona.community/blog/2019/01/03/writing-killer-conference-proposal/</link>
      <guid>https://percona.community/blog/2019/01/03/writing-killer-conference-proposal/</guid>
      <pubDate>Thu, 03 Jan 2019 10:40:41 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2019/01/writing-a-killer-conference-proposal.jpg" alt="writing a killer conference proposal" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;If you’re planning to submit a proposal to &lt;a href="https://www.percona.com/live/19/" target="_blank" rel="noopener noreferrer"&gt;Percona Live&lt;/a&gt; but suffering a little writer’s block, or at least want to be sure to make a good impression on our track selectors, there’s some great content online that can help. If you’re an old hand, you probably won’t need this, though it’s possible you’ll find some interesting stuff here nevertheless.&lt;/p&gt;
&lt;p&gt;Your job is to make it easy for the selectors to choose your talk. Remember, too, that your proposal will be used to ‘sell’ your presentation on the conference website. So try to make it appealing.&lt;/p&gt;
&lt;h3 id="theres-help-online"&gt;There’s help online…&lt;/h3&gt;
&lt;p&gt;Here, I list some articles and presentations that could help you along the way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This article by Russ Unger breaks the task into &lt;a href="https://alistapart.com/article/conference-proposals-that-dont-suck" target="_blank" rel="noopener noreferrer"&gt;steps of a process&lt;/a&gt;… those who pride themselves on their technical aptitude rather than their writing chops might find this structured approach helps.&lt;/li&gt;
&lt;li&gt;Experienced presenter Dave Cheney, open source contributor and project member for the Go programming language, has sat on both sides of the fence (proposing and selecting) and &lt;a href="https://dave.cheney.net/2017/02/12/how-to-write-a-successful-conference-proposal" target="_blank" rel="noopener noreferrer"&gt;offers some great advice&lt;/a&gt;. Dave also references &lt;a href="https://medium.com/@fox/how-to-write-a-successful-conference-proposal-4461509d3e32" target="_blank" rel="noopener noreferrer"&gt;this excellent article&lt;/a&gt; on Medium by Karolina Szczur&lt;/li&gt;
&lt;li&gt;If you prefer to listen and/or watch, then &lt;a href="https://youtu.be/KAzChb4MYCg?t=247" target="_blank" rel="noopener noreferrer"&gt;this workshop presentation&lt;/a&gt; by blogger and MongoDB engineer &lt;a href="https://emptysqua.re/blog/global-diversity-cfp-day-workshop/" target="_blank" rel="noopener noreferrer"&gt;Jesse Davis&lt;/a&gt; for PyLadies Global Diversity CFP Day 2018 might hit the spot. Or perhaps you prefer the style of an &lt;a href="https://youtu.be/OAQAXVU1jIo?t=121" target="_blank" rel="noopener noreferrer"&gt;earlier talk&lt;/a&gt; referenced by Jesse, presented by &lt;a href="https://www.laceyhenschel.com/" target="_blank" rel="noopener noreferrer"&gt;Lacey Williams Henschel&lt;/a&gt;. Lacey is a Python and Django consultant.&lt;/li&gt;
&lt;li&gt;Too late for Percona Live in Austin, but this year’s &lt;a href="https://www.globaldiversitycfpday.com/" target="_blank" rel="noopener noreferrer"&gt;Global Diversity CFP Day&lt;/a&gt; on March 2 could appeal. If you are a confident and experienced presenter, how about setting up a workshop to share your skills? You can find details on the website.&lt;/li&gt;
&lt;li&gt;Last but not least, O’Reilly hosts dozens of conferences every year and &lt;a href="https://www.oreilly.com/conferences/sample_proposals.html" target="_blank" rel="noopener noreferrer"&gt;provides examples&lt;/a&gt; of what they look for in a good proposal.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So sharpen your pencil and go for it – you’ve nothing to lose. Don’t forget, the &lt;a href="https://perconacfp.hubb.me/" target="_blank" rel="noopener noreferrer"&gt;call for papers closes on Sunday, January 20&lt;/a&gt;, so don’t use this as an opportunity to put things off… We hope to release a few talks before the deadline this year – it could be you. Good luck with your submission!&lt;/p&gt;
&lt;p&gt;If you have any great resources to add, please share them via the comments.&lt;/p&gt;
&lt;p&gt;PS If you need any help, you are welcome &lt;a href="mailto:lorraine.pocklington@percona.com"&gt;to drop me a line&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;–&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/photos/K3uOmmlQmOo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Angelina Litvin&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/writing?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Lorraine Pocklington</author>
      <category>lorraine.pocklington</category>
      <category>Entry Level</category>
      <category>Events</category>
      <media:thumbnail url="https://percona.community/blog/2019/01/writing-a-killer-conference-proposal_hu_fba50b67d37578b8.jpg"/>
      <media:content url="https://percona.community/blog/2019/01/writing-a-killer-conference-proposal_hu_dec15045005f7102.jpg" medium="image"/>
    </item>
    <item>
      <title>Some Notes on MariaDB system-versioned Tables</title>
      <link>https://percona.community/blog/2018/12/14/notes-mariadb-system-versioned-tables/</link>
      <guid>https://percona.community/blog/2018/12/14/notes-mariadb-system-versioned-tables/</guid>
      <pubDate>Fri, 14 Dec 2018 14:53:29 UTC</pubDate>
      <description>As mentioned in a previous post, I gave a talk at Percona Live Europe 2018 about system-versioned tables. This is a new MariaDB 10.3 feature, which consists of preserving old versions of a table’s rows. Each version has two timestamps that indicate the start (INSERT, UPDATE) of the validity of that version, and its end (DELETE, UPDATE). As a result, the user is able to query these tables as they appeared at a point in the past, or see how data evolved in a certain time range. An alternative name for this feature is temporal table, and I will use it in the rest of this text.</description>
      <content:encoded>&lt;p&gt;As mentioned in a &lt;a href="https://www.percona.com/community-blog/2018/10/17/percona-live-europe-presents-mariadb-system-versioned-tables/" target="_blank" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;, I gave a talk at &lt;a href="https://www.percona.com/live/e18/sessions/mariadb-system-versioned-tables" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe 2018&lt;/a&gt; about system-versioned tables. This is a new MariaDB 10.3 feature, which consists of preserving old versions of a table’s rows. Each version has two timestamps that indicate the start (INSERT, UPDATE) of the validity of that version, and its end (DELETE, UPDATE). As a result, the user is able to query these tables as they appeared at a point in the past, or see how data evolved in a certain time range. An alternative name for this feature is &lt;em&gt;temporal table&lt;/em&gt;, and I will use it in the rest of this text.
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/12/mariadb-system-versioned-tables.jpg" alt="mariadb system-versioned tables" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In this post, I want to talk a bit about temporal tables best practices. Some of the information that I will provide is not present in &lt;a href="https://mariadb.com/kb/en/library/system-versioned-tables/" target="_blank" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt;; while they are based on my experience and tests, there could be errors. My suggestions for good practices are also based on my experience and opinions, and I don’t consider them as universal truths. If you have different opinions, I hope that you will share them in the comments or as a separate blog post.&lt;/p&gt;
&lt;h2 id="create-temporal-columns"&gt;Create temporal columns&lt;/h2&gt;
&lt;p&gt;It is possible – but optional – to create the columns that contain the timestamps of rows. Since there is no special term for them, I call them &lt;em&gt;temporal columns&lt;/em&gt;. MariaDB allows us to give them any name we like, so I like to use the names valid_from and valid_to, which seem to be some sort of de facto standard in data warehousing. Whichever names you decide to use, I advise you to use them for all your temporal columns and for nothing else, so that the meaning will be clear.
Temporal columns are &lt;em&gt;generated columns&lt;/em&gt;, meaning that their values are generated by MariaDB and cannot be modified by the user. They are also &lt;em&gt;invisible columns&lt;/em&gt;, which means that they can only be read by mentioning them explicitly. In other words, the following query will not return those columns:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT * FROM temporal_table;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Also, that query will only show current versions of the rows. In this way, if we make a table temporal, existing applications and queries will continue to work as before.&lt;/p&gt;
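For reference, here is a minimal sketch of how such a table can be declared with explicit temporal columns (the table and column names are illustrative, following the valid_from/valid_to convention suggested above):

```sql
-- Sketch: a system-versioned table with named temporal columns.
CREATE TABLE temporal_table (
    id         INT PRIMARY KEY,
    data       VARCHAR(100),
    valid_from TIMESTAMP(6) GENERATED ALWAYS AS ROW START,
    valid_to   TIMESTAMP(6) GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (valid_from, valid_to)
) WITH SYSTEM VERSIONING;
```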
&lt;p&gt;But we can still read old versions and obtain their timestamps with a query like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT *, valid_from, valid_to
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    FROM temporal_table FOR SYSTEM_TIME ALL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;    WHERE valid_from &lt; NOW() - INTERVAL 1 MONTH;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;If we don’t create these columns, we will not be able to read the timestamps of current and old row versions. We will still be able to read data from a point in time or from a time range by using some special syntax. However, I believe that using the consolidated WHERE syntax is easier and more expressive than using some syntax sugar.&lt;/p&gt;
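As an illustration, the special syntax mentioned above looks roughly like this (a sketch based on MariaDB’s FOR SYSTEM_TIME clauses; the table name is illustrative):

```sql
-- Point-in-time query using the built-in syntax:
SELECT * FROM temporal_table
FOR SYSTEM_TIME AS OF TIMESTAMP '2018-11-01 00:00:00';

-- Time-range variant:
SELECT * FROM temporal_table
FOR SYSTEM_TIME BETWEEN '2018-10-01' AND '2018-11-01';
```

With explicit valid_from/valid_to columns, the same results can be obtained with ordinary WHERE conditions, as argued above.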
&lt;h2 id="primary-keys"&gt;Primary keys&lt;/h2&gt;
&lt;p&gt;For performance reasons, InnoDB tables should always have a primary key, and normally it shouldn’t be updated. Temporal tables provide another reason to follow this golden rule – even on storage engines that are not organised by primary key, like MyISAM.&lt;/p&gt;
&lt;p&gt;The reason is easy to demonstrate with an example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT id, valid_from, valid_to FROM t FOR SYSTEM_TIME ALL WHERE id IN (500, 501);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----+----------------------------+----------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id  | valid_from                 | valid_to                   |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----+----------------------------+----------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 500 | 2018-12-09 12:22:45.000001 | 2018-12-09 12:23:03.000001 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 501 | 2018-12-09 12:23:03.000001 | 2038-01-19 03:14:07.999999 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----+----------------------------+----------------------------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What do these results mean? Maybe row 500 has been deleted and row 501 has been added. Or maybe row 500 has been modified, and its id became 501. The timestamps suggest that the latter hypothesis is more likely, but there is no way to know that for sure.&lt;/p&gt;
&lt;p&gt;That is why, in my opinion, we need to be able to assume that UPDATEs never touch primary key values.&lt;/p&gt;
&lt;h2 id="indexes"&gt;Indexes&lt;/h2&gt;
&lt;p&gt;Currently, the documentation says nothing about how temporal columns are indexed. However, my conclusion is that the valid_to column is appended to UNIQUE indexes and the primary key. My opinion is based on the results of some EXPLAIN commands, like the following:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EXPLAIN SELECT email, valid_to FROM customer ORDER BY email G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;           id: 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;  select_type: SIMPLE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;        table: customer
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;         type: index
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;possible_keys: NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;          key: unq_email
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;      key_len: 59
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;          ref: NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;         rows: 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;        Extra: Using where; Using index&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This means that the query reads only from the UNIQUE index, and not from table data – therefore, the index also contains the valid_to column. It is also able to use the index for sorting, which confirms that email is the first column (as expected). In this way, UNIQUE indexes don’t prevent the same value from appearing multiple times, but each duplicate is valid at a different point in time.&lt;/p&gt;
&lt;p&gt;It can be a good idea to include valid_to or valid_from in some regular indexes, to optimize queries that use such columns for filtering results.&lt;/p&gt;
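For example, such an index might be added like this (a sketch; the table, column, and index names are illustrative, not from the post):

```sql
-- Hypothetical index to speed up filtering on row expiry time:
ALTER TABLE customer
    ADD INDEX idx_email_valid_to (email, valid_to);
```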
&lt;h2 id="transaction-safe-temporal-tables"&gt;Transaction-safe temporal tables&lt;/h2&gt;
&lt;p&gt;Temporal columns contain timestamps that indicate when a row was INSERTed, UPDATEd, or DELETEd. So, when autocommit is not enabled, temporal columns don’t match the COMMIT time. For most use cases, this behaviour is desirable or at least acceptable. But there are cases when we want to only see committed data, to avoid data inconsistencies that were never seen by applications.&lt;/p&gt;
&lt;p&gt;To do so, we can create a transaction-precise temporal table. This only works with InnoDB – not with RocksDB or TokuDB, even though they support transactions. A transaction-precise temporal table doesn’t contain timestamps; instead, it contains the IDs of the transactions that created and deleted each row version. If you know PostgreSQL, you are probably familiar with the xmin and xmax columns – it’s basically the same idea, except that in PostgreSQL, at some point, autovacuum will make old row versions disappear. Because of the similarity, for transaction-precise temporal tables, I like to call the temporal columns xmin and xmax.&lt;/p&gt;
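A transaction-precise table can be declared by giving the temporal columns the BIGINT UNSIGNED type instead of TIMESTAMP(6). A minimal sketch, with illustrative names following the xmin/xmax convention:

```sql
-- Sketch: transaction-precise system versioning (InnoDB only).
CREATE TABLE some_table (
    id   INT PRIMARY KEY,
    data VARCHAR(100),
    xmin BIGINT UNSIGNED GENERATED ALWAYS AS ROW START,
    xmax BIGINT UNSIGNED GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (xmin, xmax)
) WITH SYSTEM VERSIONING;
```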
&lt;p&gt;From this short description, the astute reader may already see a couple of problems with this approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Temporal tables are based on transaction IDs &lt;strong&gt;or&lt;/strong&gt; on timestamps, not both. There is no way to run a transaction-precise query to extract data that were present one hour ago. But think about it: even if it were possible, it would be at least problematic, because transactions are meant to be concurrent.&lt;/li&gt;
&lt;li&gt;Transaction IDs are written in the binary log, but such information is typically only accessible by DBAs. An analyst (someone who’s typically interested in temporal tables) has no access to transaction IDs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A partial workaround would be to query tables with columns like created_at and modified_at. We can run queries like this:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT created_at, xmin
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; FROM some_table
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; WHERE created_at &gt;= '2018-05-05 16:00:00'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ORDER BY created_at
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; LIMIT 1;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This will return the timestamp of the first row created since ‘2018-05-05 16:00:00’, as well as the id of the transaction which inserted it.&lt;/p&gt;
&lt;p&gt;While this approach could give us the information we need with reasonable extra work, it’s possible that we don’t have such columns, or that rows are not inserted often enough in the tables that have them.&lt;/p&gt;
&lt;p&gt;In this case, we can occasionally record the current timestamp and the current transaction ID in a table. This should allow us to associate a transaction with the timestamp we are interested in. We cannot log every transaction ID for performance reasons, so we can use one of two approaches:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Write the transaction id and the timestamp periodically, for example each minute. This will not create performance problems. On the other hand, we are arbitrarily deciding the granularity of our “log”. This could be acceptable or not.&lt;/li&gt;
&lt;li&gt;Write this information when certain events happen. For example when a product is purchased, or when a user changes their password. This will give us a very precise way to see the data as they appeared during critical events, but will not allow us to investigate with the same precision other types of events.&lt;/li&gt;
&lt;/ul&gt;
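One way to sketch the periodic approach: log the ID of the running InnoDB transaction together with the wall-clock time. The trx_log table below is hypothetical; the lookup relies on information_schema.INNODB_TRX, which exposes the current transaction’s ID:

```sql
-- Hypothetical log table mapping transaction IDs to wall-clock time:
CREATE TABLE trx_log (
    trx_id    VARCHAR(18) NOT NULL PRIMARY KEY,
    logged_at TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6)
);

-- Run periodically (for example, once a minute), inside a transaction:
INSERT INTO trx_log (trx_id)
    SELECT trx_id FROM information_schema.INNODB_TRX
    WHERE trx_mysql_thread_id = CONNECTION_ID();
```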
&lt;h2 id="partitioning"&gt;Partitioning&lt;/h2&gt;
&lt;p&gt;If we look at older implementations of temporal tables in the world of proprietary databases (Db2, SQL Server, Oracle), they generally store historical data in a separate physical table or partition, sometimes called a history table. In MariaDB this doesn’t happen automatically or by default, leaving the choice to the user. However, in the general case it seems to me a good idea to create one or more partitions to store historical rows. The main reason is that a query rarely has to read both historical and current data, so reading only one partition is an interesting optimization.&lt;/p&gt;
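In MariaDB, this separation can be expressed with SYSTEM_TIME partitioning, roughly like this (a sketch; the table and partition names are illustrative):

```sql
-- Keep historical rows in their own partition:
ALTER TABLE temporal_table
    PARTITION BY SYSTEM_TIME (
        PARTITION p_hist HISTORY,
        PARTITION p_cur  CURRENT
    );
```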
&lt;h2 id="excluding-columns-from-versioning"&gt;Excluding columns from versioning&lt;/h2&gt;
&lt;p&gt;MariaDB allows us to exclude some columns from versioning. This means that if we update the values of those columns, we update the current row version in place rather than creating a new one. This is probably useful if a column is frequently updated and we don’t care about these changes. However, if we update several columns in one statement, and only a subset of them is excluded from versioning, a new row version is still created. All in all, the partial exclusion of some columns could be more confusing than useful in several cases.&lt;/p&gt;
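The exclusion is expressed with the WITHOUT SYSTEM VERSIONING clause (a sketch; the table and column names are illustrative):

```sql
-- Updates that touch only hit_count will not create a new row version:
CREATE TABLE page (
    id        INT PRIMARY KEY,
    title     VARCHAR(100),
    hit_count INT WITHOUT SYSTEM VERSIONING
) WITH SYSTEM VERSIONING;
```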
&lt;h2 id="replication"&gt;Replication&lt;/h2&gt;
&lt;p&gt;10.3 is a stable version, but it is still recent. Some of us adopt a new major version after some years, and we can even have reasons to stick with an old version. Furthermore, of course, many of us use MySQL, and MariaDB is not a drop-in replacement.&lt;/p&gt;
&lt;p&gt;But we can still enjoy temporal tables by adding a MariaDB 10.3 slave. I attached such a slave to older MariaDB versions, and to MySQL 5.6. In all tests, the feature behaved as expected.&lt;/p&gt;
&lt;p&gt;Initially, I was worried about replication lags. I assumed that, if replication lags, the slave applies the changes with a delay, and the timestamps in the tables are delayed accordingly. I am glad to say that I was wrong: the timestamps in temporal tables seem to match the ones in the binary log, so replication lags don’t affect their correctness.&lt;/p&gt;
&lt;p&gt;This is true both with row-based replication and with statement-based replication.&lt;/p&gt;
&lt;p&gt;A small caveat about temporal tables is that the version timestamps are only precise to the second: the fractional part should be ignored. You may have noticed this in the example at the beginning of this post.&lt;/p&gt;
&lt;h2 id="backups"&gt;Backups&lt;/h2&gt;
&lt;p&gt;For backups you will need to use &lt;a href="https://mariadb.com/kb/en/library/mariabackup-overview/" target="_blank" rel="noopener noreferrer"&gt;mariabackup&lt;/a&gt; instead of xtrabackup.&lt;/p&gt;
&lt;p&gt;mysqldump can be used, not necessarily from a MariaDB distribution. However, it treats temporal tables as regular tables and does not back up historical data. This follows from a design choice: we cannot insert rows with timestamps in the past, which makes temporal tables much more reliable. Also, temporal tables are likely to be (or become) quite big, so a dump is probably not the best way to back them up.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;–&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/photos/WeYamle9fDM?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Ashim D’Silva&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/canyon?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Federico Razzoli</author>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>system-versioned tables</category>
      <media:thumbnail url="https://percona.community/blog/2018/12/mariadb-system-versioned-tables_hu_67f40cbc67439966.jpg"/>
      <media:content url="https://percona.community/blog/2018/12/mariadb-system-versioned-tables_hu_d2a41d9d1c19f4c6.jpg" medium="image"/>
    </item>
    <item>
      <title>MySQL Setup at Hostinger Explained</title>
      <link>https://percona.community/blog/2018/12/11/mysql-setup-hostinger-explained/</link>
      <guid>https://percona.community/blog/2018/12/11/mysql-setup-hostinger-explained/</guid>
      <pubDate>Tue, 11 Dec 2018 15:27:45 UTC</pubDate>
      <description>Ever wondered how hosting companies manage their MySQL database architecture? At Hostinger, we have various MySQL setups starting from the standalone replica-less instances to Percona XtraDB Cluster (later just PXC), ProxySQL routing-based and even absolutely custom and unique solutions which I’m going to describe in this blog post.</description>
      <content:encoded>&lt;p&gt;Ever wondered how hosting companies manage their MySQL database architecture? At &lt;a href="https://www.hostinger.com/" target="_blank" rel="noopener noreferrer"&gt;Hostinger,&lt;/a&gt; we have various MySQL setups starting from the standalone replica-less instances to &lt;a href="https://www.percona.com/software/mysql-database/percona-xtradb-cluster" target="_blank" rel="noopener noreferrer"&gt;Percona XtraDB Cluster&lt;/a&gt; (later just PXC), &lt;a href="http://www.proxysql.com/" target="_blank" rel="noopener noreferrer"&gt;ProxySQL&lt;/a&gt; routing-based and even absolutely custom and unique solutions which I’m going to describe in this blog post.&lt;/p&gt;
&lt;p&gt;We do not have elephant-sized databases for internal services like API, billing, and clients. Thus almost every decision ends up with high availability as a top priority instead of scalability.&lt;/p&gt;
&lt;p&gt;Still, scaling vertically is good enough for our case, as the database size does not exceed 500GB. One of the top requirements is the ability to access the master node, as our workloads are fairly evenly split between reads and writes.&lt;/p&gt;
&lt;p&gt;Our current setup for storing all the data about the clients, servers and so forth is using PXC formed of three nodes without any geo-replication. All nodes are running in the same datacenter.&lt;/p&gt;
&lt;p&gt;We have plans to migrate this cluster to a geo-replicated cluster across three locations: the United States, the Netherlands, and Singapore. This would allow us to guarantee high availability if one of the locations became unreachable.&lt;/p&gt;
&lt;p&gt;Since PXC uses fully synchronous replication, there will be higher latencies for writes. But the reads will be much quicker because of the local replica in every location.&lt;/p&gt;
&lt;p&gt;We did some research on &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/group-replication.html" target="_blank" rel="noopener noreferrer"&gt;MySQL Group Replication&lt;/a&gt;, but it requires instances to be closer to each other and is more sensitive to latencies.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Group Replication is designed to be deployed in a cluster environment where server instances are very close to each other, and is impacted by both network latency as well as network bandwidth.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;We have used PXC before, so we know how to deal with it in critical circumstances and how to keep it highly available.&lt;/p&gt;
&lt;p&gt;In the &lt;a href="https://www.000webhost.com/" target="_blank" rel="noopener noreferrer"&gt;000webhost.com&lt;/a&gt; project and hAPI (Hostinger API) we use the aforementioned unique solution, which selects the master node using a Layer 3 protocol.&lt;/p&gt;
&lt;p&gt;One of our best friends is BGP, a protocol now old enough to buy its own beer, and we use it a lot. This implementation also uses BGP as the underlying protocol and helps point to the real master node. To speak BGP we use the ExaBGP service and announce the VIP address as anycast from both master nodes.&lt;/p&gt;
&lt;p&gt;You may be asking: how do we make sure MySQL queries go to one and the same instance instead of hitting both? We use &lt;a href="https://zookeeper.apache.org/doc/current/zookeeperOver.html" target="_blank" rel="noopener noreferrer"&gt;Zookeeper’s ephemeral nodes&lt;/a&gt; to acquire a mutually exclusive lock.&lt;/p&gt;
&lt;p&gt;Zookeeper acts like a circuit breaker between the BGP speakers and the MySQL clients. If the lock is acquired, we announce the VIP from the master node and applications send their queries along this path. If the lock is released, another node can take it over and announce the VIP, and the applications will follow the new path without any changes.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/12/mysql-setup-hostinger.jpg" alt="mysql setup at hostinger" /&gt;&lt;/figure&gt;
&lt;em&gt;MySQL Setup at Hostinger&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The second question comes: what conditions should be met to stop announcing the VIP? This can be implemented differently depending on the use case, but we release the lock if the MySQL process is down, using systemd’s &lt;code&gt;Requires&lt;/code&gt; in the unit file of ExaBGP:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Besides, with or without specifying After=, this unit will be stopped if one of the other units is explicitly stopped.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;With &lt;a href="https://www.freedesktop.org/wiki/Software/systemd/" target="_blank" rel="noopener noreferrer"&gt;systemd&lt;/a&gt; we can create a nice dependency tree which ensures all of these conditions are met. Stopping, killing, or even rebooting MySQL will make systemd stop the ExaBGP process and withdraw the VIP announcement. The final result is that a new master is selected.&lt;/p&gt;
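A minimal sketch of what such a unit file could look like (paths and unit names are assumptions, not Hostinger’s actual configuration):

```ini
# /etc/systemd/system/exabgp.service (sketch)
[Unit]
Description=ExaBGP speaker announcing the MySQL master VIP
# If mysql.service stops or fails, systemd stops this unit too,
# withdrawing the VIP announcement:
Requires=mysql.service
After=mysql.service

[Service]
ExecStart=/usr/local/bin/exabgp /etc/exabgp/exabgp.conf
Restart=on-failure
```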
&lt;p&gt;We battle tested those master failovers during our &lt;a href="https://www.hostinger.com/blog/new-network-infrastructure" target="_blank" rel="noopener noreferrer"&gt;Gaming days&lt;/a&gt; and nothing critical was noticed &lt;em&gt;yet&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;If you think good architecture is expensive, try bad architecture 😉&lt;/p&gt;
&lt;p&gt;– &lt;em&gt;This post was originally published at &lt;a href="https://www.hostinger.com/blog/mysql-setup-at-hostinger-explained/" target="_blank" rel="noopener noreferrer"&gt;https://www.hostinger.com/blog/mysql-setup-at-hostinger-explained/&lt;/a&gt; in June 2018. The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Donatas Abraitis</author>
      <category>hosting</category>
      <category>MySQL</category>
      <category>ProxySQL</category>
      <category>Tools</category>
      <category>Zookeeper Cluster</category>
      <media:thumbnail url="https://percona.community/blog/2018/12/mysql-setup-hostinger_hu_acb7bb452c7e7b06.jpg"/>
      <media:content url="https://percona.community/blog/2018/12/mysql-setup-hostinger_hu_d028d027cd277e4d.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Presents: pg_chameleon MySQL to PostgreSQL Replica Made Easy</title>
      <link>https://percona.community/blog/2018/10/26/percona-live-europe-presents-pg_chameleon-mysql-postgresql-replica-made-easy/</link>
      <guid>https://percona.community/blog/2018/10/26/percona-live-europe-presents-pg_chameleon-mysql-postgresql-replica-made-easy/</guid>
      <pubDate>Fri, 26 Oct 2018 14:42:12 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/igor_small.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;What excites me is the possibility that this tool is giving to other people. Also, the challenges I’ve faced and the new ideas for future releases are always a source of interest that keeps me focused on the project. So I’m looking forward to &lt;a href="https://www.percona.com/live/e18/sessions/pgchameleon-mysql-to-postgresql-replica-made-easy" target="_blank" rel="noopener noreferrer"&gt;sharing this with the conference delegates&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;pg_chameleon can achieve two tasks in a very simple way. It can set up a permanent replica between MySQL and PostgreSQL, giving you the freedom to choose the right tool for the right job, or it can migrate multiple schemas to a PostgreSQL database.&lt;/p&gt;
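&lt;p&gt;As a rough sketch of the replica workflow (the &lt;code&gt;default&lt;/code&gt; configuration and &lt;code&gt;mysql&lt;/code&gt; source names here are assumptions, following pg_chameleon’s v2 command-line interface):&lt;/p&gt;

```shell
# Illustrative pg_chameleon workflow -- adapt configuration/source names to your setup.
pip install pg_chameleon
chameleon set_configuration_files                        # create the config skeleton
chameleon create_replica_schema --config default         # service schema in PostgreSQL
chameleon add_source --config default --source mysql     # register the MySQL source
chameleon init_replica --config default --source mysql   # initial data copy
chameleon start_replica --config default --source mysql  # continuous replication
```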
&lt;p&gt;Anybody who wants to extend their database experience, taking the best of the two worlds, or who is seeking a simple way to migrate data with minimal downtime, will find the presentation interesting.&lt;/p&gt;
&lt;h3 id="what-else-am-i-looking-forward-to-at-percona-live-europe"&gt;What else am I looking forward to at Percona Live Europe?&lt;/h3&gt;
&lt;p&gt;I’m looking forward to Bruce Momjian’s &lt;a href="https://www.percona.com/live/e18/sessions/explaining-the-postgres-query-optimizer" target="_blank" rel="noopener noreferrer"&gt;Explaining the Postgres Query Optimizer&lt;/a&gt;, Bo Wang’s &lt;a href="https://www.percona.com/live/e18/sessions/how-we-use-and-improve-percona-xtrabackup-at-alibaba-cloud" target="_blank" rel="noopener noreferrer"&gt;How we use and improve Percona XtraBackup at Alibaba Cloud&lt;/a&gt; and Federico Razzoli’s &lt;a href="https://www.percona.com/live/e18/sessions/mariadb-system-versioned-tables" target="_blank" rel="noopener noreferrer"&gt;MariaDB system-versioned tables&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/blog/2018/08/17/replication-from-percona-server-for-mysql-to-postgresql-using-pg_chameleon/" target="_blank" rel="noopener noreferrer"&gt;Read the Percona blog&lt;/a&gt; about pg_chameleon 
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/postgres-mysql-replication-using-pg_chameleon.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;</content:encoded>
      <author>Federico Campoli</author>
      <category>Events</category>
      <category>MySQL</category>
      <category>Percona Live Europe 2018</category>
      <category>PostgreSQL</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/igor_small_hu_23c337dddb576606.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/igor_small_hu_adc5c6e00e2a4808.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Presents: MariaDB 10.4 Reverse Privileges (DENY)</title>
      <link>https://percona.community/blog/2018/10/23/mariadb-10-4-reverse-privileges-deny/</link>
      <guid>https://percona.community/blog/2018/10/23/mariadb-10-4-reverse-privileges-deny/</guid>
      <pubDate>Tue, 23 Oct 2018 11:41:48 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/MariaDB-Foundation-vertical.png" alt="MariaDB Foundation" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;One of the most common questions about privileges in MySQL and MariaDB is how a user would revoke access to a particular table, in a large database with hundreds or thousands of tables, while keeping the rest available. Currently, there is no easy solution: you have to grant access to everything else, individually. Not only does this reduce server performance, but it is a nightmare to maintain. Reverse privileges solve this and more. And they are simple to explain to new admins too! So I look forward to sharing the knowledge during &lt;a href="https://www.percona.com/live/e18/sessions/mariadb-104-reverse-privileges-deny" target="_blank" rel="noopener noreferrer"&gt;my presentation at PLE18&lt;/a&gt;.&lt;/p&gt;
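&lt;p&gt;To illustrate the idea, here is a sketch of how reverse privileges could look (the syntax below is hypothetical, since the feature is still under development):&lt;/p&gt;

```sql
-- Today: hiding one table means granting every other table individually.
-- With reverse privileges, one broad GRANT plus one DENY would suffice:
GRANT SELECT ON db.* TO 'analyst'@'%';
DENY SELECT ON db.payroll TO 'analyst'@'%';  -- hypothetical syntax, not final
```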
&lt;p&gt;&lt;strong&gt;DBAs&lt;/strong&gt; would benefit from this talk the most. As it is a feature still under development, we are open to input from the community. Tell us what you think we should do to make this feature the best it can be.&lt;/p&gt;
&lt;h2 id="what-im-looking-forward-to"&gt;What I’m looking forward to…&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/vicentiu_ciorbaru-m18-2s.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It will be quite interesting to see what challenges people have faced with MySQL and MariaDB and how they were overcome. As a database developer, it’s always important to understand how your users make use of the product. It is only through this that we can make it better.&lt;/p&gt;</content:encoded>
      <author>Vicențiu Ciorbaru</author>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>Percona Live Europe 2018</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/MariaDB-Foundation-vertical_hu_752fc1922c64346b.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/MariaDB-Foundation-vertical_hu_17205386f99e3d54.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Presents: Need for speed - Boosting Apache Cassandra's performance using Netty</title>
      <link>https://percona.community/blog/2018/10/22/percona-live-europe-presents-need-speed-boosting-apache-cassandras-performance-using-netty/</link>
      <guid>https://percona.community/blog/2018/10/22/percona-live-europe-presents-need-speed-boosting-apache-cassandras-performance-using-netty/</guid>
      <pubDate>Mon, 22 Oct 2018 08:34:35 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/apache-cassandra-logo-3.png" alt=" " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;My talk is titled &lt;a href="https://www.percona.com/live/e18/sessions/need-for-speed-boosting-apache-cassandras-performance-using-netty" target="_blank" rel="noopener noreferrer"&gt;Need for speed: Boosting Apache Cassandra’s performance using Netty&lt;/a&gt;. Over the years that I have worked in the software industry, making code run fast has fascinated me. So, naturally, when I first started contributing to Apache Cassandra, I started looking for opportunities to improve its performance. My talk takes us through some interesting challenges within a distributed system like &lt;a href="http://cassandra.apache.org/" target="_blank" rel="noopener noreferrer"&gt;Apache Cassandra&lt;/a&gt; and various techniques to significantly improve its performance. Talking about performance is incredibly exciting because you can easily quantify and see the results. Making improvements to the database’s performance not only improves the user experience but also reflects positively on the organization’s bottom line. It also has the added benefit of pushing the boundaries of scale. Furthermore, my talk spans beyond Apache Cassandra and is generally applicable to writing performant networking applications in Java.&lt;/p&gt;
&lt;h2 id="whod-benefit-most-from-the-presentation"&gt;Who’d benefit most from the presentation?&lt;/h2&gt;
&lt;p&gt;My talk is oriented primarily towards developers and operators. Although Apache Cassandra is written in Java and we talk about Netty, there is plenty in the talk that is generic, and the lessons learned could be applied to any distributed system. I think developers with various experience levels would benefit from the talk. However, intermediate developers would benefit the most.&lt;/p&gt;
&lt;h2 id="what-im-most-looking-forward-to-at-ple-18"&gt;What I’m most looking forward to at PLE ‘18…&lt;/h2&gt;
&lt;p&gt;There are many interesting sessions at the conference. Here are some of them:&lt;/p&gt;
&lt;h4 id="performance-analyses-technologies-for-databases"&gt;&lt;a href="https://www.percona.com/live/e18/sessions/performance-analyses-technologies-for-databases" target="_blank" rel="noopener noreferrer"&gt;Performance Analyses Technologies for Databases&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;As I mentioned, I am a big performance geek, and in this talk Peter is going to cover various methods of data infrastructure performance analysis, including monitoring.&lt;/p&gt;
&lt;h4 id="securing-access-to-facebook"&gt;&lt;a href="https://www.percona.com/live/e18/sessions/securing-access-to-facebooks-databases" target="_blank" rel="noopener noreferrer"&gt;Securing Access to Facebook’s Databases&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;This is an interesting session from a security standpoint. Andrew is talking about securing access to MySQL. As most people know, Facebook has a huge MySQL deployment, and as security and privacy have become prime concerns, we see a lot of movement towards encryption. This talk is going to be particularly interesting because Facebook is using x509 client certs to authenticate. This is a non-trivial challenge for anybody at scale.&lt;/p&gt;
&lt;h4 id="tls-for-mysql-at-large-scale"&gt;&lt;a href="https://www.percona.com/live/e18/sessions/tls-for-mysql-at-large-scale" target="_blank" rel="noopener noreferrer"&gt;TLS for MySQL at large scale&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;This talk from Wikipedia is along similar lines to the previous one. It just goes to emphasize the importance of security in today’s climate. What’s interesting is that both Wikipedia and Facebook are talking about it! I am curious to find out what sort of privacy challenges Wikipedia is solving.&lt;/p&gt;
&lt;h4 id="advanced-mysql-data-at-rest-encryption-in-percona-server"&gt;&lt;a href="https://www.percona.com/live/e18/sessions/advanced-mysql-data-at-rest-encryption-in-percona-server" target="_blank" rel="noopener noreferrer"&gt;Advanced MySQL Data at Rest Encryption in Percona Server&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;Another security-related talk! This one’s about encryption at rest. This is interesting in and of itself, as we tend to talk a lot about security in transit and less often about security of data at rest. I hope to learn more about the cost of implementing encryption at rest and its impact on database performance, operations, and security.&lt;/p&gt;
&lt;h4 id="artificial-intelligence-database-performance-tuning"&gt;&lt;a href="https://www.percona.com/live/e18/sessions/artificial-intelligence-database-performance-tuning" target="_blank" rel="noopener noreferrer"&gt;Artificial Intelligence Database Performance Tuning&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;I think this is an exciting time for the database industry: not only have we seen a large increase in data volumes, but user expectations around performance have also gone up. So, can AI help us tune our databases? Tuning has traditionally been the domain of an experienced DBA, but I think AI can help us deliver better performance. This talk is about using genetic algorithms to tune database performance. I am curious to find out how these algorithms are applied to tune databases.&lt;/p&gt;
      <author>Dinesh Joshi</author>
      <category>DevOps</category>
      <category>Events</category>
      <category>MySQL</category>
      <category>Percona Live Europe 2018</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/apache-cassandra-logo-3_hu_5fd86578ee1f1a14.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/apache-cassandra-logo-3_hu_5a14f5eb72076809.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Presents: The Latest MySQL Replication Features</title>
      <link>https://percona.community/blog/2018/10/19/latest-mysql-replication-features/</link>
      <guid>https://percona.community/blog/2018/10/19/latest-mysql-replication-features/</guid>
      <pubDate>Fri, 19 Oct 2018 15:06:19 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/PLE-Frankfurt-Logo.png" alt="PLE Frankfurt Logo" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In the modern world of technology, where distributed systems play a key role, replication in MySQL® is at the very heart of that change. It is very exciting to deliver &lt;a href="https://www.percona.com/live/e18/sessions/the-latest-mysql-replication-features" target="_blank" rel="noopener noreferrer"&gt;this presentation&lt;/a&gt; and to show everyone the latest and greatest features that MySQL brings in order to continue the success it has always enjoyed.&lt;/p&gt;
&lt;p&gt;The talk is suitable for anyone interested in knowing what Oracle is doing with MySQL replication. Old acquaintances will become familiar with new features, both already delivered and under consideration, and newcomers to the MySQL ecosystem will see how great MySQL Replication has grown to be and how it fits their business.&lt;/p&gt;
&lt;h2 id="what-im-most-looking-forward-to-at-percona-live-europe"&gt;What I’m most looking forward to at Percona Live Europe…&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/tiago-jorge.jpg" alt="tiago jorge" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We are always eager to get feedback about the product.&lt;/p&gt;
&lt;p&gt;Moreover, MySQL being MySQL has a very large user base and, as such, is deployed and used in many different ways. It is very appealing and useful to continuously learn how our customers and users are making the most out of the product. Especially when it comes to replication, since the MySQL replication infrastructure is an enabler for advanced and complex setups, making it a powerful and indispensable tool in virtually any setup nowadays.&lt;/p&gt;
      <author>Tiago Jorge</author>
      <category>Events</category>
      <category>MySQL</category>
      <category>Oracle</category>
      <category>Percona Live Europe 2018</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/PLE-Frankfurt-Logo_hu_b6a203b169366d6e.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/PLE-Frankfurt-Logo_hu_f7acbdfbcdd4a32e.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Presents: MariaDB System-Versioned Tables</title>
      <link>https://percona.community/blog/2018/10/17/percona-live-europe-presents-mariadb-system-versioned-tables/</link>
      <guid>https://percona.community/blog/2018/10/17/percona-live-europe-presents-mariadb-system-versioned-tables/</guid>
      <pubDate>Wed, 17 Oct 2018 16:14:38 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/PLE-Frankfurt-Logo.png" alt="PLE Frankfurt Logo" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;System-versioned tables, or temporal tables, are a typical feature of proprietary database management systems like DB2, Oracle and SQL Server. They also appeared at some point in PostgreSQL, but only as an extension; and also in CockroachDB, but in a somewhat limited fashion.&lt;/p&gt;
&lt;p&gt;The MariaDB® implementation is the first appearance of temporal tables in the MySQL ecosystem, and the most complete implementation in the open source world.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/live/e18/sessions/mariadb-system-versioned-tables" target="_blank" rel="noopener noreferrer"&gt;My presentation&lt;/a&gt; will be useful for &lt;strong&gt;analysts&lt;/strong&gt;, and some &lt;strong&gt;managers&lt;/strong&gt;, who will definitely benefit from learning how to use temporal tables. Statistics about how data evolves over time are an important part of their job. This feature will allow them to query data as it was at a certain point in time, or to query how data changed over a period, including rows that were added, deleted or modified.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Developers&lt;/strong&gt; will also find this feature useful, if they deal with data versioning or auditing. Recording the evolution of data into a database is not easy - several solutions are possible, but none is perfect. Streaming data changes to some event-based technology is also complex, and sometimes it’s simply a waste of resources. System-versioned tables are a good solution for many use cases.&lt;/p&gt;
&lt;p&gt;And of course, &lt;strong&gt;DBAs&lt;/strong&gt;, who will need to know what this feature is about, suggest it when appropriate, and maintain it in production systems.&lt;/p&gt;
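&lt;p&gt;For a flavour of the feature, here is a minimal example using MariaDB’s system-versioning syntax (the table and timestamps are made up):&lt;/p&gt;

```sql
-- Create a table that keeps every historical row version.
CREATE TABLE prices (
    item  VARCHAR(50),
    price DECIMAL(10,2)
) WITH SYSTEM VERSIONING;

INSERT INTO prices VALUES ('widget', 8.99);
UPDATE prices SET price = 9.99 WHERE item = 'widget';

-- Query the data as it was at a given point in time...
SELECT * FROM prices FOR SYSTEM_TIME AS OF TIMESTAMP '2018-10-01 00:00:00';

-- ...or every row version that existed during a period.
SELECT * FROM prices
FOR SYSTEM_TIME BETWEEN TIMESTAMP '2018-09-01 00:00:00'
                    AND TIMESTAMP '2018-10-17 00:00:00';
```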
&lt;p&gt;More generally, many people are interested in understanding MariaDB’s unique features, as well as those it shares with MySQL. This approach allows them to choose “the right tool for the right purpose”.&lt;/p&gt;
&lt;h4 id="what-im-looking-forward-to"&gt;What I’m looking forward to…&lt;/h4&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/federico-razzoli.jpg" alt="federico razzoli" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I am excited about the Percona Live agenda. A session that I definitely want to attend is &lt;strong&gt;&lt;a href="https://www.percona.com/live/e18/sessions/demystifying-mysql-replication-crash-safety" target="_blank" rel="noopener noreferrer"&gt;MySQL Replication Crash Safety&lt;/a&gt;&lt;/strong&gt;. I find talks about technology limitations and flaws extremely useful and interesting. Jean-François has a long series of writings on MySQL replication and crash-safety, and I have questions for him.&lt;/p&gt;
&lt;p&gt;I also like the evolution that PMM and its components have had over the years. I want to understand how to use them best in my new job, so I am glad to see that there will be several sessions on the topic. I plan to attend some sessions about PMM and Prometheus.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.percona.com/live/e18/sessions/performance-analyses-technologies-for-databases" target="_blank" rel="noopener noreferrer"&gt;Performance Analyses Technologies for Databases&lt;/a&gt;&lt;/strong&gt; makes me think of the cases when I saw a technology evaluated in an inappropriate way, and of conversations with people impressed by blog posts showing spectacular benchmarks which they didn’t fully understand. I will definitely attend.&lt;/p&gt;
&lt;p&gt;And finally, I plan to learn something about &lt;strong&gt;&lt;a href="https://www.percona.com/live/e18/sessions/advanced-features-of-clickhouse" target="_blank" rel="noopener noreferrer"&gt;ClickHouse&lt;/a&gt;&lt;/strong&gt;, &lt;a href="https://www.percona.com/live/e18/sessions/myrocks-production-case-studies-at-facebook" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;MyRocks&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://www.percona.com/live/e18/sessions/tidb-distributed-horizontally-scalable-mysql-compatible" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;TiDB&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;See you there!&lt;/p&gt;</content:encoded>
      <author>Federico Razzoli</author>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>Percona Live Europe 2018</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/PLE-Frankfurt-Logo_hu_b6a203b169366d6e.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/PLE-Frankfurt-Logo_hu_f7acbdfbcdd4a32e.jpg" medium="image"/>
    </item>
    <item>
      <title>Export to JSON from MySQL All Ready for MongoDB</title>
      <link>https://percona.community/blog/2018/10/16/export-to-json-from-mysql-all-ready-for-mongodb/</link>
      <guid>https://percona.community/blog/2018/10/16/export-to-json-from-mysql-all-ready-for-mongodb/</guid>
      <pubDate>Tue, 16 Oct 2018 15:18:36 UTC</pubDate>
      <description>This post walks through how to export data from MySQL® into JSON format, ready to ingest into MongoDB®. Starting with MySQL 5.7, there is native support for JSON. MySQL provides functions that create JSON values, so I will be using these functions in this article to export to JSON from MySQL:</description>
      <content:encoded>&lt;p&gt;This post walks through how to export data from &lt;a href="https://dev.mysql.com/" target="_blank" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt;® into JSON format, ready to ingest into &lt;a href="https://www.mongodb.com/" target="_blank" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt;®. Starting with MySQL 5.7, there is native support for JSON. MySQL provides functions that create JSON values, so I will be using these functions in this article to export to JSON from MySQL:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;JSON_OBJECT&lt;/li&gt;
&lt;li&gt;JSON_ARRAY&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These functions make it easy to convert MySQL data to JSON e.g.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SELECT json_object('employee_id', emp_no, 'first_name', first_name ) AS 'JSON' FROM employees LIMIT 2;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| JSON |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| {"first_name": "Aamer", "employee_id": 444117} |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| {"first_name": "Aamer", "employee_id": 409151} |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------------------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2 rows in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this article, I will be using the employees sample database available from here: &lt;a href="https://dev.mysql.com/doc/employee/en/employees-installation.html" target="_blank" rel="noopener noreferrer"&gt;https://dev.mysql.com/doc/employee/en/employees-installation.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You can find the employees schema on &lt;a href="https://dev.mysql.com/doc/employee/en/images/employees-schema.png" target="_blank" rel="noopener noreferrer"&gt;dev.mysql.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When mapping relational tables to collections, there is generally no one-to-one mapping; you will often want to merge data from several MySQL tables into a single collection.&lt;/p&gt;
&lt;h2 id="export-data-to-json-format"&gt;Export data to JSON format&lt;/h2&gt;
&lt;p&gt;To export data, I have constructed the following SQL (the data is combined from 3 different tables: employees, salaries, and departments):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT json_pretty(json_object(
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'emp_no', emp.emp_no,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'first_name', emp.first_name,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'last_name', emp.last_name,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'hire_date',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;json_object("$date", DATE_FORMAT(emp.hire_date,'%Y-%m-%dT%TZ')),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'Department', JSON_ARRAY(json_object('dept_id', dept.dept_no, 'dept_name', dept.dept_name)),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'Salary', s.salary)) AS 'json'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM employees emp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INNER JOIN salaries s ON s.emp_no=emp.emp_no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INNER JOIN current_dept_emp c on c.emp_no = emp.emp_no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INNER JOIN departments dept on dept.dept_no = c.dept_no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;LIMIT 1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Output:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;json: {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"Salary": 60117,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"emp_no": 10001,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"hire_date": "1986-06-26",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"last_name": "Facello",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"Department": [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"dept_id": "d005",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"dept_name": "Development"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"first_name": "Georgi"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You can see from this that json_object did not convert the ‘hire_date’ column value into a format compatible with MongoDB. We have to convert the date into ISODate format:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; select json_object('hire_date', hire_date) AS "Original Date", json_object('hire_date', DATE_FORMAT(hire_date,'%Y-%m-%dT%TZ')) AS "ISODate" from employees limit 1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------+---------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Original Date | ISODate |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------+---------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| {"hire_date": "1985-01-01"} | {"hire_date": "1985-01-01T00:00:00Z"} |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-----------------------------+---------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Next, we dump the output to a file (the above query is slightly modified) e.g.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT json_object(
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'emp_no', emp.emp_no,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'first_name', emp.first_name,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'last_name', emp.last_name,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'hire_date', json_object("$date", DATE_FORMAT(emp.hire_date,'%Y-%m-%dT%TZ')),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'Department', JSON_ARRAY(json_object('dept_id', dept.dept_no, 'dept_name', dept.dept_name)),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;'Salary', s.salary) as 'json'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/employees.json' ## IMPORTANT you may want to adjust outfile path here
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FROM employees emp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INNER JOIN salaries s ON s.emp_no=emp.emp_no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INNER JOIN current_dept_emp c on c.emp_no = emp.emp_no
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INNER JOIN departments dept on dept.dept_no = c.dept_no&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="importing-data"&gt;Importing data&lt;/h2&gt;
&lt;p&gt;To load the file employees.json  into MongoDB, I use the &lt;a href="https://docs.mongodb.com/manual/reference/program/mongoimport/" target="_blank" rel="noopener noreferrer"&gt;mongoimport&lt;/a&gt; utility.  It’s a multi-threaded tool that can load large files efficiently.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# mongoimport --db test --collection employees --drop &lt; employees.json
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:30.401+0100 connected to: localhost
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:30.401+0100 dropping: test.employees
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:33.400+0100 test.employees 34.0MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:36.401+0100 test.employees 67.3MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:39.399+0100 test.employees 100MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:42.400+0100 test.employees 134MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:45.401+0100 test.employees 168MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:48.402+0100 test.employees 202MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:51.402+0100 test.employees 235MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:54.400+0100 test.employees 269MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:32:57.400+0100 test.employees 303MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:00.403+0100 test.employees 335MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:03.404+0100 test.employees 368MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:06.399+0100 test.employees 397MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:09.400+0100 test.employees 430MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:12.400+0100 test.employees 465MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:15.403+0100 test.employees 499MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:18.401+0100 test.employees 530MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:18.589+0100 test.employees 533MB
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;2018-10-05T12:33:18.589+0100 imported 2844047 documents&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="validate"&gt;Validate&lt;/h2&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&gt; db.employees.find({}).pretty()
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"_id" : ObjectId("5bb740cfd73e26bf45435181"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"Salary" : 60117,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"emp_no" : 10001,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"hire_date" : ISODate("1986-06-26T00:00:00Z"),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"last_name" : "Facello",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"Department" : [
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"dept_id" : "d005",
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"dept_name" : "Development"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;],
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;"first_name" : "Georgi"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We have successfully migrated some data from MySQL to MongoDB!&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Aftab Khan</author>
      <category>Entry Level</category>
      <category>export data</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>tools</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/export-data-to-JSON-from-MySQL_hu_42c14ff7c0d70c61.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/export-data-to-JSON-from-MySQL_hu_db9c8048c3d6f089.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Session: What's New in MariaDB Server 10.3</title>
      <link>https://percona.community/blog/2018/10/16/percona-live-europe-session-whats-new-mariadb-server-10-3/</link>
      <guid>https://percona.community/blog/2018/10/16/percona-live-europe-session-whats-new-mariadb-server-10-3/</guid>
      <pubDate>Tue, 16 Oct 2018 12:42:19 UTC</pubDate>
      <description>Having spent my recent years “in the real world”, working with many users, I’ve learnt that a particular new feature does not necessarily excite users as much as one might expect. MariaDB 10.3, however, has some very interesting features that users genuinely get excited about.</description>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/MariaDB-Foundation-vertical.png" alt="MariaDB Foundation" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Having spent my recent years “in the real world”, working with many users, I’ve learnt that a particular new feature does not necessarily excite users as much as one might expect. MariaDB 10.3, however, has some very interesting features that users genuinely get excited about.&lt;/p&gt;
&lt;p&gt;So that’s great!&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/live/e18/sessions/whats-new-in-and-around-mariadb-server-103" target="_blank" rel="noopener noreferrer"&gt;My session at Percona Live Europe in Frankfurt&lt;/a&gt; is going to be best for people deploying MariaDB or related infra, who haven’t had a chance to explore what the various features actually mean, or what they can do with them. The presentation will provide some practical examples to guide that process.&lt;/p&gt;
&lt;h3 id="what-im-most-looking-forward-to"&gt;What I’m most looking forward to…&lt;/h3&gt;
&lt;p&gt;Given my new position as CEO of the &lt;a href="https://mariadb.org/about/" target="_blank" rel="noopener noreferrer"&gt;MariaDB Foundation&lt;/a&gt;, I’m most looking forward to meeting lots of people. Many I know from way back, and it will be good to catch up; others I haven’t met yet. The program looks fabulous, but I expect to spend a lot of time in the “hallway track”, and to do a lot of listening.&lt;/p&gt;
&lt;p&gt;Arjen Lentz is an old hand from the early and golden MySQL AB eras. After the acquisition of his company Open Query, he is once again accumulating jetlag, this time as CEO of the MariaDB Foundation, eager to meet people the world over and talk about the MariaDB ecosystem.&lt;/p&gt;
&lt;p&gt;Arjen was responding to our questions about his session at the forthcoming &lt;a href="https://www.percona.com/live/e18/" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe 2018&lt;/a&gt; conference in Frankfurt.&lt;/p&gt;</content:encoded>
      <author>Arjen Lentz</author>
      <category>Events</category>
      <category>MariaDB</category>
      <category>Percona Live Europe 2018</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/MariaDB-Foundation-vertical_hu_752fc1922c64346b.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/MariaDB-Foundation-vertical_hu_17205386f99e3d54.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Tutorial: Query Optimization and TLS at Large Scale</title>
      <link>https://percona.community/blog/2018/10/15/percona-live-europe-tutorial-query-optimization-workshop-tls-large-scale-session/</link>
      <guid>https://percona.community/blog/2018/10/15/percona-live-europe-tutorial-query-optimization-workshop-tls-large-scale-session/</guid>
      <pubDate>Mon, 15 Oct 2018 14:05:06 UTC</pubDate>
      <description>For Percona Live Europe this year, I had two proposals accepted: a workshop on query optimization and a 50-minute talk covering TLS for MySQL at Large Scale, drawing on our experiences at the Wikimedia Foundation.</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://www.percona.com/live/e18/sessions/tls-for-mysql-at-large-scale" target="_blank" rel="noopener noreferrer"&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/MySQL-at-scale.jpg" alt="MySQL has many ways to provide scalability, but can it provide it while at the same time guarantee perfect privacy? Learn it at my tutorial!" /&gt;&lt;/figure&gt;&lt;/a&gt; For Percona Live Europe this year, &lt;a href="https://www.percona.com/live/e18/speaker/jaime-crespo" target="_blank" rel="noopener noreferrer"&gt;I got accepted&lt;/a&gt; a workshop on query optimization and a 50-minute talk covering TLS for MySQL at Large Scale, talking about our experiences at the &lt;a href="https://wikimediafoundation.org/" target="_blank" rel="noopener noreferrer"&gt;Wikimedia Foundation&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="workshop"&gt;Workshop&lt;/h3&gt;
&lt;p&gt;The 3-hour workshop on Monday, titled &lt;a href="https://www.percona.com/live/e18/sessions/query-optimization-with-mysql-80-and-mariadb-103-the-basics" target="_blank" rel="noopener noreferrer"&gt;&lt;em&gt;&lt;strong&gt;Query Optimization with MySQL 8.0 and MariaDB 10.3: The Basics&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt;, is a beginners’ tutorial, though dense in content. It’s for people who are more familiar with storage systems other than InnoDB for MySQL, MariaDB or Percona Server, or who, though already familiar with them, are suffering performance and scaling issues with their SQL queries. If you get confused by the output of basic commands like EXPLAIN and SHOW STATUS and want to learn some SQL-level optimizations, such as creating the right indexes or altering the schema to get the most performance out of your database server, then you will want to attend this tutorial before moving on to more advanced topics. Even veteran DBAs and developers may learn one or two new tricks, only available in the latest server versions!&lt;/p&gt;
&lt;p&gt;Something that people may enjoy is that, during the tutorial, every attendee will be able to throw queries at a real-time copy of the Wikipedia database servers, or set up their own offline Wikipedia copy on their laptop. They’ll practice for themselves what is being explained, so it will be fully hands-on. I like my sessions to be interactive, so all attendees should get ready to answer questions and think through the proposed problems by themselves!&lt;/p&gt;
&lt;h3 id="fifty-minutes-talk"&gt;Fifty minutes talk&lt;/h3&gt;
&lt;p&gt;My 50-minute talk &lt;a href="https://www.percona.com/live/e18/sessions/tls-for-mysql-at-large-scale" target="_blank" rel="noopener noreferrer"&gt;&lt;em&gt;&lt;strong&gt;TLS for MySQL at Large Scale&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt; will be a bit more advanced, although maybe more attractive to users of other database technologies. On Tuesday, I will tell the tale of the mistakes and lessons learned while deploying encryption (TLS/SSL) for the replication, administration, and client connections of our databases. At the Wikimedia Foundation we take the privacy of our users—Wikipedia readers, project contributors, data reusers, and every member of our community—very seriously, and while none of our databases are publicly reachable, our aim is to encrypt every single connection between servers, even within our datacenters.&lt;/p&gt;
&lt;p&gt;However, when people talk about security topics, most of the time they are trying to show off the good parts of their setup while hiding the ugly ones. Or maybe they are too theoretical for anyone to actually learn something. My focus will not be on the security principles everybody should follow, but on the purely operational problems and the solutions we needed to deploy, as well as what we would have done differently had we known then what we know now, while deploying TLS on our 200+ MariaDB server pool.&lt;/p&gt;
&lt;h3 id="looking-forward"&gt;Looking forward…&lt;/h3&gt;
&lt;p&gt;For me, as an attendee, I always look forward to the &lt;a href="https://www.percona.com/live/e18/speaker/ren-canna" target="_blank" rel="noopener noreferrer"&gt;ProxySQL sessions&lt;/a&gt;, as it is something we are currently deploying in production. Also, I want to know more about the maturity and roadmap of the newest &lt;a href="https://www.percona.com/live/e18/sessions/mysql-80-performance-scalability-benchmarks" target="_blank" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt; and &lt;a href="https://www.percona.com/live/e18/sessions/whats-new-in-and-around-mariadb-server-103" target="_blank" rel="noopener noreferrer"&gt;MariaDB&lt;/a&gt; releases, as they keep adding interesting new features we need, as well as cluster technologies such as Galera and &lt;a href="https://www.percona.com/live/e18/sessions/the-latest-mysql-replication-features" target="_blank" rel="noopener noreferrer"&gt;InnoDB Cluster&lt;/a&gt;. I also like to talk with people developing and using other technologies outside of my stack; you never know when they will fill a need we have (&lt;a href="https://www.percona.com/live/e18/sessions/clickhouse-at-messagebird-analysing-billions-of-events-in-real-time" target="_blank" rel="noopener noreferrer"&gt;analytics&lt;/a&gt;, &lt;a href="https://www.percona.com/live/e18/sessions/myrocks-production-case-studies-at-facebook" target="_blank" rel="noopener noreferrer"&gt;compression&lt;/a&gt;, &lt;a href="https://www.percona.com/live/e18/sessions/sharedrocks-a-scalable-master-slave-replication-with-rocksdb-and-shared-file-storage" target="_blank" rel="noopener noreferrer"&gt;NoSQL&lt;/a&gt;, etc.).&lt;/p&gt;
&lt;p&gt;But above all, the thing I enjoy the most is the networking—being able to talk with professionals who suffer the same problems that I do is something I normally cannot do, and that I enjoy a lot during Percona Live.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/jaime_crespo_2018.jpeg" alt="Jaime Crespo" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Jaime Crespo in a Percona Live T-Shirt - why not come to this year’s event and start YOUR collection.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Jaime Crespo</author>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Percona Live Europe 2018</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/MySQL-at-scale_hu_3c5128ac9f54aa12.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/MySQL-at-scale_hu_ec646f6415dc7148.jpg" medium="image"/>
    </item>
    <item>
      <title>Generating Identifiers – from AUTO_INCREMENT to Sequence</title>
      <link>https://percona.community/blog/2018/10/12/generating-identifiers-auto_increment-sequence/</link>
      <guid>https://percona.community/blog/2018/10/12/generating-identifiers-auto_increment-sequence/</guid>
      <pubDate>Fri, 12 Oct 2018 11:00:58 UTC</pubDate>
      <description>There are a number of options for generating ID values for your tables. In this post, Alexey Mikotkin of Devart explores your choices for generating identifiers with a look at auto_increment, triggers, UUID and sequences.</description>
      <content:encoded>&lt;p&gt;There are a number of options for generating ID values for your tables. In this post, Alexey Mikotkin of Devart explores your choices for generating identifiers with a look at auto_increment, triggers, UUID and sequences.&lt;/p&gt;
&lt;h2 id="auto_increment"&gt;AUTO_INCREMENT&lt;/h2&gt;
&lt;p&gt;We frequently need to fill tables with unique identifiers. Naturally, the first example of such identifiers is PRIMARY KEY data. These are usually integer values hidden from the user, since their specific values are unimportant.&lt;/p&gt;
&lt;p&gt;When adding a row to a table, you need to take this new key value from somewhere. You can set up your own process for generating a new identifier, but MySQL comes to the aid of the user with the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html" target="_blank" rel="noopener noreferrer"&gt;AUTO_INCREMENT&lt;/a&gt; column setting. It is set as a column attribute and allows you to generate unique integer identifiers. As an example, consider the &lt;code&gt;users&lt;/code&gt; table, whose primary key includes an &lt;code&gt;id&lt;/code&gt; column of type INT:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE users (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id int NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; first_name varchar(100) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; last_name varchar(100) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; email varchar(254) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (id)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Inserting a NULL value into the &lt;code&gt;id&lt;/code&gt; field leads to the generation of a unique value; inserting a 0 value is also possible unless the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_no_auto_value_on_zero" target="_blank" rel="noopener noreferrer"&gt;NO_AUTO_VALUE_ON_ZERO&lt;/a&gt; Server SQL Mode is enabled:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO users(id, first_name, last_name, email) VALUES (NULL, 'Simon', 'Wood', 'simon@testhost.com');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO users(id, first_name, last_name, email) VALUES (0, 'Peter', 'Hopper', 'peter@testhost.com');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It is possible to omit the &lt;code&gt;id&lt;/code&gt; column. The same result is obtained with:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO users(first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO users(first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The selection will provide the following result:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/select-from-users-table.png" alt="select from users table in dbForge studio" /&gt;&lt;/figure&gt;
&lt;em&gt;Select from users table shown in dbForge Studio&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;You can get the automatically generated value using the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id" target="_blank" rel="noopener noreferrer"&gt;LAST_INSERT_ID()&lt;/a&gt; session function. This value can be used to insert a new row into a related table.&lt;/p&gt;
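&lt;p&gt;As a brief sketch of that pattern (the &lt;code&gt;orders&lt;/code&gt; table here is hypothetical, purely for illustration), the freshly generated key can be reused in a related table like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;INSERT INTO users(first_name, last_name, email) VALUES ('Ann', 'Smith', 'ann@testhost.com');
-- LAST_INSERT_ID() is maintained per session, so inserts from other
-- connections cannot change the value we read here.
INSERT INTO orders(user_id, order_total) VALUES (LAST_INSERT_ID(), 10.00);&lt;/code&gt;&lt;/pre&gt;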
&lt;p&gt;There are some aspects to consider when using AUTO_INCREMENT. Here are a few:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If a data insertion transaction is rolled back, no data will be added to the table. However, the AUTO_INCREMENT counter still increases, so the next time you insert a row, a hole will appear in the sequence of identifiers.&lt;/li&gt;
&lt;li&gt;In the case of multiple data inserts with a single INSERT command, the LAST_INSERT_ID() function will return an automatically generated value for the first row.&lt;/li&gt;
&lt;li&gt;The problem with the AUTO_INCREMENT counter value is described in &lt;a href="https://bugs.mysql.com/bug.php?id=199" target="_blank" rel="noopener noreferrer"&gt;Bug #199 - Innodb autoincrement stats los on restart&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, let’s consider several cases of using AUTO_INCREMENT for &lt;code&gt;table1&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE table1 (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id int NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; PRIMARY KEY (id)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ENGINE = INNODB; -- transactional table
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Insert operations.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT LAST_INSERT_ID() INTO @p1; -- 3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Insert operations within a committed transaction.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;START TRANSACTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 6
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;COMMIT;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT LAST_INSERT_ID() INTO @p3; -- 6
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Insert operations within rolled back transaction.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;START TRANSACTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 7 won't be inserted (hole)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 8 won't be inserted (hole)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL); -- 9 won't be inserted (hole)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ROLLBACK;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT LAST_INSERT_ID() INTO @p2; -- 9
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Insert multiple rows operation.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table1 VALUES (NULL), (NULL), (NULL); -- 10, 11, 12
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT LAST_INSERT_ID() INTO @p4; -- 10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Let’s check which LAST_INSERT_ID() values were at different stages of the script execution:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT @p1, @p2, @p3, @p4;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+------+------+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| @p1 | @p2 | @p3 | @p4 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+------+------+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 3 | 9 | 6 | 10 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+------+------+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- The data selection from the table shows that there are holes in the table in the values of identifiers:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT * FROM table1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 3 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 4 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 5 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 6 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 10 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 11 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 12 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+----+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The next AUTO_INCREMENT value for the table can be parsed from the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/show-create-table.html" target="_blank" rel="noopener noreferrer"&gt;SHOW CREATE TABLE&lt;/a&gt; result or read from the AUTO_INCREMENT field of the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/tables-table.html" target="_blank" rel="noopener noreferrer"&gt;INFORMATION_SCHEMA TABLES&lt;/a&gt; table.&lt;/p&gt;
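&lt;p&gt;For example, a minimal sketch of the INFORMATION_SCHEMA approach (note that on MySQL 8.0 this value may be served from a cache, so setting &lt;code&gt;information_schema_stats_expiry&lt;/code&gt; to 0 first forces it to be current):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SET SESSION information_schema_stats_expiry = 0; -- MySQL 8.0 only
SELECT AUTO_INCREMENT
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = DATABASE()
   AND TABLE_NAME = 'table1';&lt;/code&gt;&lt;/pre&gt;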
&lt;p&gt;A rarer case is a composite primary key consisting of two columns. The &lt;strong&gt;MyISAM engine&lt;/strong&gt; has an interesting solution that can generate values for the second column of such a key. Let’s consider an example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE roomdetails (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; room char(30) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id int NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (room, id)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ENGINE = MYISAM;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO roomdetails VALUES ('ManClothing', NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO roomdetails VALUES ('Fitting', NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO roomdetails VALUES ('ManClothing', NULL);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It is quite a convenient solution:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/select-from-roomdetails-table.png" alt="select from roomdetails table" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="special-values-auto-generation"&gt;Special values auto generation&lt;/h3&gt;
&lt;p&gt;The possibilities of the AUTO_INCREMENT attribute are limited because it can only generate simple integer values. But what about complex identifier values, for example, ones that depend on the date/time or look like [A0001, A0002, B0150…]? To be sure, such values should not be used in primary keys, but they might be used for some auxiliary identifiers.&lt;/p&gt;
&lt;p&gt;The generation of such unique values can be automated, but it will be necessary to write code for such purposes. We can use the &lt;strong&gt;BEFORE INSERT&lt;/strong&gt; trigger to perform the actions we need.&lt;/p&gt;
&lt;p&gt;Let’s consider a simple example. We have the &lt;code&gt;sensors&lt;/code&gt; table for sensor registration. Each sensor in the table has its own name, location, and type: 1 – analog, 2 – discrete, 3 – valve. Moreover, each sensor should be marked with a unique label of the form [symbolic representation of the sensor type + a unique 4-digit number], where the symbolic representation is one of [AN, DS, VL].&lt;/p&gt;
&lt;p&gt;In our case, it is necessary to form values like [DS0001, DS0002…] and insert them into the &lt;code&gt;label&lt;/code&gt; column.&lt;/p&gt;
&lt;p&gt;When the trigger is executed, we need to check whether any sensors of this type already exist in the table. The first sensor of a given type simply gets number “1” when it is added to the table.&lt;/p&gt;
&lt;p&gt;In case such sensors already exist, it is necessary to find the maximum value of the identifier in this group and form a new one by incrementing the value by 1. Naturally, it is necessary to take into account that the label should start with the desired symbol and the number should be 4-digit.&lt;/p&gt;
&lt;p&gt;So, here is the table and the trigger creation script:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE sensors (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id int NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type int NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name varchar(255) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; `position` int DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; label char(6) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (id)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DELIMITER $$
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TRIGGER trigger_sensors
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;BEFORE INSERT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ON sensors
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;FOR EACH ROW
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;BEGIN
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; IF (NEW.label IS NULL) THEN
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -- Find max existed label for specified sensor type
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SELECT
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; MAX(label) INTO @max_label
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; FROM
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sensors
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; WHERE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; type = NEW.type;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; IF (@max_label IS NULL) THEN
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SET @label =
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; CASE NEW.type
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; WHEN 1 THEN 'AN'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; WHEN 2 THEN 'DS'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; WHEN 3 THEN 'VL'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ELSE 'UNKNOWN'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; END;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -- Set first sensor label
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SET NEW.label = CONCAT(@label, '0001');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ELSE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -- Set next sensor label
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; SET NEW.label = CONCAT(SUBSTR(@max_label, 1, 2), LPAD(SUBSTR(@max_label, 3) + 1, 4, '0'));
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; END IF;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; END IF;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;END$$
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DELIMITER;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
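&lt;p&gt;The label arithmetic in the ELSE branch of the trigger can be sketched outside SQL; here is a minimal Python equivalent of the CONCAT/SUBSTR/LPAD expression (assuming the two-letter-prefix, four-digit format described above):&lt;/p&gt;

```python
def next_label(max_label):
    # Mirrors CONCAT(SUBSTR(l, 1, 2), LPAD(SUBSTR(l, 3) + 1, 4, '0')):
    # keep the two-letter prefix, increment the numeric tail, left-pad to 4.
    prefix, number = max_label[:2], int(max_label[2:])
    return prefix + str(number + 1).zfill(4)

print(next_label('DS0001'))  # DS0002
print(next_label('AN0025'))  # AN0026
```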
&lt;p&gt;The code for generating a new identifier can, of course, be more complex. In that case, it is desirable to implement some of the code as a stored procedure/function. Let’s try to add several sensors to the table and look at the result of the label generation:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 1', 10, 'AN0025'); -- Set exact label value 'AN0025'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 2', 11, NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'pressure 1', 15, NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 1', 10, NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 2', 11, NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 1', 20, NULL);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 2', 21, NULL);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/generating-complex-sequences.png" alt="generating complex keys" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="using-uuid"&gt;Using UUID&lt;/h3&gt;
&lt;p&gt;Another kind of identification data is worth mentioning: the Universally Unique Identifier (UUID), also known as GUID. This is a 128-bit number suitable for use in primary keys.&lt;/p&gt;
&lt;p&gt;A UUID value can be represented as a string (CHAR(36)/VARCHAR(36)) or as a binary value (BINARY(16)). Benefits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ability to generate values from the outside, for example from an application.&lt;/li&gt;
&lt;li&gt;UUID values are unique across tables and databases, since the standard assumes uniqueness in space and time.&lt;/li&gt;
&lt;li&gt;There is a specification - &lt;a href="http://www.ietf.org/rfc/rfc4122.txt" target="_blank" rel="noopener noreferrer"&gt;A Universally Unique IDentifier (UUID) URN Namespace&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Disadvantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Possible performance problems.&lt;/li&gt;
&lt;li&gt;Larger data size.&lt;/li&gt;
&lt;li&gt;More complex data analysis (debugging).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To generate this value, the MySQL function &lt;strong&gt;UUID()&lt;/strong&gt; is used. New functions for working with UUID values (UUID_TO_BIN, BIN_TO_UUID, IS_UUID) were added in Oracle MySQL 8.0. Learn more about them on the Oracle MySQL website: &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_uuid" target="_blank" rel="noopener noreferrer"&gt;UUID()&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The code shows the use of UUID values:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE table_uuid (id binary(16) PRIMARY KEY);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;SELECT BIN_TO_UUID(id) FROM table_uuid;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| BIN_TO_UUID(id) |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| d9008d47-cdf4-11e8-8d6f-0242ac11001b |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| d900e2b2-cdf4-11e8-8d6f-0242ac11001b |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| d9015ce9-cdf4-11e8-8d6f-0242ac11001b |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+--------------------------------------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;You may also find the following article useful: &lt;a href="https://www.percona.com/blog/2014/12/19/store-uuid-optimized-way/" target="_blank" rel="noopener noreferrer"&gt;Store UUID in an optimized way&lt;/a&gt;.&lt;/p&gt;
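&lt;p&gt;The byte swap performed by UUID_TO_BIN(uuid, 1) can be illustrated in Python (a sketch of the documented transform, not MySQL’s implementation): for a version-1 UUID, the time-high group is moved to the front, so values generated close in time sort close together in the index:&lt;/p&gt;

```python
import uuid

def uuid_to_bin(u, swap=True):
    # Sketch of UUID_TO_BIN(u, 1): swap the time-low (group 1) and
    # time-high (group 3) parts of the 16-byte value.
    b = uuid.UUID(u).bytes
    return b[6:8] + b[4:6] + b[0:4] + b[8:] if swap else b

def bin_to_uuid(b, swap=True):
    # Inverse transform, mirroring BIN_TO_UUID(b, 1).
    raw = b[4:8] + b[2:4] + b[0:2] + b[8:] if swap else b
    return str(uuid.UUID(bytes=raw))

u = 'd9008d47-cdf4-11e8-8d6f-0242ac11001b'  # value from the listing above
print(bin_to_uuid(uuid_to_bin(u)) == u)  # True
```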
&lt;h3 id="using-sequences"&gt;Using sequences&lt;/h3&gt;
&lt;p&gt;Some databases support the object type called Sequence that allows generating sequences of numbers. The Oracle MySQL server does not support this object type yet but the MariaDB 10.3 server has the &lt;strong&gt;Sequence&lt;/strong&gt; engine that allows working with the &lt;a href="https://mariadb.com/kb/en/library/sequence-overview/" target="_blank" rel="noopener noreferrer"&gt;Sequence&lt;/a&gt; object.&lt;/p&gt;
&lt;p&gt;The Sequence engine provides DDL commands for creating and modifying sequences as well as several auxiliary functions for working with the values. It is possible to specify the following parameters while creating a named sequence: START – a start value, INCREMENT – a step, MINVALUE/MAXVALUE – the minimum and maximum value; CACHE – the size of the cache values; CYCLE/NOCYCLE – the sequence cyclicity. For more information, see the &lt;a href="https://mariadb.com/kb/en/library/create-sequence/" target="_blank" rel="noopener noreferrer"&gt;CREATE SEQUENCE documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Moreover, a sequence can be used to generate unique numeric values. This can be considered an alternative to AUTO_INCREMENT, but a sequence additionally lets you specify a step between values. Let’s look at an example using the &lt;code&gt;users&lt;/code&gt; table. The sequence object &lt;code&gt;users_seq&lt;/code&gt; will be used to fill the values of the primary key. It is enough to specify the &lt;strong&gt;NEXT VALUE FOR&lt;/strong&gt; function in the &lt;strong&gt;DEFAULT&lt;/strong&gt; property of the column:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE SEQUENCE users_seq;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CREATE TABLE users (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; id int NOT NULL DEFAULT (NEXT VALUE FOR users_seq),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; first_name varchar(100) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; last_name varchar(100) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; email varchar(254) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (id)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO users (first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;INSERT INTO users (first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Table content output:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/using-sequences-for-pk.png" alt="using sequences for pk generation" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="information"&gt;Information&lt;/h2&gt;
&lt;p&gt;The images for this article were produced using &lt;a href="https://www.devart.com/dbforge/mysql/studio/" target="_blank" rel="noopener noreferrer"&gt;dbForge Studio for MySQL Express Edition&lt;/a&gt;; a download is available from &lt;a href="https://www.devart.com/dbforge/mysql/studio/download.html" target="_blank" rel="noopener noreferrer"&gt;https://www.devart.com/dbforge/mysql/studio/download.html&lt;/a&gt;&lt;/p&gt;
&lt;h4 id="its-free"&gt;It’s free!&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Thank you to community reviewer &lt;a href="https://jfg-mysql.blogspot.com/" target="_blank" rel="noopener noreferrer"&gt;Jean-François Gagné&lt;/a&gt; for his review and suggestions for this post.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Alexey Mikotkin</author>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>dbForge</category>
      <category>Entry Level</category>
      <category>GUI tools</category>
      <category>MyISAM</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2018/09/generating-complex-sequences_hu_21437ace800b752a.jpg"/>
      <media:content url="https://percona.community/blog/2018/09/generating-complex-sequences_hu_c70b4fee487cd4e0.jpg" medium="image"/>
    </item>
    <item>
      <title>Deploying MySQL on Kubernetes with a Percona-based Operator</title>
      <link>https://percona.community/blog/2018/10/11/deploying-mysql-on-kubernetes-with-a-percona-based-operator/</link>
      <guid>https://percona.community/blog/2018/10/11/deploying-mysql-on-kubernetes-with-a-percona-based-operator/</guid>
      <pubDate>Thu, 11 Oct 2018 17:03:04 UTC</pubDate>
      <description>In the context of providing managed WordPress hosting services, at Presslabs we operate with lots of small to medium-sized databases, in a DB-per-service model, as we call it. The workloads are mostly reads, so we need to efficiently scale that. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.</description>
      <content:encoded>&lt;p&gt;In the context of providing managed WordPress hosting services, at &lt;a href="https://www.presslabs.com/" target="_blank" rel="noopener noreferrer"&gt;Presslabs&lt;/a&gt; we operate with lots of small to medium-sized databases, in a DB-per-service model, as we call it. The workloads are mostly reads, so we need to efficiently scale that. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/kubernetes-mysql-operator.png" alt="Kubernetes MySQL Operator" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As we were already using &lt;a href="https://kubernetes.io/" target="_blank" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, we were looking for an operator that could automate our DB deployments and auto-scaling. Those available were doing synchronous replication using MySQL group replication or Galera-based replication. Therefore, we decided to write our own operator.&lt;/p&gt;
&lt;h2 id="solution-architecture"&gt;Solution architecture&lt;/h2&gt;
&lt;p&gt;The &lt;a href="https://www.presslabs.com/code/mysqloperator/" target="_blank" rel="noopener noreferrer"&gt;MySQL operator&lt;/a&gt;, released under Apache 2.0 license, is based on Percona Server for MySQL for its operational improvements —like utility user and backup locks—and relies on the tried and tested &lt;a href="https://github.com/github/orchestrator" target="_blank" rel="noopener noreferrer"&gt;Orchestrator&lt;/a&gt; to do the automatic failovers. We’ve been using &lt;a href="https://www.percona.com/software/mysql-database/percona-server" target="_blank" rel="noopener noreferrer"&gt;Percona Server&lt;/a&gt; in production for about four years, with very good results, thus encouraging us to continue implementing it in the operator as well.&lt;/p&gt;
&lt;p&gt;The MySQL Operator-Orchestrator integration is highly important for topology, as well as for cluster healing and system failover. Orchestrator is a MySQL high availability and replication management tool that was developed and open-sourced by &lt;a href="https://github.com/" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As we’re writing this, the operator is undergoing a full rewrite to implement the operator using the &lt;a href="https://github.com/kubernetes-sigs/kubebuilder" target="_blank" rel="noopener noreferrer"&gt;Kubebuilder&lt;/a&gt; framework, which is a pretty logical step to simplify and standardize the operator to make it more readable to contributors and users.&lt;/p&gt;
&lt;h2 id="aims-for-the-project"&gt;Aims for the project&lt;/h2&gt;
&lt;p&gt;We’ve built the MySQL operator with several considerations in mind, generated by the needs that no other operator could satisfy at the time we started working on it, last year.&lt;/p&gt;
&lt;p&gt;Here are some of them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Easily deployable MySQL clusters in Kubernetes, following the cluster-per-service model&lt;/li&gt;
&lt;li&gt;DevOps-friendly, critical to basic operations such as monitoring, availability, scalability, and backup stories&lt;/li&gt;
&lt;li&gt;Out-of-the-box backups, scheduled or on-demand, and point-in-time recovery&lt;/li&gt;
&lt;li&gt;Support for cloning, both inside a cluster and across clusters&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The MySQL operator is now in beta and can be tested on production workloads. You can take it for a spin and decide for yourself; we’re already successfully using it for part of our production workloads at &lt;a href="https://www.presslabs.com/" target="_blank" rel="noopener noreferrer"&gt;Presslabs&lt;/a&gt;, for our customer dashboard services.&lt;/p&gt;
&lt;p&gt;Moving on to more practical matters, we’ve successfully installed and tested the operator on AWS, Google Cloud Platform, and Microsoft Azure, and covered the step-by-step process in three tutorials.&lt;/p&gt;
&lt;h2 id="set-up-and-configuration"&gt;Set up and configuration&lt;/h2&gt;
&lt;p&gt;It’s fairly simple to use the operator. Prerequisites would be the ubiquitous &lt;a href="https://helm.sh/" target="_blank" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/reference/kubectl/overview/" target="_blank" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The first step is to install the controller. Two commands should be run, to make use of the Helm chart bundled in the operator:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ helm repo add presslabs https://presslabs.github.io/charts
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ helm install presslabs/mysql-operator --name mysql-operator&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;These commands will deploy the controller together with an Orchestrator cluster. The configuration parameters of the Helm chart for the operator and its default values are as follows:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Default value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;replicaCount&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;replicas for controller&lt;/td&gt;
&lt;td&gt;&lt;code&gt;1&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;image&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;controller container image&lt;/td&gt;
&lt;td&gt;&lt;code&gt;quay.io/presslabs/mysql-operator:v0.1.5&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;imagePullPolicy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;controller image pull policy&lt;/td&gt;
&lt;td&gt;&lt;code&gt;IfNotPresent&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;helperImage&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;mysql helper image&lt;/td&gt;
&lt;td&gt;&lt;code&gt;quay.io/presslabs/mysql-helper:v0.1.5&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;installCRDs&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;whether or not to install CRDs&lt;/td&gt;
&lt;td&gt;&lt;code&gt;true&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;resources&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;controller pod resources&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{}&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nodeSelector&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;controller pod nodeSelector&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{}&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tolerations&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;controller pod tolerations&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{}&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;affinity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;controller pod affinity&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{}&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;extraArgs&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;args that are passed to controller&lt;/td&gt;
&lt;td&gt;&lt;code&gt;[]&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;rbac.create&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;whether or not to create rbac service account, role and roleBinding&lt;/td&gt;
&lt;td&gt;&lt;code&gt;true&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;rbac.serviceAccountName&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;If rbac.create is false then this service account is used&lt;/td&gt;
&lt;td&gt;&lt;code&gt;default&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;orchestrator.replicas&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Control Orchestrator replicas&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;orchestrator.image&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Orchestrator container image&lt;/td&gt;
&lt;td&gt;&lt;code&gt;quay.io/presslabs/orchestrator:latest&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Further Orchestrator values can be tuned by checking the &lt;a href="https://github.com/presslabs/docker-orchestrator/blob/master/charts/orchestrator/values.yaml" target="_blank" rel="noopener noreferrer"&gt;values.yaml&lt;/a&gt; config file.&lt;/p&gt;
&lt;h3 id="cluster-deployment"&gt;Cluster deployment&lt;/h3&gt;
&lt;p&gt;The next step is to deploy a cluster. For this, you need to create a Kubernetes secret that contains MySQL credentials (root password, database name, user name, user password), to initialize the cluster and a custom resource MySQL cluster as you can see below:&lt;/p&gt;
&lt;p&gt;An example of a secret (&lt;a href="https://github.com/presslabs/mysql-operator/blob/master/examples/example-cluster-secret.yaml" target="_blank" rel="noopener noreferrer"&gt;example-cluster-secret.yaml&lt;/a&gt;):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Secret
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: my-secret
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;type: Opaque
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;data:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ROOT_PASSWORD: # root password, base_64 encoded&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
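&lt;p&gt;The values under &lt;code&gt;data&lt;/code&gt; in a Kubernetes Secret must be base64-encoded. Any base64 tool works; for example, in Python (the password here is just a placeholder):&lt;/p&gt;

```python
import base64

# Placeholder value for illustration; use your real root password.
password = "not-so-secure"
encoded = base64.b64encode(password.encode()).decode()
print(encoded)  # bm90LXNvLXNlY3VyZQ==
```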
&lt;p&gt;An example of simple cluster (&lt;a href="https://github.com/presslabs/mysql-operator/blob/master/examples/example-cluster.yaml" target="_blank" rel="noopener noreferrer"&gt;example-cluster.yaml&lt;/a&gt;):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: mysql.presslabs.org/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: MysqlCluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: my-cluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; replicas: 2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; secretName: my-secret&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The usual kubectl commands can be used to do various operations, such as a basic listing:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl get mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;or detailed cluster information:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ kubectl describe mysql my-cluster&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="backups"&gt;Backups&lt;/h3&gt;
&lt;p&gt;A further step could be setting up backups to an object storage service. Creating a backup is as simple as creating a MysqlBackup resource, as in this example (&lt;a href="https://github.com/presslabs/mysql-operator/blob/master/examples/example-backup.yaml" target="_blank" rel="noopener noreferrer"&gt;example-backup.yaml&lt;/a&gt;):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: mysql.presslabs.org/v1alpha1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: MysqlBackup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: my-cluster-backup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; clusterName: my-cluster
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; backupUri: gs://bucket_name/path/to/backup.xtrabackup.gz
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; backupSecretName: my-cluster-backup-secret&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;To provide credentials for a storage service, you have to create a secret that specifies the credentials for your provider; we currently support AWS, GCS, and HTTP, as in this example (&lt;a href="https://github.com/presslabs/mysql-operator/blob/master/examples/example-backup-secret.yaml" target="_blank" rel="noopener noreferrer"&gt;example-backup-secret.yaml&lt;/a&gt;):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiVersion: v1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kind: Secret
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; name: my-cluster-backup-secret
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;type: Opaque
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Data:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # AWS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; AWS_ACCESS_KEY_ID: #add here your key, base_64 encoded
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; AWS_SECRET_KEY: #and your secret, base_64 encoded
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # or Google Cloud base_64 encoded
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # GCS_SERVICE_ACCOUNT_JSON_KEY: #your key, base_64 encoded
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; # GCS_PROJECT_ID: #your ID, base_64 encoded&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Recurring cluster backups and cluster initialization from a backup are additional operations you can opt for. For more details, head to our &lt;a href="https://www.presslabs.com/code/mysqloperator/mysql-operator-backups/" target="_blank" rel="noopener noreferrer"&gt;documentation page&lt;/a&gt;.&lt;/p&gt;
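&lt;p&gt;As a hedged sketch of what a recurring backup can look like on the cluster resource (field names such as &lt;code&gt;backupSchedule&lt;/code&gt; are taken from the operator's examples and may have changed; verify against the documentation before use):&lt;/p&gt;

```yaml
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret
  # illustrative: take a backup every day at 03:00
  backupSchedule: "0 0 3 * * *"
  backupUri: gs://bucket_name/
  backupSecretName: my-cluster-backup-secret
```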
&lt;p&gt;Further operations and new usage information are kept up-to-date on the project homepage.&lt;/p&gt;
&lt;p&gt;Our future plans include developing the MySQL operator further and integrating it with &lt;a href="https://www.percona.com/software/database-tools/percona-monitoring-and-management" target="_blank" rel="noopener noreferrer"&gt;Percona Monitoring and Management&lt;/a&gt; to better expose the internals of the Kubernetes DB cluster.&lt;/p&gt;
&lt;h2 id="open-source-community"&gt;Open source community&lt;/h2&gt;
&lt;p&gt;Community contributions are highly appreciated; we should mention the pull requests from &lt;a href="https://platform9.com" target="_blank" rel="noopener noreferrer"&gt;Platform9&lt;/a&gt; so far, but also the sharp questions on the channel we’ve opened on &lt;a href="https://gitter.im/PressLabs/mysql-operator" target="_blank" rel="noopener noreferrer"&gt;Gitter&lt;/a&gt;, which we do our best to answer in detail, as well as issue reports from early users of the operator.&lt;/p&gt;
&lt;h3 id="come-and-talk-to-us-about-the-project"&gt;Come and talk to us about the project&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/live/e18/registration-information" target="_blank" rel="noopener noreferrer"&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/ple18_logo.png" alt="ple18_logo" /&gt;&lt;/figure&gt;&lt;/a&gt;Along with my colleague Calin Don, I’ll be talking about this at &lt;a href="https://www.percona.com/live/e18/sessions/automating-mysql-deployments-on-kubernetes" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe&lt;/a&gt; in November. It would be great to have the chance to meet other enthusiasts and talk about what we’ve discovered so far!&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Flavius Mecea</author>
      <category>Advanced Level</category>
      <category>auto-scaling</category>
      <category>automated deployments</category>
      <category>Containers</category>
      <category>Deployment</category>
      <category>DevOps</category>
      <category>GitHub</category>
      <category>Kubernetes</category>
      <category>MySQL</category>
      <category>Orchestrator</category>
      <category>Percona Server for MySQL</category>
      <category>Scalability</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/kubernetes-mysql-operator_hu_4287e9c6027468e1.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/kubernetes-mysql-operator_hu_7790fd442eb84810.jpg" medium="image"/>
    </item>
    <item>
      <title>Percona Live Europe Tutorial: Elasticsearch 101</title>
      <link>https://percona.community/blog/2018/10/03/percona-live-tutorial-elasticsearch-101/</link>
      <guid>https://percona.community/blog/2018/10/03/percona-live-tutorial-elasticsearch-101/</guid>
      <pubDate>Wed, 03 Oct 2018 07:17:53 UTC</pubDate>
      <description/>
      <content:encoded>&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/elasticsearch-mark.png" alt="Elasticsearch mark" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For &lt;a href="https://www.percona.com/live/e18/" target="_blank" rel="noopener noreferrer"&gt;Percona Live Europe&lt;/a&gt;, I’ll be presenting the tutorial &lt;a href="https://www.percona.com/live/e18/sessions/elasticsearch-101" target="_blank" rel="noopener noreferrer"&gt;&lt;em&gt;Elasticsearch 101&lt;/em&gt;&lt;/a&gt; alongside my colleagues and fellow presenters from &lt;a href="https://www.objectrocket.com/" target="_blank" rel="noopener noreferrer"&gt;ObjectRocket&lt;/a&gt; &lt;strong&gt;Alex Cercel&lt;/strong&gt;, DBA, and &lt;strong&gt;Mihai Aldoiu&lt;/strong&gt;, Data Engineer. Here’s a brief overview of our tutorial.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.elastic.co/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Elasticsearch®&lt;/strong&gt;&lt;/a&gt; is well known as a highly scalable search engine that stores data in a structure optimized for language based searches but its capabilities and use cases don’t stop there. In this tutorial, we’ll give you a hands-on introduction to Elasticsearch and give you a glimpse at some of the fundamental concepts. We’ll cover various administrative topics like installation and configuration, Cluster/Node management, indexes management and monitoring cluster health. We will also look at developer-oriented topics like mappings and analysis, aggregations and schema design that will help you build a robust application. There will be lab sessions too - bring a laptop!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why’s it exciting?&lt;/strong&gt; Well, although my main focus is on MongoDB, I am a huge fan of polyglot persistence. I started working with Elasticsearch about a year ago to overcome some hard limits in MongoDB, and I must admit I entered a whole new world. Before working with Elasticsearch I was under the misconception that “&lt;em&gt;it’s for full-text search only&lt;/em&gt;”. The truth is that the product offers way more than that. I am looking forward to sharing my experience through this presentation.&lt;/p&gt;
&lt;p&gt;Alex and Mihai are senior Elasticsearch data engineers who’ll share their deep knowledge and expertise with the attendees.&lt;/p&gt;
&lt;h2 id="who-would-get-the-most-from-this-talk"&gt;Who would get the most from this talk?&lt;/h2&gt;
&lt;p&gt;Well, everyone &lt;em&gt;could&lt;/em&gt; benefit, but to be a little more specific, those with the most to gain are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;those who know nothing about Elasticsearch, or who fall under the same misconception as I did :)&lt;/li&gt;
&lt;li&gt;those starting a new project who are considering Elasticsearch as an option&lt;/li&gt;
&lt;li&gt;those already working with Elasticsearch who want to deepen their knowledge of its operations and internals&lt;/li&gt;
&lt;li&gt;those running Elasticsearch in production and facing any type of challenge&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="what-presentations-am-i-most-looking-forward-to"&gt;What presentations am I most looking forward to?&lt;/h2&gt;
&lt;p&gt;At Percona conferences, I wish I was &lt;a href="https://en.wikipedia.org/wiki/Jamie_Madrox" target="_blank" rel="noopener noreferrer"&gt;Jamie Madrox&lt;/a&gt;. I wish I could create “dupes” of myself and attend every presentation. I will try to attend all MongoDB-related talks since that’s my primary focus. However, this year I will also watch out for Postgres-related talks. Postgres has made huge strides since the last time I worked with it; it’s become “hot” again in the database world.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/10/Antonios.jpeg" alt="Antonios Giannopoulos in PL tee" /&gt;&lt;/figure&gt;
&lt;em&gt;Editor: Thanks to Antonios for modelling a past Percona Live tee. Come join in the tutorial and pick up one of your own.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="register-now"&gt;&lt;a href="https://www.percona.com/live/e18/" target="_blank" rel="noopener noreferrer"&gt;Register Now&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Percona Live conferences provide the open source database community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry, so don’t delay: &lt;a href="https://www.percona.com/live/e18/" target="_blank" rel="noopener noreferrer"&gt;Register now!&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <author>Antonios Giannopoulos</author>
      <category>Elasticsearch</category>
      <category>Events</category>
      <category>MariaDB</category>
      <category>MongoDB</category>
      <category>MySQL</category>
      <category>Percona Live Europe 2018</category>
      <category>Tools</category>
      <category>Tutorial</category>
      <media:thumbnail url="https://percona.community/blog/2018/10/MySQL-at-scale_hu_3c5128ac9f54aa12.jpg"/>
      <media:content url="https://percona.community/blog/2018/10/MySQL-at-scale_hu_ec646f6415dc7148.jpg" medium="image"/>
    </item>
    <item>
      <title>Minimize MySQL Deadlocks with 3 Steps</title>
      <link>https://percona.community/blog/2018/09/24/minimize-mysql-deadlocks-3-steps/</link>
      <guid>https://percona.community/blog/2018/09/24/minimize-mysql-deadlocks-3-steps/</guid>
      <pubDate>Mon, 24 Sep 2018 10:49:35 UTC</pubDate>
      <description>MySQL has locking capabilities, for example table and row level locking, and such locks are needed to control data integrity in multi-user concurrency. Deadlocks—where two or more transactions are waiting for one another to give up locks before the transactions can proceed successfully—are an unwanted situation. It is a classic problem for all databases including MySQL/PostgreSQL/Oracle etc. By default, MySQL detects the deadlock condition and to break the deadlock it rolls back one of the transactions.</description>
      <content:encoded>&lt;p&gt;MySQL has locking capabilities, for example table and row level locking, and such locks are needed to control data integrity in multi-user concurrency. Deadlocks—where two or more transactions are waiting for one another to give up locks before the transactions can proceed successfully—are an unwanted situation. It is a classic problem for all databases including MySQL/PostgreSQL/Oracle etc. By default, MySQL detects the deadlock condition and to break the deadlock it rolls back one of the transactions.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/application-deadlock-in-MySQL-transactions.jpg" alt="application deadlock in MySQL transactions" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;For a deadlock example, see &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-deadlock-example.html" target="_blank" rel="noopener noreferrer"&gt;InnoDB deadlocks&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="some-misconceptions"&gt;Some misconceptions&lt;/h2&gt;
&lt;p&gt;There are some misconceptions about deadlocks:&lt;/p&gt;
&lt;p&gt;a) &lt;strong&gt;Transaction isolation levels are responsible for deadlocks&lt;/strong&gt;. The possibility of deadlocks is not affected by isolation level: isolation level changes the behavior of read operations, while deadlocks occur due to write operations. However, a lower isolation level sets fewer locks, so it can help you avoid certain lock types (e.g. gap locking).&lt;/p&gt;
&lt;p&gt;b) &lt;strong&gt;Small transactions are not affected by deadlocks.&lt;/strong&gt; Small transactions are less prone to deadlocks, but deadlocks can still happen if transactions do not use the same order of operations.&lt;/p&gt;
&lt;p&gt;c) &lt;strong&gt;Deadlocks are dangerous.&lt;/strong&gt; I still hear from some customers who are using MyISAM tables that their reason for not switching to InnoDB is the deadlock problem. Deadlocks aren’t dangerous if you retry the transaction that failed due to deadlock and follow the steps given below in this article.&lt;/p&gt;
&lt;p&gt;I hope that this article will help clear such misconceptions.&lt;/p&gt;
&lt;p&gt;Back to the topic of this article. There are many possibilities that can cause deadlocks to occur and, for simplicity, I have grouped my recommendations into 3 steps.&lt;/p&gt;
&lt;h2 id="1-use-a-lock-avoiding-design-strategy"&gt;1. Use a lock-avoiding design strategy&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Break big transactions into smaller transactions: keeping transactions short makes them less prone to collision.&lt;/li&gt;
&lt;li&gt;If you use INSERT INTO … SELECT to copy some or all rows from one table to another, consider using a lesser locking transaction isolation level (e.g. READ_COMMITTED) and set the binary log format to row/mixed for that transaction. Alternatively, design your application to put a single INSERT statement in a loop and copy row(s) into the table.&lt;/li&gt;
&lt;li&gt;If your application performs locking reads, for example SELECT … FOR UPDATE or SELECT … FOR SHARE, consider using the NOWAIT and SKIP LOCKED options available in MySQL 8.0; see &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html#innodb-locking-reads-nowait-skip-locked" target="_blank" rel="noopener noreferrer"&gt;Locking Read Concurrency with NOWAIT and SKIP LOCKED&lt;/a&gt;. Alternatively, you may consider using a lesser locking transaction isolation level (described earlier).&lt;/li&gt;
&lt;li&gt;Multiple transactions updating data set in one or more tables, should use the same order of operation for their transactions. Avoid locking table A, B, C in one transaction and C,A,B in another.&lt;/li&gt;
&lt;li&gt;If you have the application retry when a transaction fails due to deadlock, you should ideally have the application take a brief pause before resubmitting its query/transaction. This gives the other transaction involved in the deadlock a chance to complete and release the locks that formed part of the deadlock cycle.&lt;/li&gt;
&lt;/ul&gt;
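&lt;p&gt;The last point above, retrying with a brief pause, can be sketched generically in Python; &lt;code&gt;DeadlockError&lt;/code&gt; here is a hypothetical stand-in for your driver’s deadlock exception (MySQL error 1213):&lt;/p&gt;

```python
import time

class DeadlockError(Exception):
    """Hypothetical stand-in for your driver's deadlock error (MySQL 1213)."""

def run_with_retry(txn, retries=3, pause=0.1):
    """Re-run a transaction when it deadlocks, pausing briefly so the
    other transaction involved can complete and release its locks."""
    for attempt in range(retries):
        try:
            return txn()
        except DeadlockError:
            if attempt == retries - 1:
                raise          # give up after the last attempt
            time.sleep(pause)  # brief pause before resubmitting

# Demo: a transaction that deadlocks twice, then commits.
calls = []
def txn():
    calls.append(1)
    if len(calls) >= 3:
        return "committed"
    raise DeadlockError()

print(run_with_retry(txn))  # committed
```

&lt;p&gt;The demo transaction fails twice and succeeds on the third attempt; in a real application &lt;code&gt;txn()&lt;/code&gt; would open, run, and commit the actual database transaction.&lt;/p&gt;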
&lt;h2 id="2-optimize-queries"&gt;2. Optimize queries&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Well-optimized queries examine fewer rows and, as a result, set fewer locks.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="3-disable-deadlock-detection-for-systems-running-mysql-8"&gt;3. Disable deadlock detection (for systems running MySQL 8+)&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;If you’re running a high concurrency system, it may be more efficient to disable deadlock detection and rely on the &lt;a href="https://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout" target="_blank" rel="noopener noreferrer"&gt;innodb_lock_wait_timeout&lt;/a&gt; setting. However, keep this setting low: the default timeout of 50 seconds is too long if you’re running without deadlock detection. Be careful when disabling deadlock detection, as it may do more harm than good.&lt;/li&gt;
&lt;/ul&gt;
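&lt;p&gt;As a sketch of the corresponding my.cnf settings (values are illustrative; &lt;code&gt;innodb_deadlock_detect&lt;/code&gt; is the variable that controls detection, available since MySQL 5.7.15):&lt;/p&gt;

```ini
[mysqld]
# Skip InnoDB deadlock detection on high-concurrency workloads
innodb_deadlock_detect = OFF
# With detection off, keep the lock wait timeout short;
# the 50-second default is far too long (illustrative value)
innodb_lock_wait_timeout = 5
```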
&lt;p&gt;&lt;em&gt;The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, &lt;strong&gt;test&lt;/strong&gt; ideas before applying them to your production systems, and &lt;strong&gt;always&lt;/strong&gt; secure a working backup.&lt;/em&gt;&lt;/p&gt;</content:encoded>
      <author>Aftab Khan</author>
      <category>Dev</category>
      <category>Deadlock</category>
      <category>Entry Level</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2018/09/application-deadlock-in-MySQL-transactions_hu_ed9ef2c46ac511fb.jpg"/>
      <media:content url="https://percona.community/blog/2018/09/application-deadlock-in-MySQL-transactions_hu_bcc897efd2d2043a.jpg" medium="image"/>
    </item>
    <item>
      <title>Multi-master with MariaDB 10 - a tutorial</title>
      <link>https://percona.community/blog/2018/09/10/multi-master-with-mariadb-10-tutorial/</link>
      <guid>https://percona.community/blog/2018/09/10/multi-master-with-mariadb-10-tutorial/</guid>
      <pubDate>Mon, 10 Sep 2018 13:57:46 UTC</pubDate>
      <description>The goal of this tutorial is to show you how to use multi-master to aggregate databases with the same name, but different data from different masters, on the same slave.</description>
      <content:encoded>&lt;p&gt;The goal of this tutorial is to show you how to use multi-master to aggregate databases with the same name, but different data from different masters, on the same slave.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;master1&lt;/strong&gt; =&gt; a French subsidiary&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;master2&lt;/strong&gt; =&gt; a British subsidiary&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both have the same database, PRODUCTION, but the data are totally different.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/pmacli-schema-diagram.jpg" alt="PmaControl schema topology" /&gt;&lt;/figure&gt;
&lt;em&gt;This screenshot is taken from my own monitoring tool, PmaControl. Note that master2 should read 10.10.16.232, not 10.10.16.235. The fault of my sysadmin! :p&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;We will start with three servers—2 masters and 1 slave—you can add more masters if needed. For this tutorial, I used Ubuntu 12.04. I’ll let you choose the right procedure for your distribution from &lt;a href="https://downloads.mariadb.org/mariadb/" target="_blank" rel="noopener noreferrer"&gt;Downloads&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="scenario"&gt;Scenario&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;10.10.16.231 : first master (referred to subsequently as master1) =&gt; a French subsidiary&lt;/li&gt;
&lt;li&gt;10.10.16.232 : second master (referred to subsequently as master2) =&gt; a British subsidiary&lt;/li&gt;
&lt;li&gt;10.10.16.233 : slave (multi-master) (referred to subsequently as slave)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you already have your three servers correctly installed, you can scroll down directly to “&lt;em&gt;Dump your master1 and master2 databases from slave&lt;/em&gt;”.&lt;/p&gt;
&lt;h3 id="default-installation-on-3-servers"&gt;Default installation on 3 servers&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apt-get -y install python-software-properties
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;```The main reason I put it in a different file because we use [Chef](https://en.wikipedia.org/wiki/Chef_(software)) as the configuration manager and this overwrites /etc/apt/sources.list . The other reason is that if any trouble occurs, you can just remove this file and restart with the default configuration.```
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;echo "deb http://mirror.stshosting.co.uk/mariadb/repo/10.0/ubuntu precise main" &gt; /etc/apt/sources.list.d/mariadb.list&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apt-get update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apt-get install mariadb-server&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The goal of this small script is to get the server’s IP address and compute a CRC32 of it to generate a unique server-id. Generally no crc32 command is installed, so we will use the one from MySQL. For the account and password, we use the Debian/Ubuntu maintenance account from /etc/mysql/debian.cnf.&lt;/p&gt;
&lt;p&gt;Even if your server has multiple interfaces, you should have no trouble, because each IP address should be unique.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;user=`egrep user /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d 'n'`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;passwd=`egrep password /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d 'n'`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ip=`ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}' | head -n1 | tr -d 'n'`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;crc32=`mysql -u $user -p$passwd -e "SELECT CRC32('$ip')"`
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;id_server=`echo -n $crc32 | cut -d ' ' -f 2 | tr -d 'n'`&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
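&lt;p&gt;If you want to preview the server-id a host will get, zlib’s CRC32 uses the same standard CRC-32 polynomial as MySQL’s &lt;code&gt;CRC32()&lt;/code&gt;, so a quick Python sketch (using the scenario’s IPs) should give matching values:&lt;/p&gt;

```python
import zlib

# Derive a 32-bit server-id from an IP string, mirroring the shell
# script's SELECT CRC32('$ip') call (zlib implements the same CRC-32).
def server_id(ip):
    return zlib.crc32(ip.encode())

for ip in ("10.10.16.231", "10.10.16.232", "10.10.16.233"):
    print(ip, server_id(ip))
```

&lt;p&gt;Each of the three hosts ends up with a distinct 32-bit identifier.&lt;/p&gt;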
&lt;p&gt;This configuration file is not one I use in production, but a minimal version that’s shown just as an example. The config may work fine for me, but perhaps it won’t be the same for you, and it might just crash your MySQL server.&lt;/p&gt;
&lt;p&gt;If you’re interested in my default install of MariaDB 10, you can see it here: &lt;a href="https://raw.githubusercontent.com/Esysteme/Debian/master/mariadb.sh" target="_blank" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/Esysteme/Debian/master/mariadb.sh&lt;/a&gt; (this script has been kept updated for the last four years).&lt;/p&gt;
&lt;p&gt;example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;./mariadb.sh -p 'secret_password' -v 10.3 -d /src/mysql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cat &gt;&gt; /etc/mysql/conf.d/mariadb10.cnf &lt;&lt; EOF
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[client]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# default-character-set = utf8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysqld]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;character-set-client-handshake = FALSE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;character-set-server = utf8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;collation-server = utf8_general_ci
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bind-address = 0.0.0.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;external-locking = off
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;skip-name-resolve
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# make a CRC32 of the server IP
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;server-id=$id_server
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# prevent automatic start of the slave threads
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;skip-slave-start
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[mysql]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;default-character-set = utf8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EOF&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
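The config above writes `server-id=$id_server` through an unquoted heredoc, so the shell expands `$id_server` when the `cat` command runs; the variable has to be set beforehand. Here is a minimal sketch of one way to derive it, following the "crc32 of ip server" comment. The variable names and the use of POSIX `cksum` (which computes a CRC) are my assumptions, not from the original post:

```shell
# Hypothetical helper: derive a numeric server-id from this host's IP,
# as the "make a crc32 of ip server" comment suggests.
server_ip="10.10.16.231"   # substitute the real IP of this server
# POSIX cksum prints "CRC BYTECOUNT"; keep only the CRC value.
id_server=$(printf '%s' "$server_ip" | cksum | cut -d' ' -f1)
echo "server-id=$id_server"
```

Run this (with the right IP) before the heredoc block, and each server gets a distinct, stable server-id without manual bookkeeping.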
&lt;p&gt;We restart the server:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/etc/init.d/mysql restart&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; * Stopping MariaDB database server mysqld [ OK ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; * Starting MariaDB database server mysqld [ OK ]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; * Checking for corrupt, not cleanly closed and upgrade needing tables.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Repeat these actions on all three servers.&lt;/p&gt;
&lt;h2 id="create-users-on-both-masters"&gt;Create users on both masters&lt;/h2&gt;
&lt;h3 id="create-the-replication-user-on-both-masters"&gt;Create the replication user on both masters&lt;/h3&gt;
&lt;p&gt;On &lt;strong&gt;master1&lt;/strong&gt; (10.10.16.231):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd';"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;On &lt;strong&gt;master2&lt;/strong&gt; (10.10.16.232):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd';"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="create-a-user-for-external-backup"&gt;Create a user for external backup&lt;/h3&gt;
&lt;p&gt;On both master1 and master2:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -u root -p -e "GRANT SELECT, LOCK TABLES, RELOAD, REPLICATION CLIENT, SUPER ON *.* TO 'backup'@'10.10.16.%' IDENTIFIED BY 'passwd' WITH GRANT OPTION;"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="if-you-are-just-testing"&gt;If you are just testing…&lt;/h2&gt;
&lt;p&gt;If you don’t have such a configuration and you want to set up tests:&lt;/p&gt;
&lt;h3 id="create-a-database-on-master1-101016231"&gt;Create a database on master1 (10.10.16.231)&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [(NONE)]&gt; CREATE DATABASE PRODUCTION;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="create-a-database-on-master2-101016232"&gt;Create a database on master2 (10.10.16.232)&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-11" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-11"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master2 [(NONE)]&gt; CREATE DATABASE PRODUCTION;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="dump-your-master1-and-master2-databases-from-slave-101016233"&gt;Dump your master1 and master2 databases from slave (10.10.16.233)&lt;/h2&gt;
&lt;p&gt;All the commands from here to the end are carried out on the &lt;strong&gt;slave&lt;/strong&gt; server.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;--master-data=2 records the current binary log file name and position, writing them at the beginning of the dump as a comment&lt;/li&gt;
&lt;li&gt;--single-transaction issues a BEGIN SQL statement before dumping data from the server, giving a consistent snapshot (this works only for tables using the InnoDB storage engine)&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-12" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-12"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysqldump -h 10.10.16.231 -u root -p --master-data=2 --single-transaction PRODUCTION &gt; PRODUCTION_10.10.16.231.sql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysqldump -h 10.10.16.232 -u root -p --master-data=2 --single-transaction PRODUCTION &gt; PRODUCTION_10.10.16.232.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Create both new databases:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-13" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-13"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave[(NONE)]&gt; CREATE DATABASE PRODUCTION_FR;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave[(NONE)]&gt; CREATE DATABASE PRODUCTION_UK;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Load the data:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-14" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-14"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -h 10.10.16.233 -u root -p PRODUCTION_FR &lt; PRODUCTION_10.10.16.231.sql
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql -h 10.10.16.233 -u root -p PRODUCTION_UK &lt; PRODUCTION_10.10.16.232.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="set-up-both-replications-on-the-slave"&gt;Set up both replications on the slave&lt;/h2&gt;
&lt;p&gt;Open each dump to find the binary log file name and position, then substitute them in the commands below (for huge files, use the command “less” rather than an editor):&lt;/p&gt;
&lt;h3 id="french-subsidiary--master1"&gt;French subsidiary – master1&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-15" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-15"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;less PRODUCTION_10.10.16.231.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
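For large dumps you don’t need to page through the file at all: mysqldump with --master-data=2 writes the coordinates as a comment near the top of the file. A hedged one-liner, shown here against a stand-in file so the example is self-contained; in practice point it at PRODUCTION_10.10.16.231.sql:

```shell
# Sketch: pull the binlog coordinates straight from the dump header.
# The stand-in file mimics the comment mysqldump --master-data=2 writes.
dump=/tmp/PRODUCTION_header_demo.sql
printf '%s\n' "-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;" > "$dump"
# In a real dump the coordinates appear within the first few dozen lines:
head -n 40 "$dump" | grep -m1 'CHANGE MASTER'
```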
&lt;p&gt;Find this line (the MASTER_LOG_FILE and MASTER_LOG_POS values will differ from this example):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-16" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-16"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace the file and position in this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-17" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-17"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CHANGE MASTER 'PRODUCTION_FR' TO MASTER_HOST = "10.10.16.231", MASTER_USER = "replication", MASTER_PASSWORD ="passwd", MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="english-subsidiary--master2"&gt;English subsidiary – master2&lt;/h3&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-18" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-18"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;less PRODUCTION_10.10.16.232.sql&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Find this line (the MASTER_LOG_FILE and MASTER_LOG_POS values will differ from this example, and would normally differ between master1 and master2; they only match here because of my test setup):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-19" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-19"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Replace the file and position in this command:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-20" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-20"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CHANGE MASTER 'PRODUCTION_UK' TO MASTER_HOST = "10.10.16.232", MASTER_USER = "replication", MASTER_PASSWORD ="passwd", MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="rules-of-replication-on-config-file"&gt;Rules of replication on config file&lt;/h3&gt;
&lt;p&gt;Unfortunately, replicate-rewrite-db does not exist as a dynamic server variable, so we cannot set up this kind of configuration without restarting the slave server. On the slave, edit the configuration file&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-21" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-21"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/etc/mysql/my.cnf&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;and add these lines:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-22" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-22"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRODUCTION_FR.replicate-rewrite-db="PRODUCTION-&gt;PRODUCTION_FR"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRODUCTION_UK.replicate-rewrite-db="PRODUCTION-&gt;PRODUCTION_UK"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRODUCTION_FR.replicate-do-db="PRODUCTION_FR"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRODUCTION_UK.replicate-do-db="PRODUCTION_UK"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After that, you can restart the daemon without a problem. Don’t forget to start the slaves afterwards, since we disabled their automatic start with skip-slave-start at the beginning ;).&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-23" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-23"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;/etc/init.d/mysql restart&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Start the replication:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;one by one:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-24" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-24"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;START SLAVE 'PRODUCTION_FR';
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;START SLAVE 'PRODUCTION_UK';&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;all at the same time:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-25" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-25"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;START ALL SLAVES;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Now check the replication:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-26" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-26"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave[(NONE)]&gt;SHOW SLAVE 'PRODUCTION_UK' STATUS;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave[(NONE)]&gt;SHOW SLAVE 'PRODUCTION_FR' STATUS;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave[(NONE)]&gt;SHOW ALL SLAVES STATUS;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="tests"&gt;Tests&lt;/h2&gt;
&lt;p&gt;on &lt;strong&gt;slave&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-27" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-27"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [(NONE)]&gt; USE PRODUCTION_FR;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_FR]&gt; SHOW TABLES;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Empty SET (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [(NONE)]&gt; USE PRODUCTION_UK;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_UK]&gt; SHOW TABLES;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Empty SET (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;on &lt;strong&gt;master1&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-28" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-28"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [(NONE)]&gt; USE PRODUCTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [PRODUCTION]&gt;CREATE TABLE `france` (id INT);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 ROWS affected (0.13 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [PRODUCTION]&gt; INSERT INTO `france` SET id=1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 1 ROW affected (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;on &lt;strong&gt;master2&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-29" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-29"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master2 [(NONE)]&gt; USE PRODUCTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master2 [PRODUCTION]&gt;CREATE TABLE `british` (id INT);
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 ROWS affected (0.13 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master2 [PRODUCTION]&gt; INSERT INTO `british` SET id=2;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 1 ROW affected (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;on &lt;strong&gt;slave&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-30" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-30"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- for FRANCE
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [(NONE)]&gt; USE PRODUCTION_FR;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_FR]&gt; SHOW TABLES;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Tables_in_PRODUCTION_FR |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| france |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_FR]&gt; SELECT * FROM france;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- for British
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [(NONE)]&gt; USE PRODUCTION_UK;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_UK]&gt; SHOW TABLES;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Tables_in_PRODUCTION_UK |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| british |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_UK]&gt; SELECT * FROM british;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 2 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;It works!&lt;/p&gt;
&lt;p&gt;If you want to be able to do this online, please add +1 to: &lt;a href="https://jira.mariadb.org/browse/MDEV-17165" target="_blank" rel="noopener noreferrer"&gt;https://jira.mariadb.org/browse/MDEV-17165&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="limitations"&gt;Limitations&lt;/h2&gt;
&lt;h4 id="warning-it-doesnt-work-with-the-database-specified-in-query-with-binlog_format--statement-or-mixed"&gt;&lt;strong&gt;WARNING&lt;/strong&gt;: it doesn’t work with the database specified in query. (With Binlog_format = STATEMENT or MIXED)&lt;/h4&gt;
&lt;p&gt;This works fine:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-31" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-31"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;USE PRODUCTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE `ma_table` SET id=1 WHERE id =2;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;But this query will break replication:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-32" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-32"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;USE PRODUCTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;UPDATE `PRODUCTION`.`ma_table` SET id=1 WHERE id =2;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;=&gt; the database &lt;code&gt;PRODUCTION&lt;/code&gt; does not exist on this server.&lt;/p&gt;
&lt;h3 id="a-real-example"&gt;A real example&lt;/h3&gt;
&lt;h4 id="missing-update"&gt;Missing update&lt;/h4&gt;
&lt;p&gt;on &lt;strong&gt;master1:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-33" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-33"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [(NONE)]&gt;UPDATE `PRODUCTION`.`france` SET id=3 WHERE id =1;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 1 ROW affected (0.02 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ROWS matched: 1 Changed: 1 Warnings: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [(NONE)]&gt; SELECT * FROM `PRODUCTION`.`france`;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 3 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;on &lt;strong&gt;slave:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-34" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-34"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_FR]&gt; SELECT * FROM france;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 1 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this case we missed the update. This is a real problem: replication did not crash, so our slave is silently desynchronized from master1 and we didn’t realize it.&lt;/p&gt;
&lt;h4 id="crash-replication"&gt;Crash replication&lt;/h4&gt;
&lt;p&gt;on &lt;strong&gt;master1&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-35" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-35"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1[(NONE)]&gt; USE PRODUCTION;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DATABASE changed
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [PRODUCTION]&gt; SELECT * FROM `PRODUCTION`.`france`;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 3 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [PRODUCTION]&gt;UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 1 ROW affected (0.01 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ROWS matched: 1 Changed: 1 Warnings: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;master1 [PRODUCTION]&gt; SELECT * FROM `PRODUCTION`.`france`;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| id |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| 4 |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;on PmaControl:&lt;/em&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/09/pmacli-schema-diagram-1.jpg" alt="pmacli schema diagram showing error" /&gt;&lt;/figure&gt; on &lt;strong&gt;slave:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-36" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-36"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;slave [PRODUCTION_FR]&gt; SHOW slave 'PRODUCTION_FR' STATUSG;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. ROW ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Slave_IO_State: Waiting FOR master TO send event
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_Host: 10.10.16.231
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_User: replication
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_Port: 3306
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Connect_Retry: 60
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_Log_File: mariadb-bin.000010
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Read_Master_Log_Pos: 2737
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Relay_Log_File: mysqld-relay-bin-production_fr.000003
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Relay_Log_Pos: 2320
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Relay_Master_Log_File: mariadb-bin.000010
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Slave_IO_Running: Yes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Slave_SQL_Running: No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Do_DB: PRODUCTION_FR
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Ignore_DB:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Do_Table:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Ignore_Table:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Wild_Do_Table:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Wild_Ignore_Table:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Last_Errno: 1146
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Last_Error: Error 'Table 'PRODUCTION.france' doesn't exist' on query. Default database: 'PRODUCTION_FR'. Query: 'UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Skip_Counter: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Exec_Master_Log_Pos: 2554
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Relay_Log_Space: 2815
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Until_Condition: None
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Until_Log_File:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Until_Log_Pos: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_Allowed: No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_CA_File:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_CA_Path:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_Cert:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_Cipher:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_Key:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Seconds_Behind_Master: NULL
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Master_SSL_Verify_Server_Cert: No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Last_IO_Errno: 0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Last_IO_Error:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Last_SQL_Errno: 1146
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Last_SQL_Error: Error 'TABLE 'PRODUCTION.france' doesn't exist' ON query. DEFAULT DATABASE: 'PRODUCTION_FR'. Query: 'UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Replicate_Ignore_Server_Ids:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_Server_Id: 2370966657
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_Crl:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Master_SSL_Crlpath:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Using_Gtid: No
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Gtid_IO_Pos:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 ROW IN SET (0.00 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR: No query specified&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And we get the error that crashes replication: Error 'Table 'PRODUCTION.france' doesn't exist' on query. Default database: 'PRODUCTION_FR'. Query: 'UPDATE &lt;code&gt;PRODUCTION&lt;/code&gt;.&lt;code&gt;france&lt;/code&gt; SET id=4 WHERE id =3'&lt;/p&gt;
&lt;p&gt;NB: everything works fine with &lt;code&gt;binlog_format=ROW&lt;/code&gt;.&lt;/p&gt;
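&lt;p&gt;If switching formats is an option in your environment, the workaround is a one-line change (illustrative only; evaluate the impact of row-based logging on your own workload before changing it in production):&lt;/p&gt;

```sql
-- Affects new sessions; add binlog_format=ROW to my.cnf to persist across restarts
SET GLOBAL binlog_format = 'ROW';
```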
&lt;p&gt;&lt;strong&gt;Author:&lt;/strong&gt; Aurélien LEQUOY &lt;aurelien.lequoy＠esysteme.com&gt;. Don’t copy/paste the email as-is; it won’t work. You didn’t think I would post it like that in the open for all the bots, right? ;)&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;This article is published under: The GNU General Public License v3.0 &lt;a href="http://opensource.org/licenses/GPL-3.0" target="_blank" rel="noopener noreferrer"&gt;http://opensource.org/licenses/GPL-3.0&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="others"&gt;Others&lt;/h2&gt;
&lt;p&gt;The point of interest is to describe a real use case with full technical information so that you can reproduce it yourself. This article was originally published just after the release of MariaDB 10.0 on the now defunct website &lt;a href="https://www.mysqlplus.net" target="_blank" rel="noopener noreferrer"&gt;www.mysqlplus.net&lt;/a&gt;.&lt;/p&gt;
      <author>Aurélien LEQUOY</author>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Replication</category>
      <media:thumbnail url="https://percona.community/blog/2018/09/pmacli-schema-diagram_hu_2e2b17a4625d1e6f.jpg"/>
      <media:content url="https://percona.community/blog/2018/09/pmacli-schema-diagram_hu_5b37391faddafa9e.jpg" medium="image"/>
    </item>
    <item>
      <title>Question about Semi-Synchronous Replication: the Answer with All the Details</title>
      <link>https://percona.community/blog/2018/08/23/question-about-semi-synchronous-replication-answer-with-all-the-details/</link>
      <guid>https://percona.community/blog/2018/08/23/question-about-semi-synchronous-replication-answer-with-all-the-details/</guid>
      <pubDate>Thu, 23 Aug 2018 12:49:59 UTC</pubDate>
      <description>I was recently asked a question by mail about MySQL Lossless Semi-Synchronous Replication. As I think the answer could benefit many people, I am answering it in a blog post. The answer brings us to the internals of transaction committing, of semi-synchronous replication, of MySQL (server) crash recovery, and of storage engine (InnoDB) crash recovery. I am also debunking some misconceptions that I have often seen and heard repeated by many. Let’s start by stating one of those misconceptions.</description>
      <content:encoded>&lt;p&gt;I was recently asked a question by mail about &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/replication-semisync.html" target="_blank" rel="noopener noreferrer"&gt;MySQL Lossless Semi-Synchronous Replication&lt;/a&gt;. As I think the answer could benefit many people, I am answering it in a blog post. The answer brings us to the internals of transaction committing, of semi-synchronous replication, of MySQL (server) crash recovery, and of storage engine (InnoDB) crash recovery. I am also debunking some misconceptions that I have often seen and heard repeated by many. Let’s start by stating one of those misconceptions.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/semi-sync-replication-MySQL.jpg" alt="semi-sync replication MySQL" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;One of those misconceptions is the following (this is NOT true): semi-synchronous enabled slaves are always the most up-to-date slaves (again, this is &lt;strong&gt;NOT&lt;/strong&gt; true). If you hear it yourself, then please call people out on it to avoid this spreading more. Even if some slaves have semi-synchronous replication disabled (I will use semi-sync for short in the rest of this post), these could still be the most up-to-date slaves after a master crash. I guess this false idea is coming from the name of the feature, not much can be done about this anymore (naming is hard). The details are in the rest of this post.&lt;/p&gt;
&lt;p&gt;Back to the question I received by mail, it can be summarized as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In a deployment where a MySQL 5.7 master crashes (kill -9 or echo c &gt; /proc/sysrq-trigger), a slave is promoted as the new master;&lt;/li&gt;
&lt;li&gt;when the old master is brought back up, transactions that are not on the new master are observed on this old master;&lt;/li&gt;
&lt;li&gt;is this normal in a lossless semi-sync environment?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The answer to that question is yes: it is normal to have transactions on the recovered old master that are not on the new master. This is not a violation of the semi-sync promise. To understand this, we need to go in detail about semi-sync (MySQL 5.5 and 5.6) and lossless semi-sync (MySQL 5.7).&lt;/p&gt;
&lt;h2 id="semi-sync-and-lossless-semi-sync"&gt;Semi-Sync and Lossless Semi-Sync&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html" target="_blank" rel="noopener noreferrer"&gt;Semi-sync replication&lt;/a&gt; was introduced in MySQL 5.5. Its promise is that every transaction where the client has received a COMMIT acknowledgment would be replicated to a slave. It had a caveat though: while a client is waiting for this COMMIT acknowledgment, other clients could see the data of the committing transaction. If the master crashes at this moment (without a slave having received the transaction), it is a violation of transaction isolation. This is also known as phantom read: data observed by a client has disappeared. This is not very satisfactory.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/replication-semisync.html" target="_blank" rel="noopener noreferrer"&gt;Lossless semi-sync replication&lt;/a&gt; was introduced in MySQL 5.7 to solve this problem. With lossless semi-sync, we keep the promise of semi-sync (every transaction where clients have received a COMMIT acknowledgment is replicated), with the additional promise that there is no phantom reads. To understand how this works, we need to dive into the way MySQL commits transactions.&lt;/p&gt;
&lt;h2 id="the-way-mysql-commits-transactions"&gt;The Way MySQL Commits Transactions&lt;/h2&gt;
&lt;p&gt;When MySQL commits a transaction, it is going through the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Prepare&lt;/em&gt; the transaction in the storage engine (InnoDB),&lt;/li&gt;
&lt;li&gt;Write the transaction to the binary logs,&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Complete&lt;/em&gt; the transaction in the storage engine,&lt;/li&gt;
&lt;li&gt;Return an acknowledgment to the client.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The implementations of semi-sync and lossless semi-sync insert themselves into the above process.&lt;/p&gt;
&lt;p&gt;Semi-sync in MySQL 5.5 and 5.6 happens between step #3 and #4. After “completing” the transaction in the storage engine, a semi-sync master waits for one slave to confirm the replication of the transaction. As this happens after the storage engine has “completed” the transaction, other clients can see this transaction. &lt;strong&gt;This is the cause of phantom reads.&lt;/strong&gt; Also — unrelated to phantom reads — if the master crashes at that moment and after bringing it back up, this transaction will be in the database as it has been fully “completed” in the storage engine.&lt;/p&gt;
&lt;p&gt;It is important to realize that for semi-sync (and lossless-semi-sync), transactions are written to the binary logs in the same way as in standard (non-semi-sync) replication. In other words, standard and semi-sync replication behave exactly the same way up to and including step #2. Also, once transactions are in the binary logs, they are visible to all slaves, not only to the semi-sync slaves. So a non-semi-sync slave could receive a transaction before the semi-sync slaves. This is why it is false to assume that the semi-sync slaves are the most up-to-date slaves after a master crash.&lt;/p&gt;
&lt;h4 id="it-is-false-to-assume-that-the-semi-sync-slaves-are-the-most-up-to-date-slaves-after-a-master-crash"&gt;It is false to assume that the semi-sync slaves are the most up-to-date slaves after a master crash.&lt;/h4&gt;
&lt;p&gt;In lossless semi-sync, waiting for transaction replication happens between steps #2 and #3. At this point, the transaction is not “completed” in the storage engine, so other clients do not see its data yet. But even if this transaction is not “completed”, a master crash at that moment and a subsequent restart would cause this transaction to be in the database. To understand why, we need to dive into MySQL and InnoDB crash recovery.&lt;/p&gt;
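&lt;p&gt;As a rough illustration of the two wait points described above, here is a toy Python model (the names and structure are invented for this sketch; this is not server code):&lt;/p&gt;

```python
# Toy model of the four commit steps and the two semi-sync wait points.
# "after_commit" models MySQL 5.5/5.6 semi-sync (wait between steps #3 and #4);
# "after_sync" models MySQL 5.7 lossless semi-sync (wait between steps #2 and #3).

def commit(txn, log, wait_point):
    log.append(("prepare", txn))              # step 1: prepare in the storage engine
    log.append(("binlog", txn))               # step 2: write to the binary logs
    if wait_point == "after_sync":
        log.append(("wait_slave_ack", txn))   # lossless semi-sync waits here
    log.append(("complete", txn))             # step 3: transaction becomes visible
    if wait_point == "after_commit":
        log.append(("wait_slave_ack", txn))   # classic semi-sync waits here
    log.append(("ack_client", txn))           # step 4: acknowledge COMMIT to the client
    return log
```

&lt;p&gt;With the after_commit ordering, the transaction is already visible (step #3) while the committing client is still waiting for the slave acknowledgment: that window is exactly where the phantom reads described above can happen.&lt;/p&gt;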
&lt;h2 id="mysql-and-innodb-crash-recovery"&gt;MySQL and InnoDB Crash Recovery&lt;/h2&gt;
&lt;p&gt;During InnoDB crash recovery, transactions that are not “completed” (have not reached step #3 of transaction committing) are rolled back. So a transaction that is not yet committed (has not reached step #1) or a transaction that is not yet written to the binary logs (has not reached step #2) will not be in the database after InnoDB crash recovery. However, if InnoDB rolled back a transaction that has reached the binary logs (step #2) but that is not “completed” (step #3), this would mean a transaction that could have reached a slave would disappear from the master. This would create data inconsistency in replication and would be bad.&lt;/p&gt;
&lt;h4 id="once-a-transaction-reaches-the-binary-logs-it-should-roll-forward"&gt;Once a transaction reaches the binary logs it should roll forward.&lt;/h4&gt;
&lt;p&gt;To avoid the data inconsistency described above, MySQL does its own crash recovery before storage engine crash recovery. This recovery consists of making sure that all the transactions in the binary logs are flagged as “completed”. So if a transaction is between step #2 and #3 at the time of the crash, it is flagged as “completed” in the storage engine during MySQL crash recovery and it is rolled forward during storage engine crash recovery. In the case where this transaction has not reached at least a slave at the moment of the crash, it will appear in the master after crash recovery. It is important to note that this could happen even without semi-sync.&lt;/p&gt;
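&lt;p&gt;The recovery rule just described fits in a few lines of Python (a toy model, not actual server logic): a prepared transaction is rolled forward if and only if it made it into the binary logs before the crash.&lt;/p&gt;

```python
# Toy model of MySQL + InnoDB crash recovery as described above:
# MySQL scans the binary logs first, then the storage engine rolls
# forward every prepared transaction found there and rolls back the rest.

def recover(prepared, binlog):
    rolled_forward = [t for t in prepared if t in binlog]    # reached step #2
    rolled_back = [t for t in prepared if t not in binlog]   # crashed before step #2
    return rolled_forward, rolled_back
```

&lt;p&gt;In this model, truncating the binary logs before restarting (the trick discussed later in this post) amounts to shrinking the binlog set, so pending transactions get rolled back instead of rolled forward.&lt;/p&gt;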
&lt;h4 id="having-extra-transactions-on-a-recovered-master-can-happen-even-without-semi-sync"&gt;Having extra transactions on a recovered master can happen even without semi-sync.&lt;/h4&gt;
&lt;p&gt;The extra transactions that are visible on the recovered old master are because of the way MySQL and InnoDB carry out crash recovery. This is more likely to happen in a lossless semi-sync environment because of the delay introduced between steps #2 and #3 of the way MySQL commits transactions, but it could also happen without semi-sync if the timing is right.&lt;/p&gt;
&lt;h2 id="the-facebook-trick-to-avoid-extra-transactions"&gt;The Facebook Trick to Avoid Extra Transactions&lt;/h2&gt;
&lt;p&gt;There is an original trick to avoid having extra transactions on a recovered master. This trick was presented by Facebook during a talk at &lt;a href="https://www.percona.com/live/" target="_blank" rel="noopener noreferrer"&gt;Percona Live&lt;/a&gt; a few years ago (sorry, I cannot find any link to this, please post a comment below if you know of public content about this). The idea is to force MySQL to roll-back (instead of rolling forward) the transactions that are not yet “completed” in the storage engine. It must be noted that this should only be done on an old master that has been replaced by a slave. If it is done on a recovering master without failing over to a slave, a transaction that could have reached a slave would disappear from the master.&lt;/p&gt;
&lt;p&gt;To trick MySQL into rolling back the non “completed” transactions, Facebook truncates the binary logs before restarting the old master. This way, MySQL thinks that the crash happened before writing to the binary logs (step #2). So MySQL crash recovery will not flag the transactions as “complete” in the storage engine and these will be rolled back during storage engine crash recovery. This avoids the recovered old master having extra transactions. Obviously, because these transactions were once in the binary logs, they could have been replicated to slaves. So the Facebook trick avoids the old master being ahead of the new master, possibly at the cost of bringing the old master behind the new master.&lt;/p&gt;
&lt;p&gt;I know that Facebook then re-slaves the recovered old master to the new master, but I am not sure that this is possible with standard MySQL. The Facebook variant of MySQL includes additional features, and I think one of those is to put GTIDs in the InnoDB Redo logs. With this, and after the recovery of the old master, the GTID state of the database can be determined even if the binary logs are gone. In standard MySQL, I think that truncating the binary logs will result in losing the GTID state of the database, which will prevent re-slaving the old master to the new master. However, as InnoDB crash recovery prints the binary log position of the last committed transaction, I think re-slaving the old master to a &lt;a href="https://medium.com/booking-com-infrastructure/abstracting-binlog-servers-and-mysql-master-promotion-without-reconfiguring-all-slaves-44be1febc8a0" target="_blank" rel="noopener noreferrer"&gt;Binlog Server&lt;/a&gt; would be possible in a semi-sync environment.&lt;/p&gt;
&lt;p&gt;You can read more about semi-synchronous replication at Facebook below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://yoshinorimatsunobu.blogspot.com/2014/04/semi-synchronous-replication-at-facebook.html" target="_blank" rel="noopener noreferrer"&gt;Semi-Synchronous Replication at Facebook&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/live/data-performance-conference-2016/sessions/highs-and-lows-semi-synchronous-replication" target="_blank" rel="noopener noreferrer"&gt;The highs and lows of semi-synchronous replication&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="debunking-other-misconceptions"&gt;Debunking Other Misconceptions&lt;/h2&gt;
&lt;p&gt;Before closing this post, I would like to debunk other misconceptions that I often hear. Some people say that semi-sync (or lossless semi-sync) increases the availability of MySQL. In my humble opinion, &lt;strong&gt;this is false.&lt;/strong&gt; Semi-sync and lossless semi-sync actually lower availability, there is no increase here.&lt;/p&gt;
&lt;h4 id="lossless-semi-sync-is-not-a-high-availability-solution"&gt;Lossless semi-sync is not a high availability solution.&lt;/h4&gt;
&lt;p&gt;The statement that semi-sync and lossless semi-sync have lower availability than standard replication is justified by the introduction of new situations where transactions could be prevented from committing. As an example, if no semi-sync slaves are present, transactions will not be able to commit. The promise of lossless semi-sync is not about increasing availability, it is about preventing the loss of committed transactions in case of a crash. The cost of this promise is the added COMMIT latency and the new cases where COMMIT would be prevented from succeeding (thus reducing availability).&lt;/p&gt;
&lt;h4 id="group-replication-is-not-a-high-availability-solution"&gt;Group Replication is not a high availability solution.&lt;/h4&gt;
&lt;p&gt;For the same reasons, Group Replication (or Galera or Percona XtraDB Cluster) reduces availability. Group Replication also brings the promise of preventing the loss of committed transactions at the cost of adding COMMIT latency. There is also another cost of Group Replication: failing COMMIT in some situations (I do not know of any situation in standard MySQL where COMMIT can fail; if you know of one, please post a comment below). An example of COMMIT failing is mentioned in my previous post on &lt;a href="http://jfg-mysql.blogspot.com/2018/01/more-write-set-in-mysql-5-7-group-replication-certification.html" target="_blank" rel="noopener noreferrer"&gt;Group Replication certification&lt;/a&gt;. This additional cost comes with another interesting promise, but as this is not a post on Group Replication, I am not covering it here.&lt;/p&gt;
&lt;h4 id="group-replication-also-introduces-cases-where-commit-can-fail"&gt;Group Replication also introduces cases where COMMIT can fail.&lt;/h4&gt;
&lt;p&gt;This does not mean that lossless semi-sync and Group Replication cannot be used as a building block for a high availability solution, but by themselves and without other important components, they are not a high availability solution.&lt;/p&gt;
&lt;h2 id="thoughts-about-rpl_semi_sync_master_timeoutwait_no_slave"&gt;Thoughts about rpl_semi_sync_master_{timeout,wait_no_slave}&lt;/h2&gt;
&lt;p&gt;Above, I write that there are situations where a transaction will be prevented from committing. One of those situations is when there are no semi-sync slaves or when those slaves are not acknowledging transactions (for any good or bad reasons). There are two parameters to bypass this: &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/replication-options-master.html#sysvar_rpl_semi_sync_master_wait_no_slave" target="_blank" rel="noopener noreferrer"&gt;rpl_semi_sync_master_wait_no_slave&lt;/a&gt; and &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/replication-options-master.html#sysvar_rpl_semi_sync_master_timeout" target="_blank" rel="noopener noreferrer"&gt;rpl_semi_sync_master_timeout&lt;/a&gt;. Let’s talk about these a little.&lt;/p&gt;
&lt;p&gt;The rpl_semi_sync_master_wait_no_slave parameter allows MySQL to bypass the semi-sync wait when there are not enough semi-sync slaves (semi-sync in MySQL 5.7 can wait for more than one slave and this behavior is controlled by the &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/replication-options-master.html#sysvar_rpl_semi_sync_master_wait_for_slave_count" target="_blank" rel="noopener noreferrer"&gt;rpl_semi_sync_master_wait_for_slave_count&lt;/a&gt; parameter). The default value for the “wait_no_slave” parameter is ON, which means it still waits even if there are not enough semi-sync slaves. This is a safe default as it enforces the promise of semi-sync (not acknowledging COMMIT before the transaction is replicated to slaves). Even if setting this parameter to OFF is voiding that promise, I like that it exists (details below). However, I would not run MySQL unattended with waiting disabled in a full semi-sync environment.&lt;/p&gt;
&lt;p&gt;The rpl_semi_sync_master_timeout parameter allows MySQL to short-circuit the wait for slaves after a timeout, acknowledging COMMIT to the client even if the transaction was not replicated. Its default is 10 seconds, which I think is wrong. After 10 seconds, there are probably thousands of transactions waiting to commit on the master and MySQL is already struggling. If we want to prevent MySQL from struggling, this parameter should be lower. However, if we want a zero-loss failover (and failover takes more than 10 seconds), we should not commit transactions without replicating them to slaves, in which case this parameter should be higher. Higher or lower, which one should be used…&lt;/p&gt;
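&lt;p&gt;For reference, both are plain global variables (once the semi-sync master plugin is installed), so you can inspect and change them at runtime. A minimal sketch, with illustrative values rather than recommendations:&lt;/p&gt;

```sql
-- Inspect the current semi-sync settings on the master:
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_%';

-- Keep the semi-sync promise even when no slaves are connected,
-- and wait much longer than the 10-second default before
-- falling back to asynchronous replication:
SET GLOBAL rpl_semi_sync_master_wait_no_slave = ON;
SET GLOBAL rpl_semi_sync_master_timeout = 3600000;  -- in milliseconds
```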
&lt;p&gt;Using a “low” value for rpl_semi_sync_master_timeout looks very strange to me in a full semi-sync environment. It looks like the DBA cannot choose between committing as often as possible (standard non-semi-sync replication) or only committing transactions that are replicated (semi-sync). There is no way to have the best of both worlds here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;either someone wants &lt;strong&gt;high success rate on commit&lt;/strong&gt;, which means that the DBA does not deploy semi-sync (and the cost of this is to lose committed transactions on failover),&lt;/li&gt;
&lt;li&gt;or someone wants &lt;strong&gt;high persistence on committed transactions&lt;/strong&gt;, in which case the DBA deploys semi-sync at the cost of lowering the probability of a successful commit (and increasing commit latency).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I see one situation where these parameters are useful: transitioning from a non-semi-sync environment to a full semi-sync environment. During this transition, we want to learn about the new restrictions of semi-sync without causing too much disruption in production, and these parameters come in handy here. But once in a full semi-sync deployment, where we fully want to avoid losing committed transactions when a master crashes, I would not consider it a good idea to let transactions commit without being replicated to slaves.&lt;/p&gt;
&lt;p&gt;As a last comment on this, it has been suggested that a fully semi-sync-enabled master should crash itself when it has been blocked waiting for slave acknowledgment for too long. This is an interesting idea, as it is the only way MySQL has to unblock clients. I am not sure whether this is implemented in some variant of MySQL though (maybe the Facebook variant).&lt;/p&gt;
&lt;p&gt;I hope this post clarified semi-sync and lossless semi-sync replication. If you still have questions about this or on related subjects, feel free to post them in the comments below.&lt;/p&gt;</content:encoded>
      <author>Jean-François Gagné</author>
      <category>Galera</category>
      <category>InnoDB</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <category>Percona Server</category>
      <category>Percona XtraDB Cluster</category>
      <category>Replication</category>
      <media:thumbnail url="https://percona.community/blog/2018/08/semi-sync-replication-MySQL_hu_73ec11f75c462b8b.jpg"/>
      <media:content url="https://percona.community/blog/2018/08/semi-sync-replication-MySQL_hu_3e58863ae8ce0efe.jpg" medium="image"/>
    </item>
    <item>
      <title>Easy and Effective Way of Building External Dictionaries for ClickHouse with Pentaho Data Integration Tool</title>
      <link>https://percona.community/blog/2018/08/02/easy-effective-building-external-dictionaries-clickhouse-pentaho-data-integration-tool/</link>
      <guid>https://percona.community/blog/2018/08/02/easy-effective-building-external-dictionaries-clickhouse-pentaho-data-integration-tool/</guid>
      <pubDate>Thu, 02 Aug 2018 16:09:26 UTC</pubDate>
<description>In this post, I provide an illustration of how to use the Pentaho Data Integration (PDI) tool to set up external dictionaries in MySQL to support ClickHouse. Although I use MySQL in this example, you can use any PDI-supported source.</description>
<content:encoded>&lt;p&gt;In this post, I provide an illustration of how to use the Pentaho Data Integration (PDI) tool to set up external dictionaries in MySQL to support ClickHouse. Although I use MySQL in this example, you can use any PDI-supported source.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/pentaho-clickhouse.jpg" alt="pentaho pdt with clickhouse" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="clickhouse"&gt;ClickHouse&lt;/h2&gt;
&lt;p&gt;ClickHouse is an open-source column-oriented DBMS (columnar database management system) for online analytical processing. Source: &lt;a href="https://en.wikipedia.org/wiki/ClickHouse" target="_blank" rel="noopener noreferrer"&gt;wiki&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="pentaho-data-integration"&gt;Pentaho Data Integration&lt;/h2&gt;
&lt;p&gt;Information from the Pentaho &lt;a href="https://wiki.pentaho.com/display/EAI/Pentaho+Data+Integration+%28Kettle%29+Tutorial" target="_blank" rel="noopener noreferrer"&gt;wiki&lt;/a&gt;: Pentaho Data Integration (PDI, also called Kettle) is the component of Pentaho responsible for the Extract, Transform and Load (ETL) processes. Though ETL tools are most frequently used in data warehouses environments, PDI can also be used for other purposes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Migrating data between applications or databases&lt;/li&gt;
&lt;li&gt;Exporting data from databases to flat files&lt;/li&gt;
&lt;li&gt;Loading data massively into databases&lt;/li&gt;
&lt;li&gt;Data cleansing&lt;/li&gt;
&lt;li&gt;Integrating applications&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;PDI is easy to use. Every process is created with a graphical tool where you specify what to do without writing code to indicate how to do it; because of this, you could say that PDI is &lt;em&gt;metadata oriented&lt;/em&gt;.&lt;/p&gt;
&lt;h2 id="external-dictionaries"&gt;External dictionaries&lt;/h2&gt;
&lt;p&gt;You can add your own dictionaries from various data sources. The data source for a dictionary can be a local text or executable file, an HTTP(s) resource, or another DBMS. For more information, see “&lt;a href="https://clickhouse.yandex/docs/en/dicts/external_dicts_dict_sources/#dicts-external_dicts_dict_sources" target="_blank" rel="noopener noreferrer"&gt;Sources for external dictionaries&lt;/a&gt;”. ClickHouse:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fully or partially stores dictionaries in RAM.&lt;/li&gt;
&lt;li&gt;Periodically updates dictionaries and dynamically loads missing values. In other words, dictionaries can be loaded dynamically.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The configuration of external dictionaries is located in one or more files. The path to the configuration is specified in the &lt;a href="https://clickhouse.yandex/docs/en/operations/server_settings/settings/#server_settings-dictionaries_config" target="_blank" rel="noopener noreferrer"&gt;dictionaries_config&lt;/a&gt; parameter. Dictionaries can be loaded at server startup or at first use, depending on the &lt;a href="https://clickhouse.yandex/docs/en/operations/server_settings/settings/#server_settings-dictionaries_lazy_load" target="_blank" rel="noopener noreferrer"&gt;dictionaries_lazy_load&lt;/a&gt; setting. Source: &lt;a href="https://clickhouse.yandex/docs/en/query_language/dicts/" target="_blank" rel="noopener noreferrer"&gt;dictionaries&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="example-of-external-dictionary"&gt;Example of external dictionary&lt;/h3&gt;
&lt;p&gt;In short, a dictionary is a key-value mapping that can be used to store values which are later retrieved by key. It is a way to build a “star” schema, where &lt;em&gt;dictionaries are dimensions&lt;/em&gt;:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/example-external-dictionary.jpg" alt="example external dictionary" /&gt;&lt;/figure&gt; Using dictionaries, you can look up data by key (customer_id in this example). Why not just use tables and a plain JOIN? Here is what the documentation says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you need a JOIN for joining with dimension tables (these are relatively small tables that contain dimension properties, such as names for advertising campaigns), a JOIN might not be very convenient due to the bulky syntax and the fact that the right table is re-accessed for every query. For such cases, there is an “external dictionaries” feature that you should use instead of JOIN. For more information, see the section “External dictionaries”.&lt;/p&gt;&lt;/blockquote&gt;
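&lt;p&gt;As a concrete sketch, a dictionary lookup replaces the JOIN with a single function call. The dictionary and attribute names below match the config shown later in this post; the key value is illustrative:&lt;/p&gt;

```sql
-- JOIN-free lookup of one attribute by key
-- (dictionary keys are UInt64, hence the cast):
SELECT dictGetString('customers', 'name', toUInt64(42)) AS customer_name;
```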
&lt;blockquote&gt;
&lt;h5 id="main-point-of-this-blog-post"&gt;Main point of this blog post:&lt;/h5&gt;
&lt;blockquote&gt;
&lt;p&gt;Demonstrating filling a MySQL table using PDI tool and connecting this table to ClickHouse as an external dictionary. You can create a scheduled job for loading or updating this table.&lt;/p&gt;&lt;/blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;Filling dictionaries during the ETL process is a challenge. Of course you can write a script (or scripts) that will do all of this, but I’ve found a better way. Benefits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Self-documenting: you see exactly what the PDI job does&lt;/li&gt;
&lt;li&gt;Easy to modify (see the example below)&lt;/li&gt;
&lt;li&gt;Built-in logging&lt;/li&gt;
&lt;li&gt;Very flexible&lt;/li&gt;
&lt;li&gt;Free of charge if you use the &lt;a href="https://wiki.pentaho.com/display/COM/Community+Edition+Downloads" target="_blank" rel="noopener noreferrer"&gt;Community Edition&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="pentaho-data-integration-part"&gt;Pentaho Data Integration part&lt;/h2&gt;
&lt;p&gt;You need a UI for developing ETL, but it is not necessary for running a transformation or job. Here is an example of running them from a Linux shell (see PDI’s docs about jobs and transformations):&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;${PDI_FOLDER}/kitchen.sh -file=${PATH_TO_PDI_JOB_FILE}.kjb [-param:SOMEPARAM=SOMEVALUE]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;${PDI_FOLDER}/pan.sh -file=${PATH_TO_PDI_TRANSFORMATION_FILE}.ktr [-param:SOMEPARAM=SOMEVALUE]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
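&lt;p&gt;Since kitchen.sh runs headless, scheduling the dictionary load is a one-line cron job. A hypothetical example (the paths and the job file name are assumptions):&lt;/p&gt;

```text
# Refresh the D_CUSTOMER dimension table every hour via the PDI job:
0 * * * * /opt/pdi/kitchen.sh -file=/opt/pdi/jobs/refresh_d_customer.kjb
```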
&lt;p&gt;Here is a PDI transformation. In this example I use three tables as a source of information, but you can create very complex logic:
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/08/pdi-transformation_hu_e8066f1ac7a9a3fe.png 480w, https://percona.community/blog/2018/08/pdi-transformation_hu_fdb1807e374723c6.png 768w, https://percona.community/blog/2018/08/pdi-transformation_hu_4ce0782f6b6e0830.png 1400w"
src="https://percona.community/blog/2018/08/pdi-transformation.png" alt="PDI transformation" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="datasource1-definition-example"&gt;“Datasource1” definition example&lt;/h3&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/datasource-definition-example.png" alt="datasource definition example" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;“Dimension lookup/update” is the step that updates the MySQL table (MySQL in this example, but it could be any database supported by a PDI output step). This table will be the source for ClickHouse’s external dictionary:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/dimension-lookup-update-id-1.png" alt="dimension lookup update id " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Fields definition:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/dimension-lookup-update-fields-2.png" alt="dimension fields definition" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Once you have done this, hit the “SQL” button and it will generate the DDL code for the D_CUSTOMER table. In the step above you can manage how data is stored: update the record in place, or insert a new record (with time_start/time_end fields). Also, if you use PDI for ETL, you can generate a “technical key” for your dimension and store this key in ClickHouse, but that is a different story… For this example, I will use “id” as the key in the ClickHouse dictionary.&lt;/p&gt;
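&lt;p&gt;For orientation, the generated DDL will have roughly this shape. This is a hypothetical sketch: the column names follow the fields defined above, and the exact types and versioning columns depend on your “Dimension lookup/update” settings:&lt;/p&gt;

```sql
CREATE TABLE D_CUSTOMER (
  id BIGINT NOT NULL AUTO_INCREMENT,  -- dictionary key used by ClickHouse
  version INT,                        -- dimension row version
  date_from DATETIME,                 -- start of the row's validity
  date_to DATETIME,                   -- end of the row's validity
  customer_id INT,
  name VARCHAR(255),
  address VARCHAR(255),
  PRIMARY KEY (id)
);
```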
&lt;p&gt;The last step is setting up the external dictionary in ClickHouse’s server config.&lt;/p&gt;
&lt;h3 id="the-clickhouse-part"&gt;The ClickHouse part&lt;/h3&gt;
&lt;p&gt;Here is the external dictionary config; in this example I use MySQL:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;dictionaries&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;dictionary&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;name&gt;customers&lt;/name&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;source&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;!-- Source configuration --&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;mysql&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;port&gt;3306&lt;/port&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;user&gt;MySQL_User&lt;/user&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;password&gt;MySQL_Pass&lt;/password&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;replica&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;host&gt;MySQL_host&lt;/host&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;priority&gt;1&lt;/priority&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/replica&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;db&gt;DB_NAME&lt;/db&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;table&gt;D_CUSTOMER&lt;/table&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/mysql&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/source&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;layout&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;!-- Memory layout configuration --&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;flat/&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/layout&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;structure&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;id&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;name&gt;id&lt;/name&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/id&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;name&gt;name&lt;/name&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;type&gt;String&lt;/type&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;null_value&gt;&lt;/null_value&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;name&gt;address&lt;/name&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;type&gt;String&lt;/type&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;null_value&gt;&lt;/null_value&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;!-- Will be uncommented later
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;name&gt;phone&lt;/name&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;type&gt;String&lt;/type&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;null_value&gt;&lt;/null_value&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/structure&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;lifetime&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;min&gt;3600&lt;/min&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;max&gt;86400&lt;/max&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;/lifetime&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;/dictionary&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;/dictionaries&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Creating the fact table in ClickHouse:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/table-in-ClickHouse.png" alt="Create table in ClickHouse" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Some sample data:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/sample-data.png" alt="Sample data" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Now we can fetch data aggregated by customer name:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/aggregated-data-with-customer-name.png" alt="aggregated data with customer name" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="dictionary-modification"&gt;Dictionary modification&lt;/h3&gt;
&lt;p&gt;Sometimes you need to modify your dimensions. In this example, I am going to add a phone number to the “customers” dictionary. Not a problem at all. First, update the datasource in the PDI job:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/dictionary-modification.png" alt="dictionary modification add new field " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Open the “Dimension lookup/update” step and add the &lt;em&gt;phone&lt;/em&gt; field:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/add-a-field.png" alt="Add a field " /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;And hit the SQL button.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/alter-data-statement.png" alt="alter table statement" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Also add the “phone” field in ClickHouse’s dictionary config:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;   &lt;attribute&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;       &lt;name&gt;phone&lt;/name&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;               &lt;type&gt;String&lt;/type&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;               &lt;null_value&gt;&lt;/null_value&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;   &lt;/attribute&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;ClickHouse will update the dictionary on the fly and we are ready to go; if not, please check the logs. Now you can run the query without modifying fact_table:
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/08/query-without-modifying-fact.png" alt="query without modifying fact" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Also, note that a PDI job is an XML file that can be put under version control, so it is easy to track changes or roll back if needed. Please do not hesitate to ask if you have questions!&lt;/p&gt;</content:encoded>
      <author>Timur Solodovnikov</author>
      <category>ClickHouse</category>
      <category>Data Warehouse</category>
      <category>MySQL</category>
      <category>Open Source Databases</category>
      <category>Tools</category>
      <media:thumbnail url="https://percona.community/blog/2018/08/pentaho-clickhouse_hu_7cfbcd393858d937.jpg"/>
      <media:content url="https://percona.community/blog/2018/08/pentaho-clickhouse_hu_8b2d3d847705586b.jpg" medium="image"/>
    </item>
    <item>
      <title>How to Automate Minor Version Upgrades for MySQL on RDS</title>
      <link>https://percona.community/blog/2018/07/10/automate-minor-version-upgrades-mysql-rds/</link>
      <guid>https://percona.community/blog/2018/07/10/automate-minor-version-upgrades-mysql-rds/</guid>
      <pubDate>Tue, 10 Jul 2018 12:19:11 UTC</pubDate>
      <description>Amazon RDS for MySQL offers the option to automate minor version upgrades using the minor version upgrade policy, a property that lets you decide if Amazon is allowed to perform the upgrades on your behalf. Usually the goal is not to upgrade automatically every RDS instance but to keep up to date automatically non-production deployments. This helps you address engine issues as soon as possible and improve the automation of the deployment process.</description>
<content:encoded>&lt;p&gt;Amazon RDS for MySQL offers the option to automate &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.Minor" target="_blank" rel="noopener noreferrer"&gt;minor version upgrades&lt;/a&gt; using the &lt;em&gt;minor version upgrade policy&lt;/em&gt;, a property that lets you decide if Amazon is allowed to perform the upgrades on your behalf. Usually the goal is not to automatically upgrade every RDS instance, but to automatically keep non-production deployments up to date. This helps you address engine issues as soon as possible and improves the automation of the deployment process.&lt;/p&gt;
&lt;p&gt;If you are using the AWS Command Line Interface (CLI) and you have an instance called &lt;em&gt;test-rds01&lt;/em&gt;, it is as simple as setting one of&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[--auto-minor-version-upgrade | --no-auto-minor-version-upgrade]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws rds modify-db-instance --db-instance-identifier test-rds01 --apply-immediately
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--auto-minor-version-upgrade true&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;And if you use the AWS Management Console, it is just a check box. All sorted? Unfortunately not. The main problem is that Amazon performs those upgrades only in rare circumstances.&lt;/p&gt;
&lt;p&gt;As for Amazon’s &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.Minor" target="_blank" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Minor version upgrades only occur automatically if a minor upgrade replaces an unsafe version, such as a minor upgrade that contains bug fixes for a previous version. In all other cases, you must modify the DB instance manually to perform a minor version upgrade.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If the new version fixes any vulnerabilities that were present in the previous version, the minor version upgrade will automatically take place during the next weekly maintenance window on your DB instance. In all other cases, you must perform the minor version upgrade manually. So in most scenarios the automatic upgrade is unlikely to happen, and the auto-minor-version-upgrade attribute is not the way to keep MySQL on RDS updated to the latest available minor version.&lt;/p&gt;
&lt;h4 id="how-to-improve-automation-of-minor-version-upgrades-amazon-rds-for-mysql"&gt;How to improve automation of minor version upgrades Amazon RDS for MySQL&lt;/h4&gt;
&lt;p&gt;Let’s say you want to reduce the time a newer minor version reaches your development environments or even your production ones. How can you achieve that on RDS? First of all you have to consider the delay it takes for a minor version to reach RDS that can be anything between a few weeks and a few months.  And you might even not notice that a new minor is available as it is not obvious how to be notified when it is.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is the best way to be notified of new minor versions available on RDS MySQL?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the past you could (even automatically) monitor the &lt;a href="https://aws.amazon.com/releasenotes/?tag=releasenotes%23keywords%23amazon-rds" target="_blank" rel="noopener noreferrer"&gt;release notes page&lt;/a&gt;, but that page is no longer used for RDS. Now you have to monitor the &lt;a href="https://aws.amazon.com/new/#database-services" target="_blank" rel="noopener noreferrer"&gt;database announcement page&lt;/a&gt;, something that is hard to automate.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Any way to speed up the minor version upgrades?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can use the AWS CLI invoking the &lt;em&gt;describe-db-engine-versions&lt;/em&gt; API or write a simple Lambda function to retrieve the latest available minor version and act accordingly: you can, for example, notify your team of DBAs using Amazon Simple Notification Service (SNS) or you can automatically upgrade the instance. Let’s first see how to achieve that using the command line:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws --profile sandbox rds describe-db-engine-versions --engine 'mysql' --engine-version '5.7'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--query "DBEngineVersions[-1].EngineVersion"&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;where the -1 array index selects the very latest version of the engine available on RDS. Today the result is “5.7.21”, and a simple cron job can monitor it and trigger a notification on changes. Note that the same approach can be used to retrieve the latest available minor version for engines running MySQL 5.5 and MySQL 5.6, and for PostgreSQL engines too.&lt;/p&gt;
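&lt;p&gt;The comparison logic of such a cron job fits in a few lines of bash. A minimal sketch: the notification command is a placeholder, and the version values in the example call stand in for live aws CLI output:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Compare the latest minor version available on RDS with the last one we
# saw, and emit a notification marker when it changes.

notify_new_minor() {
  local latest="$1"      # e.g. output of: aws rds describe-db-engine-versions ...
  local last_seen="$2"   # e.g. read from a state file updated on each run
  if [ "$latest" != "$last_seen" ]; then
    echo "NEW:$latest"   # placeholder: replace with an SNS publish or email
  else
    echo "UNCHANGED"
  fi
}

# Example wiring with illustrative values; prints NEW:5.7.21
notify_new_minor "5.7.21" "5.7.19"
```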
&lt;p&gt;If you want to automatically and immediately upgrade your instance, the logic can easily be done in a few lines of bash with a cron job on an EC2 instance. For example, the following function requires only the database instance identifier:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rds_minor_upgrade() {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; rds_endpoint=$1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; engine_version="5.7"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; rds_current_minor=$(aws rds describe-db-instances
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --db-instance-identifier="$rds_endpoint" --query "DBInstances[].EngineVersion")
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; rds_latest_minor=$(aws rds describe-db-engine-versions -- engine 'mysql'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --engine-version $eng_version --query "DBEngineVersions[-1].EngineVersion")
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; if [ "$rds_latest_minor" != "$rds_current_minor" ]; then
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; aws rds modify-db-instance --apply-immediately --engine-version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; $rds_latest_minor --db-instance-identifier $rds_endpoint
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; fi
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
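&lt;p&gt;One caveat with the function above: a plain inequality test would also fire (and fail) if the instance somehow reported a version newer than the API result. A minimal sketch of a stricter check, assuming a sort implementation that supports version sort (-V, as in GNU coreutils):&lt;/p&gt;

```shell
# is_newer CURRENT CANDIDATE: true (exit 0) only when CANDIDATE is
# strictly newer than CURRENT, using version-aware sorting.
is_newer() {
  if [ "$1" = "$2" ]; then return 1; fi
  top=$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)
  [ "$top" = "$2" ]
}

if is_newer "5.7.21" "5.7.22"; then echo "upgrade needed"; fi
if is_newer "5.7.22" "5.7.21"; then echo "unexpected"; else echo "no upgrade"; fi
```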
&lt;p&gt;Alternatively, you can write the code as a scheduled Lambda function in your favourite language. For example, using the AWS node.js SDK you can &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDS.html" target="_blank" rel="noopener noreferrer"&gt;manage RDS&lt;/a&gt; and implement the logic above using the &lt;em&gt;rds.describeDBEngineVersions&lt;/em&gt; and &lt;em&gt;rds.modifyDBInstance&lt;/em&gt; methods to achieve the same result.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rds.describeDBEngineVersions(params, function(err, data) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;});
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;var params = {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;DBInstanceIdentifier: 'test-rds01',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ApplyImmediately: true,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;EngineVersion: '&lt;new minor version&gt;',
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;};
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rds.modifyDBInstance(params, function(err, data) {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;(...)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;});&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="speed-up-your-minor-upgrade"&gt;Speed up your minor upgrade!&lt;/h4&gt;
&lt;p&gt;To summarize, Amazon Web Services does not offer a built-in way to automatically upgrade an RDS instance to the latest available minor version in the most common scenarios, but it is very easy to achieve by taking advantage of the AWS CLI or one of the many SDKs.&lt;/p&gt;
&lt;p&gt;The goal is not to upgrade every deployment automatically; you would not normally use this for production. However, being able to monitor the latest available minor version on RDS and apply the change automatically to development and staging deployments can significantly reduce the time it takes to keep MySQL up to date on RDS and makes your upgrade process more automated.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;&lt;img src="https://percona.community/blog/2018/07/upgrade-minor-versions-MySQL-Amazon-RDS.jpg" alt="upgrade minor versions MySQL Amazon RDS" /&gt;&lt;/figure&gt;&lt;/p&gt;</content:encoded>
      <author>Renato Losio</author>
      <category>Amazon RDS</category>
      <category>AWS</category>
      <category>DevOps</category>
      <category>MySQL</category>
      <category>RDS</category>
      <category>Upgrade</category>
      <media:thumbnail url="https://percona.community/blog/2018/07/upgrade-minor-versions-MySQL-Amazon-RDS_hu_a12a8e21cdc46c96.jpg"/>
      <media:content url="https://percona.community/blog/2018/07/upgrade-minor-versions-MySQL-Amazon-RDS_hu_4b895255f425a679.jpg" medium="image"/>
    </item>
    <item>
      <title>A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps</title>
      <link>https://percona.community/blog/2018/06/28/nice-feature-in-mariadb-103-no-innodb-buffer-pool-in-coredumps/</link>
      <guid>https://percona.community/blog/2018/06/28/nice-feature-in-mariadb-103-no-innodb-buffer-pool-in-coredumps/</guid>
      <pubDate>Thu, 28 Jun 2018 12:28:58 UTC</pubDate>
      <description>MariaDB 10.3 is now generally available (10.3.7 was released GA on 2018-05-25). The article What’s New in MariaDB Server 10.3 by the MariaDB Corporation lists three key improvements in 10.3: temporal data processing, Oracle compatibility features, and purpose-built storage engines. Even if I am excited about MyRocks and curious about Spider, I am also very interested in less flashy but still very important changes that make running the database in production easier. This post describes one such improvement: no InnoDB Buffer Pool in core dumps.</description>
      <content:encoded>&lt;p&gt;MariaDB 10.3 is now generally available (10.3.7 was released GA on 2018-05-25). The article &lt;a href="https://mariadb.com/resources/blog/whats-new-mariadb-server-103" target="_blank" rel="noopener noreferrer"&gt;What’s New in MariaDB Server 10.3&lt;/a&gt; by the MariaDB Corporation lists three key improvements in 10.3: temporal data processing, Oracle compatibility features, and purpose-built storage engines. Even if I am excited about &lt;a href="https://mariadb.com/kb/en/library/myrocks/" target="_blank" rel="noopener noreferrer"&gt;MyRocks&lt;/a&gt; and curious on &lt;a href="https://mariadb.com/kb/en/library/spider-storage-engine-overview/" target="_blank" rel="noopener noreferrer"&gt;Spider&lt;/a&gt;, I am also very interested in less flashy but still very important changes that make running the database in production easier. This post describes such improvement: &lt;strong&gt;no&lt;/strong&gt; &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;InnoDB Buffer Pool&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;in core dumps&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Hidden in the &lt;em&gt;Compression&lt;/em&gt; section of the page &lt;a href="https://mariadb.com/kb/en/library/changes-improvements-in-mariadb-103/" target="_blank" rel="noopener noreferrer"&gt;Changes &amp; Improvements in MariaDB 10.3&lt;/a&gt; from the &lt;a href="https://mariadb.com/kb/" target="_blank" rel="noopener noreferrer"&gt;Knowledge Base&lt;/a&gt;, we can read:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;On Linux, shrink the core dumps by omitting the InnoDB buffer pool&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This is it, no more details, only a link to &lt;a href="https://jira.mariadb.org/browse/MDEV-10814" target="_blank" rel="noopener noreferrer"&gt;MDEV-10814 (Feature request: Optionally exclude large buffers from core dumps)&lt;/a&gt;. This Jira ticket was opened on 2016-09-15 by a well-known MariaDB Support Engineer: Hartmut Holzgraefe. I know Booking.com had been asking for this feature for a long time; this is even mentioned by Hartmut in a &lt;a href="https://github.com/MariaDB/server/pull/333#issuecomment-296206130" target="_blank" rel="noopener noreferrer"&gt;GitHub comment&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The ways this feature eases operations with MariaDB are well documented by Hartmut in the description of the Jira ticket:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;it needs less available disk space to store core dumps,&lt;/li&gt;
&lt;li&gt;it reduces the time required to write core dumps (and hence restart MySQL after a crash),&lt;/li&gt;
&lt;li&gt;it improves security by omitting a substantial amount of user data from core dumps.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In addition to that, I would add that smaller core dumps are easier to share in tickets. I am often asked by support engineers to provide a core dump in relation to a crash, and my reply is “&lt;em&gt;How do you want me to provide you with a 192 GB file?&lt;/em&gt;” (or even bigger files, as I have seen MySQL/MariaDB used on servers with 384 GB of RAM). This often leads to a “&lt;em&gt;Let me think about this and I will come back to you&lt;/em&gt;” answer. Avoiding the InnoDB Buffer Pool in core dumps makes this less of an issue for both DBAs and support providers.&lt;/p&gt;
&lt;p&gt;Before continuing the discussion on this improvement, I need to give more details about what a core dump is.&lt;/p&gt;
&lt;h4 id="what-is-a-core-dump-and-why-is-it-useful"&gt;&lt;strong&gt;What is a Core Dump and Why is it Useful ?&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;By looking at the &lt;a href="http://man7.org/linux/man-pages/man5/core.5.html" target="_blank" rel="noopener noreferrer"&gt;Linux manual page for core (and core dump file)&lt;/a&gt;, we can read:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;[A core dump is] a disk file containing an image of the process’s memory at the time of termination. This image can be used in a debugger to inspect the state of the program at the time that it terminated.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/Core_dump" target="_blank" rel="noopener noreferrer"&gt;Wikipedia article for core dump&lt;/a&gt; also tells us that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the core dump includes key pieces of program state such as processor registers, memory management details, and other processor and operating system flags and information,&lt;/li&gt;
&lt;li&gt;the name comes from &lt;a href="https://en.wikipedia.org/wiki/Magnetic_core_memory" target="_blank" rel="noopener noreferrer"&gt;magnetic core memory&lt;/a&gt;, the principal form of random access memory from the 1950s to the 1970s, and the name has remained even if magnetic core technology is obsolete.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So a core dump is a file that can be very useful to understand the context of a crash. The exact details of how to use a core dump have already been discussed in many places and are beyond the scope of this post. The interested reader can learn more by following these links:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/blog/2011/08/26/getting-mysql-core-file-on-linux/" target="_blank" rel="noopener noreferrer"&gt;Getting MySQL Core file on Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mariadb.com/kb/en/library/how-to-produce-a-full-stack-trace-for-mysqld/" target="_blank" rel="noopener noreferrer"&gt;How to Produce a Full Stack Trace for mysqld&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.percona.com/blog/2015/08/17/mysql-is-crashing-a-support-engineers-point-of-view/" target="_blank" rel="noopener noreferrer"&gt;MySQL is crashing: a support engineer’s point of view&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dropbox.com/s/j4salsgphyrsnjw/Cheat%20Sheet.pdf" target="_blank" rel="noopener noreferrer"&gt;Database issue cheat sheet (including gdb commands for using core dumps)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/crashing.html" target="_blank" rel="noopener noreferrer"&gt;What to Do If MySQL Keeps Crashing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/using-gdb-on-mysqld.html" target="_blank" rel="noopener noreferrer"&gt;Debugging mysqld under gdb&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Update 2018-07-31&lt;/strong&gt;: more links about how to use core dumps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mysqlentomologist.blogspot.com/2017/08/how-to-find-values-of-session-variables.html" target="_blank" rel="noopener noreferrer"&gt;How to Find Values of Session Variables With gdb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://mysqlentomologist.blogspot.com/2017/07/how-to-find-processlist-thread-id-in-gdb.html" target="_blank" rel="noopener noreferrer"&gt;How to Find Processlist Thread id in gdb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://archive.fosdem.org/2015/schedule/event/mysql_gdb/attachments/slides/595/export/events/attachments/mysql_gdb/slides/595/FOSDEM2015_gdb_tips_and_tricks_for_MySQL_DBAs.pdf" target="_blank" rel="noopener noreferrer"&gt;gdb tips and tricks for MySQL DBAs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
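&lt;p&gt;For reference, producing a core file from mysqld on Linux usually involves three settings; the values below are illustrative examples, not recommendations:&lt;/p&gt;

```text
# 1. Allow core files for the process (shell: ulimit -c unlimited,
#    or LimitCORE=infinity in a systemd drop-in for the service).
# 2. Tell the kernel where to write them, e.g. in /etc/sysctl.d/:
#      kernel.core_pattern = /var/tmp/core.%e.%p
# 3. Ask mysqld to dump core on crash, in my.cnf:
#      [mysqld]
#      core-file
```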
&lt;p&gt;Now that we know more about core dumps, we can get back to the discussion of the new feature.&lt;/p&gt;
&lt;h4 id="the-no-innodb-buffer-pool-in-core-dump-feature-from-mariadb-103"&gt;&lt;strong&gt;The&lt;/strong&gt; &lt;strong&gt;&lt;em&gt;no InnoDB Buffer Pool in Core Dump&lt;/em&gt;&lt;/strong&gt; &lt;strong&gt;Feature from MariaDB 10.3&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;As already pointed out above, there are very few details in the release notes about how this feature works. By digging into &lt;a href="https://jira.mariadb.org/browse/MDEV-10814" target="_blank" rel="noopener noreferrer"&gt;MDEV-10814&lt;/a&gt;, following pointers to pull requests (#&lt;a href="https://github.com/MariaDB/server/pull/333" target="_blank" rel="noopener noreferrer"&gt;333&lt;/a&gt;, #&lt;a href="https://github.com/MariaDB/server/pull/364" target="_blank" rel="noopener noreferrer"&gt;364&lt;/a&gt;, #&lt;a href="https://github.com/MariaDB/server/pull/365" target="_blank" rel="noopener noreferrer"&gt;365&lt;/a&gt;, …), and reading the &lt;a href="https://github.com/MariaDB/server/pull/364/commits/b600f30786816e33c1706dd36cdabf21034dc781" target="_blank" rel="noopener noreferrer"&gt;commit message&lt;/a&gt;, I was able to gather this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An initial patch was written by Hartmut in 2015.&lt;/li&gt;
&lt;li&gt;It passes the MADV_DONTDUMP flag to the &lt;a href="http://man7.org/linux/man-pages/man2/madvise.2.html" target="_blank" rel="noopener noreferrer"&gt;madvise&lt;/a&gt; system call (available in Linux kernel 3.4 and higher).&lt;/li&gt;
&lt;li&gt;Hartmut’s patch was rebased by Daniel Black, a well-known MariaDB Community Contributor (pull request #&lt;a href="https://github.com/MariaDB/server/pull/333" target="_blank" rel="noopener noreferrer"&gt;333&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;The first work by Daniel had a configuration parameter to allow including/excluding the InnoDB Buffer Pool in/from core dumps, but after a &lt;a href="https://github.com/MariaDB/server/pull/333#issuecomment-295460913" target="_blank" rel="noopener noreferrer"&gt;discussion&lt;/a&gt; in pull request #333, it was decided that the RELEASE builds would not put the InnoDB Buffer Pool in core dumps and that &lt;a href="https://mariadb.com/kb/en/library/compiling-mariadb-for-debugging/" target="_blank" rel="noopener noreferrer"&gt;DEBUG builds&lt;/a&gt; would include it (more about this below).&lt;/li&gt;
&lt;li&gt;The function buf_madvise_do_dump is added but never invoked by the server; it is there to be called from a debugger to re-enable full core dumping if needed (from this &lt;a href="https://github.com/MariaDB/server/pull/364/commits/b600f30786816e33c1706dd36cdabf21034dc781" target="_blank" rel="noopener noreferrer"&gt;commit message&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-redo-log-buffer.html" target="_blank" rel="noopener noreferrer"&gt;InnoDB Redo Log buffer&lt;/a&gt; is also excluded from core dumps (from this &lt;a href="https://github.com/MariaDB/server/pull/364#issuecomment-345655419" target="_blank" rel="noopener noreferrer"&gt;comment&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
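&lt;p&gt;One way to observe the feature on a live server is to look for memory mappings carrying the dd flag (set by madvise with MADV_DONTDUMP) in the smaps file of the mysqld process. The sketch below runs the check against a captured two-mapping sample, since it cannot assume a running server; on a real system the input would come from /proc/$(pidof mysqld)/smaps:&lt;/p&gt;

```shell
# Mappings whose VmFlags line contains "dd" are excluded from core dumps.
# The sample mimics two /proc/PID/smaps entries: one flagged, one not.
sample='7f0000000000-7f0040000000 rw-p 00000000 00:00 0
VmFlags: rd wr mr mw me ac sd dd
7f0050000000-7f0050100000 rw-p 00000000 00:00 0
VmFlags: rd wr mr mw me ac sd'
count=$(printf '%s\n' "$sample" | grep -c '^VmFlags:.* dd')
echo "$count mapping(s) flagged dd (omitted from core dumps)"
```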
&lt;p&gt;I have doubts about the absence of a configuration parameter for controlling the feature. Even if the InnoDB Buffer Pool (as written above, the feature also concerns the InnoDB Redo Log buffer, but I will only mention the InnoDB Buffer Pool in the rest of this post for brevity) is not often required in core dumps, Marko Mäkelä, InnoDB Engineer at MariaDB.com, &lt;a href="https://github.com/MariaDB/server/pull/364#issuecomment-325307968" target="_blank" rel="noopener noreferrer"&gt;mentioned sometimes needing it&lt;/a&gt; to investigate deadlocks, corruption or race conditions. Moreover, I was recently asked, in a support ticket, to provide a core dump to understand a crash in MariaDB 10.2 (public bug report in &lt;a href="https://jira.mariadb.org/browse/MDEV-15608" target="_blank" rel="noopener noreferrer"&gt;MDEV-15608&lt;/a&gt;): it looks to me like the InnoDB Buffer Pool would be useful here. Bottom line: having the InnoDB Buffer Pool (and Redo Log buffer) in core dumps might not be regularly useful, but it is sometimes needed.&lt;/p&gt;
&lt;p&gt;To include the InnoDB Buffer Pool in core dumps, DBAs can install DEBUG binaries or they can use a debugger to call the buf_madvise_do_dump function (well thought out by Daniel to compensate for the absence of a configuration parameter, but there are caveats described below). Both solutions are suboptimal in my humble opinion. For #2, there are risks and drawbacks to using a debugger on a live production database (when it works … see below for a war story). For #1, and unless I am mistaken, DEBUG binaries are not available from the &lt;a href="https://downloads.mariadb.org/" target="_blank" rel="noopener noreferrer"&gt;MariaDB download site&lt;/a&gt;. This means that they will have to be built by engineers at your favorite support provider, or that DBAs will have to &lt;a href="https://mariadb.com/kb/en/library/compiling-mariadb-for-debugging/" target="_blank" rel="noopener noreferrer"&gt;manually compile&lt;/a&gt; them: this is a lot of work to expect from either party. I also think that the usage of DEBUG binaries in production should be minimized, not encouraged (DEBUG binaries are for developers, not DBAs); so I feel we are heading in the wrong direction. Bottom line: I would not be surprised (&lt;a href="https://github.com/MariaDB/server/pull/333#issuecomment-295644884" target="_blank" rel="noopener noreferrer"&gt;and I am not alone&lt;/a&gt;) if a parameter were added in a future release to ease investigation of InnoDB bugs.&lt;/p&gt;
&lt;p&gt;Out of curiosity, I checked the core dump sizes for some versions of MySQL and MariaDB with &lt;a href="https://github.com/datacharmer/dbdeployer" target="_blank" rel="noopener noreferrer"&gt;dbdeployer&lt;/a&gt; (if you have not tried it yet, you should probably spend time &lt;a href="https://www.percona.com/blog/2018/05/24/using-dbdeployer-to-manage-mysql-percona-server-and-mariadb-sandboxes/" target="_blank" rel="noopener noreferrer"&gt;learning how to use dbdeployer&lt;/a&gt;: it is very useful). Here are my naive first results with default configurations and freshly started mysqld:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;487 MB and 666 MB core dumps with MySQL 5.7.22 and 8.0.11 respectively,&lt;/li&gt;
&lt;li&gt;673 MB and 671 MB core dumps with MariaDB 10.2.15 and MariaDB 10.3.7 respectively.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I tried understanding where the inflation is coming from in MySQL 8.0.11 but I tripped on &lt;a href="https://bugs.mysql.com/bug.php?id=90561" target="_blank" rel="noopener noreferrer"&gt;Bug#90561&lt;/a&gt; which prevents my investigations. We will have to wait for 8.0.12 to know more…&lt;/p&gt;
&lt;p&gt;Back to the feature, I was surprised to see no shrinking between MariaDB 10.2 and 10.3. To make sure nothing was wrong, I tried to force the InnoDB Buffer Pool into the core dump by calling the buf_madvise_do_dump function. I used the &lt;a href="https://archive.fosdem.org/2015/schedule/event/mysql_gdb/attachments/slides/595/export/events/attachments/mysql_gdb/slides/595/FOSDEM2015_gdb_tips_and_tricks_for_MySQL_DBAs.pdf" target="_blank" rel="noopener noreferrer"&gt;slides&lt;/a&gt; from the &lt;a href="https://archive.fosdem.org/2015/schedule/event/mysql_gdb/" target="_blank" rel="noopener noreferrer"&gt;gdb tips and tricks for MySQL DBAs&lt;/a&gt; talk by &lt;a href="https://mysqlentomologist.blogspot.com/" target="_blank" rel="noopener noreferrer"&gt;Valerii Kravchuk&lt;/a&gt; presented at FOSDEM 2015 (I hope a similar talk will be given soon at Percona Live as my gdb skills need a lot of improvement), but I got the following result:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ gdb -p $(pidof mysqld) -ex "call buf_madvise_do_dump()" -batch
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[...]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;No symbol "buf_madvise_do_dump" in current context.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;After investigations, I understood that the generic MariaDB Linux packages that I used with dbdeployer are compiled without the feature. A reason could be that there is no way to know that those packages will be used on a Linux 3.4+ kernel (without a recent enough kernel, the MADV_DONTDUMP argument does not exist for the madvise system call). To be able to test the feature, I would either have to build my own binaries or try packages for a specific distribution. I chose to avoid compilation but this was more tedious than I thought…&lt;/p&gt;
&lt;p&gt;By the way, maybe the buf_madvise_do_dump function should always be present in binaries and return a non-zero value when failing with a detailed message in the error logs. This would have spared me spending time understanding why it did not work in my case. I opened &lt;a href="https://jira.mariadb.org/browse/MDEV-16605" target="_blank" rel="noopener noreferrer"&gt;MDEV-16605: Always include buf_madvise_do_dump in binaries&lt;/a&gt; for that.&lt;/p&gt;
&lt;p&gt;Back to my tests and to see the feature in action, I started an &lt;a href="http://releases.ubuntu.com/16.04/" target="_blank" rel="noopener noreferrer"&gt;Ubuntu 16.04.4 LTS&lt;/a&gt; instance in AWS (it comes with a 4.4 kernel). But again, I could not call buf_madvise_do_dump. After more investigation, I understood that the Ubuntu and Debian packages are &lt;a href="https://sysadmin.compxtreme.ro/how-to-add-debug-symbols-for-mariadb-debianubuntu-packages/" target="_blank" rel="noopener noreferrer"&gt;not compiled with symbols&lt;/a&gt;, so calling buf_madvise_do_dump cannot be easily done on those (I later learned that there are &lt;em&gt;mariadb-server-10.3-dbgsym&lt;/em&gt; packages, but I did not test them). I ended up falling back to CentOS 7.5, which comes with a 3.10 kernel, and it worked! Below are the core dump sizes with and without calling buf_madvise_do_dump:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;527 MB core dump on MariaDB 10.3.7 (without calling buf_madvise_do_dump),&lt;/li&gt;
&lt;li&gt;674 MB core dump on MariaDB 10.3.7 (with calling buf_madvise_do_dump).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I was surprised by bigger core dumps in MariaDB 10.3 than in MySQL 5.7, so I spent some time looking into that. It would have been much easier with the &lt;a href="https://dev.mysql.com/doc/mysql-perfschema-excerpt/5.7/en/memory-summary-tables.html" target="_blank" rel="noopener noreferrer"&gt;Memory Instrumentation&lt;/a&gt; from &lt;a href="https://dev.mysql.com/doc/refman/5.5/en/performance-schema.html" target="_blank" rel="noopener noreferrer"&gt;Performance Schema&lt;/a&gt;, but this is not yet available in MariaDB. There is a Jira ticket opened for that (&lt;a href="https://jira.mariadb.org/browse/MDEV-16431" target="_blank" rel="noopener noreferrer"&gt;MDEV-16431&lt;/a&gt;); if you are also interested in this feature, I suggest you vote for it.&lt;/p&gt;
&lt;p&gt;I guessed that the additional RAM used by MariaDB 10.3 (compared to MySQL 5.7) comes from the caches for the &lt;a href="https://mariadb.com/kb/en/library/myisam-storage-engine/" target="_blank" rel="noopener noreferrer"&gt;MyISAM&lt;/a&gt; and &lt;a href="https://mariadb.com/kb/en/library/aria-storage-engine/" target="_blank" rel="noopener noreferrer"&gt;Aria&lt;/a&gt; storage engines. Those caches, whose sizes are controlled by the &lt;a href="https://mariadb.com/kb/en/library/myisam-system-variables/#key_buffer_size" target="_blank" rel="noopener noreferrer"&gt;key_buffer_size&lt;/a&gt; and &lt;a href="https://mariadb.com/kb/en/library/aria-system-variables/#aria_pagecache_buffer_size" target="_blank" rel="noopener noreferrer"&gt;aria_pagecache_buffer_size&lt;/a&gt; parameters, are 128 MB by default in MariaDB 10.3 (more discussion about these sizes below). I tried shrinking both caches to 8 MB (&lt;a href="https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_key_buffer_size" target="_blank" rel="noopener noreferrer"&gt;the default value in MySQL since at least 5.5&lt;/a&gt;), but I got another surprise:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&gt; SET GLOBAL key_buffer_size = 8388608;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.001 sec)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&gt; SET GLOBAL aria_pagecache_buffer_size = 8388608;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ERROR 1238 (HY000): Variable 'aria_pagecache_buffer_size' is a read only variable&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The &lt;a href="https://mariadb.com/kb/en/library/aria-system-variables/#aria_pagecache_buffer_size" target="_blank" rel="noopener noreferrer"&gt;aria_pagecache_buffer_size&lt;/a&gt; parameter is not dynamic ! This is annoying as I like tuning parameters to be dynamic, so I opened &lt;a href="https://jira.mariadb.org/browse/MDEV-16606" target="_blank" rel="noopener noreferrer"&gt;MDEV-16606: Make aria_pagecache_buffer_size dynamic&lt;/a&gt; for that. I tested with only shrinking the MyISAM cache and by modifying the startup configuration for Aria. The results for the core dump sizes are the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;527 MB core dump for the default behavior,&lt;/li&gt;
&lt;li&gt;400 MB core dump by shrinking the MyISAM cache from 128 MB to 8 MB,&lt;/li&gt;
&lt;li&gt;268 MB core dump by also shrinking the Aria cache from 128 MB to 8 MB.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We are now at a core dump size smaller than MySQL 5.7.22: this is the result I was expecting.&lt;/p&gt;
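&lt;p&gt;The configuration behind that last result can be sketched as follows (an illustrative my.cnf fragment, not the exact file used for the tests):&lt;/p&gt;

```ini
[mysqld]
# 8 MB, matching the MySQL default for this cache
key_buffer_size = 8M
# not dynamic, so it must be set at startup (see MDEV-16606)
aria_pagecache_buffer_size = 8M
```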
&lt;p&gt;I did some more tests with a larger InnoDB Buffer Pool and with a larger InnoDB Redo Log buffer while keeping MyISAM and Aria cache sizes to 8 MB. Here are the results of the sizes of the compact core dump (default behavior) vs the full core dump (using gdb):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;340 MB vs 1.4 GB core dumps when growing the InnoDB Buffer Pool from 128 MB to 1 GB,&lt;/li&gt;
&lt;li&gt;357 MB vs 1.7 GB core dumps when also growing the InnoDB Redo Log buffer from 16 MB to 128 MB.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I think the results above show the usefulness of the no InnoDB Buffer Pool in core dump feature.&lt;/p&gt;
&lt;h4 id="potential-improvements-of-the-shrinking-core-dump-feature"&gt;&lt;strong&gt;Potential Improvements of the&lt;/strong&gt; &lt;strong&gt;&lt;em&gt;Shrinking&lt;/em&gt;&lt;/strong&gt; &lt;strong&gt;Core Dump Feature&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The end goal of excluding the InnoDB Buffer Pool from core dumps is to make generating and working with those files easier. As already mentioned above, the space and time taken to save core dumps are the main obstacles, and sharing them is also an issue (including leaking a lot of user data).&lt;/p&gt;
&lt;p&gt;Ideally, I would like to always run MySQL/MariaDB with core dump enabled on crashes (I see one exception when using &lt;a href="https://www.percona.com/blog/2016/04/08/mysql-data-at-rest-encryption/" target="_blank" rel="noopener noreferrer"&gt;database-level encryption&lt;/a&gt;, to avoid leaking data). I even think this should be the default behavior, but this is another discussion that I will not start here. My main motivation is that if/when MySQL crashes, I want all information needed to understand the crash (and eventually report a bug) without having to change parameters, restart the database, and generate the same crash again. Obviously, this configuration is unsuitable for servers with a lot of RAM and with a large InnoDB Buffer Pool. MariaDB 10.3 takes a big step forward by excluding the InnoDB Buffer Pool (and Redo Log buffer) from core dumps, but what else could be done to achieve the goal of always running MySQL with core dump enabled?&lt;/p&gt;
&lt;p&gt;There is a &lt;a href="https://github.com/MariaDB/server/pull/366" target="_blank" rel="noopener noreferrer"&gt;pull request to exclude the query cache from core dumps&lt;/a&gt; (also by Daniel Black, thanks for this work). When MariaDB is run with a large &lt;a href="https://mariadb.com/kb/en/library/query-cache/" target="_blank" rel="noopener noreferrer"&gt;query cache&lt;/a&gt; (and I know this is unusual, but if you know of a valid real world use case, please add a comment below), excluding it from core dumps is good. But I am not sure this is a generally needed improvement:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mysqlserverteam.com/mysql-8-0-retiring-support-for-the-query-cache/" target="_blank" rel="noopener noreferrer"&gt;MySQL 8.0 has retired the query cache&lt;/a&gt;,&lt;/li&gt;
&lt;li&gt;the &lt;a href="https://mariadb.com/kb/en/library/server-system-variables/#query_cache_type" target="_blank" rel="noopener noreferrer"&gt;query cache is disabled by default from MariaDB 10.1.7&lt;/a&gt;,&lt;/li&gt;
&lt;li&gt;and the default value for the &lt;a href="https://mariadb.com/kb/en/library/server-system-variables/#query_cache_size" target="_blank" rel="noopener noreferrer"&gt;query cache size was zero before MariaDB 10.1.7&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It looks like there is a consensus that the query cache is a very niche feature that should otherwise be disabled, so this work might not be the one that benefits the most people. Still, it is good work to have done.&lt;/p&gt;
&lt;p&gt;I would like similar work to be done on MyISAM, Aria, &lt;a href="https://mariadb.com/kb/en/library/tokudb/" target="_blank" rel="noopener noreferrer"&gt;TokuDB&lt;/a&gt; and MyRocks. As we saw above, for default deployments, there is an opportunity to remove 256 MB from core dumps by excluding the MyISAM and Aria caches. I think this work is particularly important for those two storage engines, as they are loaded by default in MariaDB. Also, considering the relatively low usage of the MyISAM and Aria storage engines, maybe the default values for their caches should be lower: I opened &lt;a href="https://jira.mariadb.org/browse/MDEV-16607" target="_blank" rel="noopener noreferrer"&gt;MDEV-16607: Consider smaller defaults for MyISAM and Aria cache sizes&lt;/a&gt; for that.&lt;/p&gt;
&lt;p&gt;I cannot think of any other large memory buffers that I would like to exclude from core dumps. If you think about one, please add a comment below.&lt;/p&gt;
&lt;p&gt;Finally, I would like the shrinking core dump feature to also appear in Oracle MySQL and Percona Server, so I opened &lt;a href="http://bugs.mysql.com/bug.php?id=91455" target="_blank" rel="noopener noreferrer"&gt;Bug#91455: Implement core dump size reduction&lt;/a&gt; for that. As an anecdote, I was recently working on a Percona Server crash in production, and we were reluctant to enable core dumps because of the additional minutes of downtime needed to write the file to disk. In this case, excluding the InnoDB Buffer Pool from the core dump would have been very useful!&lt;/p&gt;</content:encoded>
      <author>Jean-François Gagné</author>
      <category>core dump</category>
      <category>InnoDB</category>
      <category>InnoDB Buffer Pool</category>
      <category>MariaDB</category>
      <category>MySQL</category>
      <media:thumbnail url="https://percona.community/blog/2018/06/InnoDB-buffer-pool-size_hu_939b40b5bf42d00f.jpg"/>
      <media:content url="https://percona.community/blog/2018/06/InnoDB-buffer-pool-size_hu_84b541430fa9029.jpg" medium="image"/>
    </item>
    <item>
      <title>TiSpark: More Data Insights, No More ETL</title>
      <link>https://percona.community/blog/2018/06/18/tispark-data-insights-no-etl/</link>
      <guid>https://percona.community/blog/2018/06/18/tispark-data-insights-no-etl/</guid>
      <pubDate>Mon, 18 Jun 2018 13:41:53 UTC</pubDate>
      <description>When we released TiDB 2.0 in April, part of that announcement also included the release of TiSpark 1.0–an integral part of the TiDB platform that makes complex analytics on “fresh” transactional data possible. Since then, many people in the TiDB community have been asking for more information about TiSpark. In this post, I will explain the motivation, inner workings, and future roadmap of this project.</description>
      <content:encoded>&lt;p&gt;When we released &lt;a href="http://bit.ly/tidb_2_0" target="_blank" rel="noopener noreferrer"&gt;TiDB 2.0&lt;/a&gt; in April, part of that announcement also included the release of &lt;a href="https://github.com/pingcap/tispark" target="_blank" rel="noopener noreferrer"&gt;TiSpark&lt;/a&gt; 1.0–an integral part of the TiDB platform that makes complex analytics on “fresh” transactional data possible. Since then, many people in the TiDB community have been asking for more information about TiSpark. In this post, I will explain the motivation, inner workings, and future roadmap of this project.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;The motivation behind building TiSpark was to enable real-time analytics on TiDB without the delay and challenges of ETL. &lt;a href="https://en.wikipedia.org/wiki/Extract,_transform,_load" target="_blank" rel="noopener noreferrer"&gt;Extract, transform, and load (ETL)&lt;/a&gt;–a process to extract data from operational databases, transform that data, then load it into a database designed to support analytics–has been one of the most complex, tedious, error-prone, and therefore disliked tasks for many data engineers. However, it was a necessary evil to make data useful, because there haven’t been good solutions on the market to render ETL obsolete–until now.&lt;/p&gt;
&lt;p&gt;With the emergence of open-source database solutions like TiDB, the promise of a hybrid transactional and analytical processing (HTAP) architecture, a term first coined by &lt;a href="https://www.gartner.com/doc/3599217/market-guide-htapenabling-inmemory-computing" target="_blank" rel="noopener noreferrer"&gt;Gartner&lt;/a&gt;, is fast becoming a reality. Whether you subscribe to HTAP or other similar terms, like hybrid operational and analytical processing (HOAP) (by &lt;a href="https://451research.com/report-short?entityId=93844" target="_blank" rel="noopener noreferrer"&gt;451 Research&lt;/a&gt;) or “Translytical” (by &lt;a href="https://www.forrester.com/report/The+Forrester+Wave+Translytical+Data+Platforms+Q4+2017/-/E-RES134282" target="_blank" rel="noopener noreferrer"&gt;Forrester&lt;/a&gt;), it’s clear that the industry is calling for an end to the separation of online transactional processing (OLTP) and online analytical processing (OLAP). No one wants to deal with ETL anymore.&lt;/p&gt;
&lt;p&gt;To make this possible, PingCAP and its open source contributors built &lt;a href="https://github.com/pingcap/tidb" target="_blank" rel="noopener noreferrer"&gt;TiDB&lt;/a&gt; and &lt;a href="https://github.com/pingcap/tispark" target="_blank" rel="noopener noreferrer"&gt;TiSpark&lt;/a&gt;, which was recognized in a recent report from &lt;a href="https://451research.com/report-short?entityId=95082" target="_blank" rel="noopener noreferrer"&gt;451 Research&lt;/a&gt; as an open source, modular NewSQL database that can be deployed to handle both operational and analytical workloads. TiSpark, which tightly integrates Apache Spark with &lt;a href="https://github.com/pingcap/tikv" target="_blank" rel="noopener noreferrer"&gt;TiKV&lt;/a&gt;, a distributed transactional key-value store on the TiDB platform, allows our customers to access operational data that was just recorded inside TiKV and run complex analytical queries on it right away. (If you are interested in experiencing an HTAP database on your laptop with TiDB + TiSpark, check out this &lt;a href="https://pingcap.com/blog/how_to_spin_up_an_htap_database_in_5_minutes_with_tidb_tispark/" target="_blank" rel="noopener noreferrer"&gt;5-minute tutorial&lt;/a&gt; to spin up a cluster using Docker-Compose!)&lt;/p&gt;
&lt;h2 id="so-how-does-tispark-work"&gt;So How Does TiSpark Work?&lt;/h2&gt;
&lt;p&gt;TiSpark leverages the power and popularity of &lt;a href="https://en.wikipedia.org/wiki/Apache_Spark" target="_blank" rel="noopener noreferrer"&gt;Spark&lt;/a&gt; with TiKV to enhance TiDB’s OLAP capabilities. Spark is a unified analytics engine that supports many big data use cases with a nice SQL interface (aka Spark SQL). TiDB from its very first day was built to be a relational SQL database with horizontal scalability; currently it’s compatible with MySQL.  While TiDB has a complex and powerful optimizer and coprocessor architecture to support ad-hoc OLAP queries using MySQL, it’s even better to leverage a feature-rich engine like Spark to complete the missing piece in the HTAP puzzle. Thus, TiSpark was born.&lt;/p&gt;
&lt;p&gt;TiSpark is a connector that supports the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Complex calculation pushdown: this feature produces better performance by pushing down complex calculations to TiKV&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Key-range pruning: examines the sorted keys in TiKV and only returns the results we need&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Index support for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Clustered index / non-clustered index&lt;/li&gt;
&lt;li&gt;Index only query optimization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cost-based optimization for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Histogram support&lt;/li&gt;
&lt;li&gt;Index selection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here’s a high-level overview of TiSpark’s architecture inside TiDB:&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/06/tispark-architecture_hu_20c528e66a4e6abb.png 480w, https://percona.community/blog/2018/06/tispark-architecture_hu_e39e1b3a708dd82c.png 768w, https://percona.community/blog/2018/06/tispark-architecture_hu_b53d1fa8611d8518.png 1400w"
src="https://percona.community/blog/2018/06/tispark-architecture.png" alt="TiSpark Architecture inside TiDB" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As you can see from the architecture diagram, TiSpark works with &lt;a href="https://github.com/pingcap/pd" target="_blank" rel="noopener noreferrer"&gt;Placement Driver&lt;/a&gt; (PD), the metadata cluster of TiDB, to retrieve snapshots of data location, drives the query plans into the coprocessor layer, and retrieves the data directly from TiKV, where the data is actually stored and persisted.&lt;/p&gt;
&lt;p&gt;Before we go further, let’s get a better understanding of TiKV first. TiKV has a computing module called the coprocessor, which can process most expression evaluations inside TiKV itself. As your TiKV cluster grows, the coprocessors scale with it. This is one of the most important reasons why TiDB as a whole scales so well, both in capacity and in performance.&lt;/p&gt;
&lt;p&gt;For TiSpark to leverage these features inside TiKV, it makes use of Spark’s extension point called ‘ExperimentalMethods,’ because the current Spark data source API doesn’t give users the ability to push down complex calculations.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/06/experimental-methods_hu_92d4b1a06e0c11c8.png 480w, https://percona.community/blog/2018/06/experimental-methods_hu_6fe3171d5c00e66.png 768w, https://percona.community/blog/2018/06/experimental-methods_hu_98131f9a3216f309.png 1400w"
src="https://percona.community/blog/2018/06/experimental-methods.png" alt="experimental methods" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;These extension points expose SQL compiler’s optimization and planning details, thus allowing developers to configure the internal behaviors of almost every aspect of SQL compilation. They are at the core of TiSpark’s power. Now, we can inject our own rules and do extra work to push down more computations, such as predicates, aggregation pushdown, and Top-N pushdown (LIMIT clause with ORDER BY).&lt;/p&gt;
&lt;h2 id="tispark-in-action"&gt;TiSpark in Action&lt;/h2&gt;
&lt;p&gt;Let’s use an example to illustrate how TiSpark works in action. Suppose we have a &lt;code&gt;student&lt;/code&gt; table, and there are two indices associated with it: primary index (clustered index) on column &lt;code&gt;studentId&lt;/code&gt; and a secondary index on &lt;code&gt;school&lt;/code&gt; column. We want to run the following query on this table:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;select class, avg(score) from student
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;where school = ‘engineering’ and lottery(name) = ‘picked’
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;and studentId &gt;= 8000 and studentId &lt; 10100
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;group by class;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The above query contains two predicates, each of which matches an index. TiSpark will first analyze the combination of predicates and “approximate” how many rows would be returned if a specific index were applied. The goal here is to find a way to access the table with minimum cost. The process of finding an access path will be explained later. For now, let’s first look at how predicates are processed.&lt;/p&gt;
&lt;h3 id="path-1-primary-index"&gt;Path 1: Primary Index&lt;/h3&gt;
&lt;p&gt;Assume we pick the &lt;code&gt;studentId&lt;/code&gt; index, the primary index, to access the table. The process is as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Transform the predicates “studentId &gt;= 8000 and studentId &lt; 10100” into a closed-open interval on studentId: [8000, 10100);&lt;/li&gt;
&lt;li&gt;Prune the irrelevant ‘&lt;a href="https://pingcap.com/blog/2017-07-11-tidbinternal1/#region" target="_blank" rel="noopener noreferrer"&gt;regions&lt;/a&gt;’ according to the above interval and the internal data distribution information in TiKV. For the clustered index column, TiKV uses the column to split and distribute data among different TiKV nodes. If we have a value interval on ‘studentId,’ we can directly prune all the ‘regions’ that fall outside of the interval.&lt;/li&gt;
&lt;li&gt;Convert the interval into coprocessor requests [8000, 10000) and [10000, 10100) for regions 2 and 3 respectively (ignoring region 1, as illustrated below), and get the data via a sequential scan.&lt;/li&gt;
&lt;/ol&gt;
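The interval-against-regions pruning in the steps above can be sketched in a few lines of Python (the region boundaries here are invented for illustration; this is not TiSpark's actual code):

```python
# Hypothetical regions keyed on studentId, as (start, end) boundaries:
# region 1, region 2, region 3.
regions = [(0, 8000), (8000, 10000), (10000, 20000)]

def prune(regions, lo, hi):
    """Keep only regions overlapping the closed-open interval [lo, hi),
    clipping each to the interval -- one coprocessor request per region."""
    return [(max(s, lo), min(e, hi)) for (s, e) in regions if s < hi and e > lo]

print(prune(regions, 8000, 10100))  # -> [(8000, 10000), (10000, 10100)]
```

Region 1 is dropped entirely, and region 3's request is clipped to [10000, 10100), matching the coprocessor requests described above.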
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/06/example-query1_hu_30ba30832253a925.png 480w, https://percona.community/blog/2018/06/example-query1_hu_c3fbfa27b9a5d1d8.png 768w, https://percona.community/blog/2018/06/example-query1_hu_985490e9cfe5975d.png 1400w"
src="https://percona.community/blog/2018/06/example-query1.png" alt="example query1" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="path-2-secondary-index"&gt;Path 2: Secondary Index&lt;/h3&gt;
&lt;p&gt;So what if we choose a different path by using the ‘school’ column index instead of the primary index? TiSpark will then go through a different procedure for the secondary index.&lt;/p&gt;
&lt;p&gt;A secondary index in TiKV is encoded like main table data. (For more detailed info on how TiKV encodes data, please see this &lt;a href="https://pingcap.com/blog/2017-07-11-tidbinternal2/" target="_blank" rel="noopener noreferrer"&gt;post&lt;/a&gt;.) The difference is that the split/sort key is not the primary key but the index key, and the primary key is appended to the end of each index entry.&lt;/p&gt;
&lt;p&gt;TiSpark reads all index entries in the value range “school = ‘engineering’” to retrieve all primary keys, as illustrated below. We don’t directly look up the main table with the primary keys retrieved. Instead, we shuffle the primary keys by region ID, and then, in each executor, TiSpark merges the keys into continuous ranges. By doing so, TiSpark transforms point queries into range queries and improves performance. In cases where the primary keys are sparse and scattered, the system automatically adapts for that specific region by downgrading the coprocessor request to a region scan, to avoid a performance hit.&lt;/p&gt;
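The merge step, turning the primary keys retrieved from the index into continuous scan ranges, can be sketched as follows (key values invented; illustrative only):

```python
def merge_into_ranges(keys):
    """Merge primary keys into continuous [start, end) ranges, turning
    point lookups into range scans, as TiSpark does per region."""
    ranges = []
    for k in sorted(keys):
        if ranges and k == ranges[-1][1]:
            ranges[-1][1] = k + 1          # key extends the current range
        else:
            ranges.append([k, k + 1])      # gap: start a new range
    return [tuple(r) for r in ranges]

print(merge_into_ranges([8004, 8001, 8002, 8003, 9500]))
# -> [(8001, 8005), (9500, 9501)]
```

Five point lookups collapse into two range scans; when the keys are dense, the savings grow accordingly.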
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/06/query1-explained_hu_350dfc332bf30208.png 480w, https://percona.community/blog/2018/06/query1-explained_hu_6a4ccb32fb2894d5.png 768w, https://percona.community/blog/2018/06/query1-explained_hu_d309fd0fc7254bb1.png 1400w"
src="https://percona.community/blog/2018/06/query1-explained.png" alt="query1 explained" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="so-which-path-does-tispark-choose"&gt;So Which Path Does TiSpark Choose?&lt;/h3&gt;
&lt;p&gt;TiSpark relies on a histogram built within TiDB to estimate cost and pick the best path forward. Histograms are a common technique supported and applied in many relational databases. Consider TiDB’s histogram bar chart on column values below. The width of each bar is a value range for a specific column, and the height is the row count for that range. For a predicate that matches an index, TiSpark estimates the total rows that would be returned if that index were applied, but row count alone is not the cost. We introduced two different ways to access a table, via the primary index or a secondary index, and in this case the former is far cheaper since it reads the table just once, and always via a sequential scan. In this scenario, even if the ‘studentId’ predicate returns 200 more rows than the ‘school’ column predicate, TiSpark would pick the primary index as the better path.&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/06/tspark-query-path_hu_5b5500aba9347bda.png 480w, https://percona.community/blog/2018/06/tspark-query-path_hu_82d8c85a7d9558e2.png 768w, https://percona.community/blog/2018/06/tspark-query-path_hu_2fc66c1ad0b0ede8.png 1400w"
src="https://percona.community/blog/2018/06/tspark-query-path.png" alt="TiSpark query path selection" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="aggregation-pushdown"&gt;Aggregation Pushdown&lt;/h3&gt;
&lt;p&gt;Another optimization we’ve implemented is aggregation pushdown. TiSpark rewrites the aggregation plan and pushes down a partial aggregation to the coprocessor, if possible. This only happens if the underlying predicates and the enclosed expressions are all computable by the coprocessor. The coprocessor then calculates the aggregations for each of the regions involved and typically returns fewer rows to TiSpark as results, reducing the cost of serialization.&lt;/p&gt;
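For an aggregate like avg(score) group by class, the partial state pushed to each region is a (sum, count) pair per group, which TiSpark then merges into the final average. A sketch of the two halves (function names and data invented for illustration):

```python
from collections import defaultdict

def partial_agg(rows):
    """Coprocessor side: (sum, count) per class for one region's rows."""
    acc = defaultdict(lambda: [0.0, 0])
    for cls, score in rows:
        acc[cls][0] += score
        acc[cls][1] += 1
    return dict(acc)

def final_merge(partials):
    """TiSpark side: merge per-region partials, then compute the averages."""
    acc = defaultdict(lambda: [0.0, 0])
    for p in partials:
        for cls, (s, c) in p.items():
            acc[cls][0] += s
            acc[cls][1] += c
    return {cls: s / c for cls, (s, c) in acc.items()}

region1 = [("math", 90.0), ("math", 80.0)]
region2 = [("math", 70.0), ("cs", 100.0)]
print(final_merge([partial_agg(region1), partial_agg(region2)]))
# -> {'math': 80.0, 'cs': 100.0}
```

Each region returns one row per group instead of every matching row, which is where the serialization savings come from.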
&lt;p&gt;
&lt;figure&gt;
&lt;img sizes="100vw" srcset="https://percona.community/blog/2018/06/tispark-aggregation-pushdown_hu_215d6f884062e80b.png 480w, https://percona.community/blog/2018/06/tispark-aggregation-pushdown_hu_3490b418d5455233.png 768w, https://percona.community/blog/2018/06/tispark-aggregation-pushdown_hu_df4aa84dd4288803.png 1400w"
src="https://percona.community/blog/2018/06/tispark-aggregation-pushdown.png" alt="TiSpark aggregation pushdown" /&gt;&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="why-use-tispark"&gt;Why Use TiSpark?&lt;/h2&gt;
&lt;p&gt;Because TiDB as a whole is a distributed NewSQL database, storing data sizes that are far larger than what can be stored in a single machine, it’s natural to layer a distributed compute engine like Spark on top of it. Without TiSpark, you would need to do things the old way: dump all your data daily into a Hadoop/Hive cluster or another data warehouse before you can analyze it–a situation many of our customers, like &lt;a href="https://www.pingcap.com/blog/Use-Case-TiDB-in-Mobike/" target="_blank" rel="noopener noreferrer"&gt;Mobike&lt;/a&gt;, avoided by adopting TiDB with TiSpark. If you want to run queries on “fresh” data, not stale data that is at least a day old, TiSpark shines. Plus, you no longer need to manage and maintain ETL pipelines, saving your team lots of time, resources, and headaches.&lt;/p&gt;
&lt;h2 id="whats-next"&gt;What’s Next?&lt;/h2&gt;
&lt;p&gt;Although we released TiSpark 1.0 not that long ago, we are already busy working on new features. Here is a list of the important features we’d like to build in 2018:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Compatibility with Spark 2.3 (right now TiSpark supports 2.1)&lt;/li&gt;
&lt;li&gt;Batch Write Support (writing directly in TiKV native format)&lt;/li&gt;
&lt;li&gt;JSON Type support (since TiDB already supports JSON as well)&lt;/li&gt;
&lt;li&gt;Partition Table support (both Range and Hash)&lt;/li&gt;
&lt;li&gt;Join optimization based on range and partition table&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you are interested in helping us build any of these features, please &lt;a href="https://github.com/pingcap/tispark" target="_blank" rel="noopener noreferrer"&gt;contribute&lt;/a&gt;! TiSpark is open-sourced.&lt;/p&gt;
&lt;h2 id="try-it-out"&gt;Try it Out!&lt;/h2&gt;
&lt;p&gt;Lastly, seeing is believing. You can easily try out the TiDB + TiSpark combo by following a &lt;a href="https://www.pingcap.com/blog/how_to_spin_up_an_htap_database_in_5_minutes_with_tidb_tispark/" target="_blank" rel="noopener noreferrer"&gt;5-minute tutorial&lt;/a&gt; our team recently put together, to spin up a cluster on your laptop using Docker-Compose. If you want to deploy this HTAP solution in a production environment, please &lt;a href="https://pingcap.com/contact-us/" target="_blank" rel="noopener noreferrer"&gt;contact us&lt;/a&gt;, and our team would be happy to help you!&lt;/p&gt;
&lt;h3 id="about-the-author"&gt;About the Author&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Shawn Ma&lt;/strong&gt; is a Tech Lead at PingCAP and TiSpark team lead. Previously, he was an infrastructure engineer at Netease and Quantcast. He received his Masters in Computer Science from University of California-Irvine.&lt;/p&gt;</content:encoded>
      <author>PingCAP</author>
      <category>TiDB</category>
      <category>ETL</category>
      <category>NewSQL</category>
      <category>Open Source</category>
      <media:thumbnail url="https://percona.community/blog/2018/06/tspark-query-path_hu_d6cd4df5a5aff582.jpg"/>
      <media:content url="https://percona.community/blog/2018/06/tspark-query-path_hu_7a607728d0ff201b.jpg" medium="image"/>
    </item>
    <item>
      <title>Character Sets: Migrating to utf8mb4 with pt_online_schema_change</title>
      <link>https://percona.community/blog/2018/06/12/character-sets-migrating-utf8mb4-pt_online_schema_change/</link>
      <guid>https://percona.community/blog/2018/06/12/character-sets-migrating-utf8mb4-pt_online_schema_change/</guid>
      <pubDate>Tue, 12 Jun 2018 11:27:57 UTC</pubDate>
      <description>Modern applications often feature the use of data in many different languages. This is often true even of applications that only offer a user facing interface in a single language. Many users may, for example, need to enter names which, although using Latin characters, feature diacritics; in other cases, they may need to enter text which contains Chinese or Japanese characters. Even if a user is capable of using an application localized for only one language, it may be necessary to deal with data from a wide variety of languages.</description>
      <content:encoded>&lt;p&gt;Modern applications often feature the use of data in many different languages. This is often true even of applications that only offer a user facing interface in a single language. Many users may, for example, need to enter names which, although using Latin characters, feature diacritics; in other cases, they may need to enter text which contains Chinese or Japanese characters. Even if a user is capable of using an application localized for only one language, it may be necessary to deal with data from a wide variety of languages.&lt;/p&gt;
&lt;p&gt;Additionally, increased use of mobile phones has led to changes in communication behaviour; this includes a vastly increased use of standardized characters intended to convey emotions, often called “emojis” or “emoticons.” Originally, such information was conveyed using ASCII text, such as “:-)” to indicate happiness - but, as noted, this has changed, with many devices automatically converting such sequences into single-character “emojis.” Such emojis are not typically presented as a graphic; instead, they are now a standard part of Unicode encoding.&lt;/p&gt;
&lt;p&gt;Since Unicode is a long established standard, and since MySQL has had support for Unicode for quite some time, one would imagine it would be seamless and easy to include them in your application.&lt;/p&gt;
&lt;p&gt;Unfortunately, there are several problems that may complicate that path for many users - first, though, let’s discuss some background, so that we can fully understand the problem.&lt;/p&gt;
&lt;h2 id="what-is-encoding"&gt;What is encoding?&lt;/h2&gt;
&lt;p&gt;“Encoding,” as you may already be aware, refers to the mapping of characters to binary values - or “code points”. One of the oldest standards still in use is ASCII; in this encoding, the binary sequence “100 0001” is equivalent to the uppercase character “A”. Many characters cannot be encoded into US-ASCII; in fact, since it uses only seven bits per character, it can store only 128 different code points. Some of these code points are characters - like the “A” already mentioned - and others carry alternative meanings, such as for formatting.&lt;/p&gt;
&lt;p&gt;For example, “000 1001” represents a “tab” in US-ASCII. Later, ASCII coding was replaced with various 8-bit encodings, which could hold more different code points - but it was ultimately a standard called Unicode which dethroned ASCII. Unicode actually encompasses a number of different encodings - but it is UTF8 which is the most important, and that’s what we will discuss in this post.&lt;/p&gt;
&lt;p&gt;“Collation” is a related concept; this refers to how characters are sorted. This may, at first, seem simple and logical. However, in practice, it can be more complicated. For example, some poorly programmed systems inadvertently sort in a “case sensitive manner” when “case insensitive” would be more appropriate. Such a system may sort “b,a,B,A,c” as “A,B,a,b,c” - whereas it may be more desirable to sort it as “A,a,B,b,c.” This is an example of differing collations. In languages other than English, there may be more than one reasonable way to sort a list of strings; this is particularly true in languages that do not use an alphabet, such as Chinese or Japanese.&lt;/p&gt;
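The collation example above can be reproduced in Python, treating raw byte order as a case-sensitive collation and a lowercase sort key as a case-insensitive one (a rough analogy, not MySQL's actual collation algorithm):

```python
words = ["b", "a", "B", "A", "c"]

# Case-sensitive (code-point order): all uppercase sorts before lowercase.
print(sorted(words))                               # -> ['A', 'B', 'a', 'b', 'c']

# Case-insensitive: group letters regardless of case, uppercase first
# within each group, matching the "A,a,B,b,c" ordering described above.
print(sorted(words, key=lambda s: (s.lower(), s)))  # -> ['A', 'a', 'B', 'b', 'c']
```

MySQL expresses the same distinction through collation names such as the case-sensitive `_cs`/`_bin` variants versus the case-insensitive `_ci` variants.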
&lt;h2 id="why-can-encoding-be-a-problem-in-mysql"&gt;Why can encoding be a problem in MySQL?&lt;/h2&gt;
&lt;p&gt;Unicode adoption was by no means universal, and by no means quick. For a very long time, MySQL’s default encoding was latin1; this supports basic English text and common punctuation reasonably well. However, it has limited support for other languages, and it does not support modern emoji characters. Eventually, MySQL very reasonably changed its default to UTF8 - which, one would imagine, fixed the issue for many people… except that existing databases were not converted, and many databases still, to this day, have some, or even all, tables encoded as latin1 - not as a conscious choice, but simply as a relic of an older time.&lt;/p&gt;
&lt;p&gt;Additionally, “utf8” encoding in MySQL does not, in fact, mean standard UTF8. Standard UTF8 encoding involves a variable number of bytes per character, with a maximum of four bytes per character; most characters, however, use three or fewer. MySQL, for legacy technical reasons, supports a maximum of three bytes - which, regrettably, means that MySQL’s “utf8” encoding does not work with four-byte characters, which include emojis and some mathematical symbols.&lt;/p&gt;
&lt;p&gt;As a result, many databases are using MySQL’s “utf8” encoding or its older “latin1” default. In both cases, you may receive vexing “Incorrect string value” errors when users attempt to enter unsupported characters.&lt;/p&gt;
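You can see the variable byte lengths directly in Python; any character that encodes to four bytes is exactly what MySQL's legacy “utf8” (utf8mb3) rejects:

```python
# UTF-8 byte lengths: MySQL's legacy "utf8" stores at most three bytes
# per character, so four-byte characters such as emoji need utf8mb4.
for ch in ["A", "é", "€", "😀"]:
    n = len(ch.encode("utf-8"))
    verdict = "fits in MySQL utf8mb3" if n <= 3 else "needs utf8mb4"
    print(f"U+{ord(ch):04X}: {n} byte(s) -> {verdict}")
```

ASCII letters take one byte, accented Latin letters two, the euro sign three, and emoji four, which is why only the last category triggers the error above.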
&lt;h2 id="changing-encoding-and-collations"&gt;Changing encoding and collations&lt;/h2&gt;
&lt;p&gt;Both encoding and collation can be set on a per-column level in MySQL. You can also set this value on a per-table level, which sets the default for new columns; further, you can set it on the database level, which sets the default for new tables. Finally, you can set it at the server level, which specifies a default for new databases.&lt;/p&gt;
&lt;p&gt;Let’s walk through changing the encoding and collation for the MySQL sample database “sakila”. You can download this database at the following URL:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/index-other.html" target="_blank" rel="noopener noreferrer"&gt;https://dev.mysql.com/doc/index-other.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;First, let’s start by examining the “actor” table:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; SHOW CREATE TABLE actorG
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Table: actor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Create Table: CREATE TABLE `actor` (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`first_name` varchar(45) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`last_name` varchar(45) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (`actor_id`),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `idx_actor_last_name` (`last_name`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As we can see here, the encoding on this table is set to UTF8; all of the VARCHAR columns listed are also encoded as UTF8. If one of them were encoded with a different encoding, it would be listed as part of its column definition, e.g. “&lt;code&gt;first_name&lt;/code&gt; varchar(45) CHARACTER SET latin1 DEFAULT NULL” instead of “&lt;code&gt;first_name&lt;/code&gt; varchar(45) DEFAULT NULL”.&lt;/p&gt;
&lt;p&gt;To change the encoding and collation for a particular column, we can use the MODIFY COLUMN clause of ALTER TABLE:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE actor MODIFY COLUMN first_name VARCHAR(45) CHARACTER SET utf8mb4;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This, unsurprisingly enough, changes the character set to utf8mb4 - meaning this column can now support emojis and other four-byte characters. Let’s see what that does to our table definition:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; show create table actorG
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Table: actor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Create Table: CREATE TABLE `actor` (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`first_name` varchar(45) CHARACTER SET utf8mb4 DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`last_name` varchar(45) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (`actor_id`),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `idx_actor_last_name` (`last_name`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
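&lt;p&gt;To make the “4-byte characters” point concrete, here is a short, illustrative Python snippet (not part of the migration itself) showing which characters need the fourth byte that MySQL’s legacy utf8 cannot store:&lt;/p&gt;

```python
# Byte length under real UTF-8: MySQL's legacy "utf8" (a.k.a. utf8mb3)
# stores at most 3 bytes per character, so the last sample below
# needs utf8mb4.
samples = {"A": 1, "é": 2, "€": 3, "🎉": 4}
for ch, expected_bytes in samples.items():
    assert len(ch.encode("utf-8")) == expected_bytes

# 4-byte characters are exactly those outside the Basic Multilingual Plane
assert ord("🎉") > 0xFFFF
```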
&lt;p&gt;We can see that the “first_name” column has been changed to utf8mb4; however, the “last_name” column is still using the default character set, utf8. We can use the following command to set the default charset and convert all of the individual columns to our new character set:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE actor CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note that the above command has a COLLATE clause; although we are focusing on changing encodings in this post, any of the commands we’ve mentioned can take a CHARACTER SET clause, a COLLATE clause, or both - allowing you to change the encoding, the collation, or both at once.&lt;/p&gt;
&lt;p&gt;Let’s see what this command does to our table definition:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;show create table actorG
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;*************************** 1. row ***************************
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Table: actor
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Create Table: CREATE TABLE `actor` (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`first_name` varchar(45) DEFAULT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`last_name` varchar(45) NOT NULL,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;PRIMARY KEY (`actor_id`),
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;KEY `idx_actor_last_name` (`last_name`)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8mb4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (0.00 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;As noted, MySQL only displays per-column encodings in table definitions if they are different from the default. We can see, therefore, that all of the columns are now in utf8mb4 encoding. Additionally, it only displays table level collations if they are different from the default - and since utf8mb4_general_ci is the default collation for utf8mb4, it won’t display it either at the table level or the column level. (If we had changed it to a different collation - say, utf8mb4_bin or utf8mb4_unicode_ci - it would, in fact, show up.)&lt;/p&gt;
&lt;p&gt;At this point, we’ve successfully converted a single table to utf8mb4. However, this approach seems onerous for a large database - is there a better way?&lt;/p&gt;
&lt;h2 id="converting-a-database-at-a-time-with-mysql_change_database_encoding"&gt;Converting a database at a time with mysql_change_database_encoding&lt;/h2&gt;
&lt;p&gt;For the purposes of this blog, I’ve encapsulated the logic to run the relevant commands for an entire database into a short Ruby script. You can download and install it as follows:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git clone git@github.com:djberube/mysql_change_database_encoding.git
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd mysql_change_database_encoding
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bundle&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This command uses MySQL’s INFORMATION_SCHEMA database to get a list of all tables, and migrates each of them:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;MYSQL_DATABASE=sakila MYSQL_USER=some_mysql_user MYSQL_PASSWORD=some_mysql_password ruby mysql_change_database_encoding.rb --collation utf8mb4_unicode_ci --encoding utf8mb4 --dir
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ect --no-osc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Connecting to sakila
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Processing database settings.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Setting database global settings.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER DATABASE `sakila` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0009s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Migrating without OSC
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE `actor` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0036s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Migrating without OSC
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE `address` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0670s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Migrating without OSC
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE `category` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0293s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Migrating without OSC
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE `city` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0400s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Migrating without OSC
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE `country` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0239s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Migrating without OSC
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER TABLE `customer` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0607s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.. snip...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I’ve cut the output down a bit for brevity. First, the script sets the default encoding and collation for the entire database; then it converts each table using the ALTER TABLE ... CONVERT TO CHARACTER SET command.&lt;/p&gt;
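&lt;p&gt;The script itself is Ruby, but its core logic fits in a few lines. The following Python sketch is hypothetical (the &lt;code&gt;build_statements&lt;/code&gt; helper and the table list are illustrative) and simply generates the same SQL you can see in the output above:&lt;/p&gt;

```python
def build_statements(database, tables,
                     charset="utf8mb4", collation="utf8mb4_unicode_ci"):
    """Yield the ALTERs the migration runs: one ALTER DATABASE for the
    default, then one CONVERT TO per table (hypothetical sketch)."""
    yield f"ALTER DATABASE `{database}` CHARACTER SET {charset} COLLATE {collation};"
    for table in tables:
        yield (f"ALTER TABLE `{table}` CONVERT TO CHARACTER SET "
               f"{charset} COLLATE {collation};")

# In the real script the table list comes from INFORMATION_SCHEMA, roughly:
#   SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
#   WHERE TABLE_SCHEMA = 'sakila' AND TABLE_TYPE = 'BASE TABLE';
statements = list(build_statements("sakila", ["actor", "address"]))
```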
&lt;p&gt;You may have noticed the “migrating without OSC” lines in the output. OSC, or online schema change, is a technique for reducing the impact of schema migrations on production installations. A typical approach is to create a duplicate of your table, set up triggers to keep that duplicate up to date, alter the new table, and then swap the two. This is sufficiently complicated that it’s nontrivial to DIY, so there are a few very nice tools available to do it for you. By using one of these tools, we can run schema changes in production environments with reduced performance impact - locking a large table while converting it to utf8mb4 may, indeed, take a large system down.&lt;/p&gt;
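&lt;p&gt;To give a feel for the trigger-and-swap technique, here is a simplified, hypothetical sketch of the statement sequence an OSC tool runs for one table - real tools such as pt-online-schema-change also handle foreign keys, chunked copies, and throttling:&lt;/p&gt;

```python
def osc_plan(table, alter_clause):
    """Simplified outline of an online schema change: build a shadow copy,
    keep it in sync with triggers, backfill it, then swap the names."""
    new, old = f"_{table}_new", f"_{table}_old"
    return [
        f"CREATE TABLE `{new}` LIKE `{table}`;",
        f"ALTER TABLE `{new}` {alter_clause};",
        # Triggers mirror live INSERT/UPDATE/DELETE into the shadow table
        f"-- create triggers on `{table}` that replay writes into `{new}`",
        f"INSERT INTO `{new}` SELECT * FROM `{table}`;  -- chunked in practice",
        f"RENAME TABLE `{table}` TO `{old}`, `{new}` TO `{table}`;",
        f"DROP TABLE `{old}`;",
    ]

plan = osc_plan("actor", "CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci")
```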
&lt;h4 id="pt-online-schema-change"&gt;pt-online-schema-change&lt;/h4&gt;
&lt;p&gt;Percona Toolkit has a great tool for OSC called pt-online-schema-change; the script mentioned above has built-in support for it. You can download it from here:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.percona.com/doc/percona-toolkit/LATEST/pt-online-schema-change.html" target="_blank" rel="noopener noreferrer"&gt;https://www.percona.com/doc/percona-toolkit/LATEST/pt-online-schema-change.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We can re-run our script using pt-online-schema-change by removing the “--no-osc” option and replacing it with, logically enough, an “--osc” option:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# MYSQL_DATABASE=sakila MYSQL_USER=some_mysql_user MYSQL_PASSWORD=some_mysql_password ruby mysql_change_database_encoding.rb --collation utf8mb4_unicode_ci --encoding utf8mb4 --direct --osc
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Connecting to sakila
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Processing database settings.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-- Setting database global settings.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Running SQL:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ALTER DATABASE `sakila` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; 0.0007s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;This SQL will be run using pt-online-schema-change:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The following command will be run:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;No slaves found. See --recursion-method if host spacepancake has slaves.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Operation, tries, wait:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;analyze_table, 10, 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;copy_rows, 10, 0.25
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;create_triggers, 10, 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;drop_triggers, 10, 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;swap_tables, 10, 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;update_foreign_keys, 10, 1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Child tables:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;`sakila`.`film_actor` (approx. 5462 rows)
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.. snip...&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Note that pt-online-schema-change can only be run against tables with a primary key; for tables without one, the mysql_change_database_encoding.rb script will automatically fall back to running the MySQL commands directly if the --direct flag is set.&lt;/p&gt;
&lt;p&gt;If you encounter any issues with the above script, please let me know via &lt;a href="http://berubeconsulting.com/" target="_blank" rel="noopener noreferrer"&gt;http://berubeconsulting.com/&lt;/a&gt; or via GitHub. Pull requests are welcome.&lt;/p&gt;
&lt;h2 id="potential-problems"&gt;Potential problems&lt;/h2&gt;
&lt;p&gt;Of course, there are several issues which may occur when changing your encoding or collation.&lt;/p&gt;
&lt;h4 id="mysql-version"&gt;MySQL Version&lt;/h4&gt;
&lt;p&gt;Firstly, note that utf8mb4 support is only available in MySQL 5.5.3 or later; earlier than that, and you’re limited to MySQL’s nonstandard utf8 implementation, with a maximum of three bytes per codepoint. In this case, it is generally advisable to upgrade to a recent version of MySQL - though you could, if desired, use the above approach to migrate your database to utf8 encoding instead.&lt;/p&gt;
&lt;h4 id="applications-that-need-variable-encoding"&gt;Applications that need variable encoding&lt;/h4&gt;
&lt;p&gt;The second issue is that the approach detailed above - where a script automatically migrates all of the tables - will result in every table having its encoding and/or collation changed to the same destination encoding and collation. That’s not necessarily a problem - but some applications do, indeed, use different encodings for different tables and, in some cases, for different columns in the same table. If so, you’d do well to use the above SQL examples as a guide and manually create a SQL script - or a shell script that repeatedly calls pt-online-schema-change - to do the migration for you. In many cases, however, a single encoding is both possible and desirable.&lt;/p&gt;
&lt;h4 id="key-length"&gt;Key Length&lt;/h4&gt;
&lt;p&gt;Additionally, note that maximum key lengths may be an issue for MySQL 5.6 and earlier installations. This is because earlier versions have a maximum key size limitation on indices; compared to utf8 columns, utf8mb4 columns have a higher maximum on-disk length per character, and it’s easy to bump into the limit once you switch to utf8mb4. For example, many schemas have VARCHAR(255) columns - that’s what Ruby on Rails creates by default if one does not specify a column length - and indexed VARCHAR(255) columns trigger this limitation. You could write a script that automatically resizes these indices or their associated columns, but I would recommend either upgrading to 5.7 or, if running 5.5 or later, enabling the innodb_large_prefix setting, which allows larger indices.&lt;/p&gt;
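&lt;p&gt;The arithmetic behind that limit is easy to check - with the historical 767-byte index prefix limit, a fully indexed VARCHAR(255) fits under utf8 (3 bytes per character) but not under utf8mb4 (4 bytes per character):&lt;/p&gt;

```python
# InnoDB's default index prefix limit before MySQL 5.7 (without
# innodb_large_prefix) is 767 bytes per indexed column.
INDEX_PREFIX_LIMIT = 767

def max_indexable_chars(bytes_per_char, limit=INDEX_PREFIX_LIMIT):
    # Worst case: every character stored at the charset's maximum width
    return limit // bytes_per_char

assert 255 * 3 <= INDEX_PREFIX_LIMIT   # VARCHAR(255) in utf8: 765 bytes, fits
assert 255 * 4 > INDEX_PREFIX_LIMIT    # in utf8mb4: 1020 bytes, too long
assert max_indexable_chars(4) == 191   # hence the common VARCHAR(191) workaround
```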
&lt;h4 id="false-positives"&gt;False positives&lt;/h4&gt;
&lt;p&gt;Finally, note that for some legacy installations, the mere fact of a column, table, or database being marked as “latin1” or “utf8” encoded may not, in fact, mean that the data is actually encoded that way; this can happen when an application incorrectly marked the encoding of its data. In that case, recovery may be complex or impossible, and will certainly be situation dependent - particularly since this issue may not affect all rows.&lt;/p&gt;
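&lt;p&gt;A small illustration of how a mislabeled encoding corrupts data - here, genuine UTF-8 bytes are read back as if they were latin1, producing the classic mojibake pattern:&lt;/p&gt;

```python
original = "café"
stored_bytes = original.encode("utf-8")   # what the application actually wrote
misread = stored_bytes.decode("latin-1")  # what you see if the column says latin1
assert misread == "cafÃ©"
# The damage round-trips cleanly, which is why recovery is sometimes possible:
assert misread.encode("latin-1").decode("utf-8") == original
```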
&lt;p&gt;Of course, to ensure that your particular application works without incident on a new encoding - and, to a lesser extent, a new collation - it’s wise to thoroughly test any changes in a staging environment; if feasible, it’s likely wise to test on a copy of the production environment as well.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Unicode support is no longer an arcane, unapproachable topic; it’s both possible and highly advisable to ensure that your application works well for international users and for users who rely on emojis. Full Unicode support is quickly becoming not merely a value-add but an expected part of an application’s feature set, and implementing it in your MySQL application is relatively straightforward.&lt;/p&gt;
&lt;p&gt;If you’ve found this post useful, feel free to let me know at &lt;a href="mailto:djberube@berubeconsulting.com"&gt;djberube@berubeconsulting.com&lt;/a&gt;, or via &lt;a href="http://berubeconsulting.com" target="_blank" rel="noopener noreferrer"&gt;http://berubeconsulting.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Questions, comments, and reports of any inaccuracies are welcome.&lt;/p&gt;</content:encoded>
      <author>David Berube</author>
      <category>MySQL</category>
      <category>Toolkit</category>
      <category>Migration</category>
      <category>Collation</category>
      <category>Encoding</category>
      <category>Conversion</category>
      <media:thumbnail url="https://percona.community/blog/2018/04/problem_hu_916934d37793c00c.jpg"/>
      <media:content url="https://percona.community/blog/2018/04/problem_hu_331fddc605e8b5f1.jpg" medium="image"/>
    </item>
    <item>
      <title>Enabling KMS encryption for a running Amazon RDS instance</title>
      <link>https://percona.community/blog/2018/06/08/enabling-kms-encryption-running-amazon-rds-instance/</link>
      <guid>https://percona.community/blog/2018/06/08/enabling-kms-encryption-running-amazon-rds-instance/</guid>
      <pubDate>Fri, 08 Jun 2018 11:40:02 UTC</pubDate>
      <description>Since summer 2017, Amazon RDS supports encryption at rest using AWS Key Management Service (KMS) for db.t2.small and db.t2.medium database instances, making the feature now available to virtually every instance class and type.</description>
      <content:encoded>&lt;p&gt;Since summer 2017, Amazon RDS supports &lt;a href="https://aws.amazon.com/about-aws/whats-new/2017/06/amazon-rds-enables-encryption-at-rest-for-additional-t2-instance-types/" target="_blank" rel="noopener noreferrer"&gt;encryption&lt;/a&gt; at rest using AWS Key Management Service (KMS) for db.t2.small and db.t2.medium database instances, making the feature now available to virtually every instance class and type.&lt;/p&gt;
&lt;p&gt;Unless you are running &lt;a href="https://aws.amazon.com/rds/previous-generation/" target="_blank" rel="noopener noreferrer"&gt;Previous Generation DB Instances&lt;/a&gt; or can only afford to run a db.t2.micro, every instance class now supports native encryption at rest using KMS. As per the Amazon documentation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Encryption on smaller &lt;a href="http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html" target="_blank" rel="noopener noreferrer"&gt;T2 database instances&lt;/a&gt; is useful for development and test use cases, where you want the environment to have identical security characteristics as the planned production environment. You can also run small production workloads on T2 database instances, to save money without compromising on security.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id="how-to-encrypt-a-new-instance"&gt;How to encrypt a new instance&lt;/h2&gt;
&lt;p&gt;Enabling encryption at rest for a new RDS instance is simply a matter of setting one extra parameter in the create instance request - for example, using the &lt;a href="http://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html" target="_blank" rel="noopener noreferrer"&gt;CLI create-db-instance&lt;/a&gt; flag&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-0" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-0"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[--storage-encrypted | --no-storage-encrypted]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;or a check-box in the console. But what about existing instances? &lt;strong&gt;There is no direct way to modify the encryption of a running RDS instance.&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id="snapshot-approach"&gt;&lt;strong&gt;Snapshot approach&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;The simplest way to get an encrypted MySQL instance is to terminate the existing instance with a final snapshot or, in a read-only scenario, to take a snapshot of the running instance.&lt;/p&gt;
&lt;p&gt;With the encryption option of &lt;a href="http://docs.aws.amazon.com/cli/latest/reference/rds/copy-db-snapshot.html" target="_blank" rel="noopener noreferrer"&gt;RDS snapshot copy&lt;/a&gt;, it is possible to convert an unencrypted RDS instance into an encrypted one simply by starting a new instance from the encrypted snapshot copy, for example:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-1" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-1"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws rds copy-db-snapshot --source-db-snapshot-identifier --target-db-snapshot-identifier --kms-key-id arn:aws:kms:us-east-1:******:key/016de233-693e-4e9c-87e8-**********&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;where the kms-key-id parameter is the KMS encryption key to use.&lt;/p&gt;
&lt;p&gt;This is simple but, unfortunately, requires significant downtime: you will not be able to write to your RDS instance from the moment you take the first snapshot until the new encrypted instance is available - a matter of minutes or hours, depending on the size of your database.&lt;/p&gt;
&lt;h3 id="what-about-limited-downtime"&gt;What about limited downtime?&lt;/h3&gt;
&lt;p&gt;There are at least two more options on how to encrypt the storage for an existing RDS instance:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use &lt;a href="https://aws.amazon.com/dms/" target="_blank" rel="noopener noreferrer"&gt;AWS Database Migration Service&lt;/a&gt;: the source and target will have the same engine and the same schema but the target will be encrypted. However, this is usually not suggested for homogeneous engines as in our scenario.&lt;/li&gt;
&lt;li&gt;Use a native MySQL read replica with a similar approach to the one documented by AWS to &lt;a href="https://d0.awsstatic.com/whitepapers/RDS/Moving_RDS_MySQL_DB_to_VPC.pdf" target="_blank" rel="noopener noreferrer"&gt;move RDS MySQL Databases from EC2 classic to VPC&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="encrypting-and-promoting-a-read-replica"&gt;Encrypting and promoting a read replica&lt;/h3&gt;
&lt;p&gt;Let’s see how we can leverage MySQL native replication to convert an unencrypted RDS instance to an encrypted RDS instance with reduced down time. All the tests below have been performed on a MySQL 5.7.19 (the latest available RDS MySQL) but should work on any MySQL 5.6+ deployment.&lt;/p&gt;
&lt;p&gt;Let’s assume the existing instance is called test-rds01 and has a master user named rdsmaster.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;We create a RDS read replica &lt;em&gt;test-rds01-not-encrypted&lt;/em&gt; of the existing instance &lt;em&gt;test-rds01&lt;/em&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-2" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-2"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;aws rds create-db-instance-read-replica --db-instance-identifier test-rds01-not-encrypted --source-db-instance-identifier test-rds01&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the read replica is &lt;em&gt;available&lt;/em&gt;, we stop the replication using the RDS procedure “CALL mysql.rds_stop_replication;”. Note that, since there is no super user on an RDS instance, this procedure is the only available way to stop replication.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-3" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-3"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ mysql -h test-rds01-not-encrypted.cqztvd8wmlnh.us-east-1.rds.amazonaws.com -P 3306 -u rdsmaster -pMyDummyPwd --default-character-set=utf8 -e "CALL mysql.rds_stop_replication;"
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Message |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Slave is down or disabled |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+---------------------------+&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can now save the binary log name and position from the RDS replica, which we will need later on; from the SHOW SLAVE STATUS output we need:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-4" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-4"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Relay_Master_Log_File: mysql-bin-changelog.275872
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Exec_Master_Log_Pos: 3110315&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, while replication is stopped, we create a snapshot &lt;em&gt;test-rds01-not-encrypted&lt;/em&gt; of the RDS replica &lt;em&gt;test-rds01-not-encrypted&lt;/em&gt;.&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-5" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-5"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ aws rds create-db-snapshot --db-snapshot-identifier test-rds01-not-encrypted --db-instance-identifier test-rds01-not-encrypted&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And once the snapshot &lt;em&gt;test-rds01-not-encrypted&lt;/em&gt; is available, copy it to a new encrypted one, &lt;em&gt;test-rds01-encrypted&lt;/em&gt;, using a new KMS key or the region- and account-specific default one:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-6" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-6"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ aws rds copy-db-snapshot --source-db-snapshot-identifier test-rds01-not-encrypted --target-db-snapshot-identifier test-rds01-encrypted --kms-key-id arn:aws:kms:us-east-1:03257******:key/016de233-693e-4e9c-87e8-******&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Note that our original RDS instance &lt;em&gt;test-rds01&lt;/em&gt; is still running and available to end users; we are simply building up a large Seconds_Behind_Master on the stopped replica. Once the copy is complete, we can start a new RDS instance &lt;em&gt;test-rds01-encrypted&lt;/em&gt; in the same subnet group as the original RDS instance &lt;em&gt;test-rds01&lt;/em&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-7" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-7"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;$ aws rds restore-db-instance-from-db-snapshot --db-instance-identifier test-rds01-encrypted --db-snapshot-identifier test-rds01-encrypted --db-subnet-group-name test-rds&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After waiting for the new instance to become available, let us make sure that the new and original instances share the same security group and that TCP traffic for MySQL (port 3306) is allowed inside the security group itself. Almost there.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can now connect to the new encrypted standalone instance &lt;em&gt;test-rds01-encrypted&lt;/em&gt; and set the external master to make it a MySQL replica of the original one:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-8" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-8"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; CALL mysql.rds_set_external_master (
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; ' test-rds01.cqztvd8wmlnh.us-east-1.rds.amazonaws.com'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; , 3306
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; ,'rdsmaster'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; ,'MyDummyPwd'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; ,'mysql-bin-changelog.275872'
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; ,3110315
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; ,0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;-&gt; );
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Query OK, 0 rows affected (0.03 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And we can finally start the encrypted MySQL replication on &lt;em&gt;test-rds01-encrypted&lt;/em&gt;:&lt;/p&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-9" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-9"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; CALL mysql.rds_start_replication;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Message |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Slave running normally. |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (1.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can now check the Slave_IO_State by calling &lt;code&gt;SHOW SLAVE STATUS&lt;/code&gt;. Once the database catches up and Seconds_Behind_Master is down to zero, we finally have a new encrypted &lt;em&gt;test-rds01-encrypted&lt;/em&gt; instance in sync with the original unencrypted &lt;em&gt;test-rds01&lt;/em&gt; RDS instance.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can now restart replication on the unencrypted RDS read replica &lt;em&gt;test-rds01-not-encrypted&lt;/em&gt;, which is still stopped, in the very same way, so that the binary logs on the master finally get purged and do not keep accumulating:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="code-block"&gt;
&lt;div class="code-block__header"&gt;&lt;button class="code-block__copy" type="button" data-copy-target="codeblock-10" aria-label="Copy code to clipboard"&gt;
&lt;span class="code-block__copy-default"&gt;Copy&lt;/span&gt;
&lt;span class="code-block__copy-success" aria-hidden="true"&gt;Copied!&lt;/span&gt;
&lt;/button&gt;
&lt;/div&gt;
&lt;div class="code-block__content" id="codeblock-10"&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mysql&gt; CALL mysql.rds_start_replication;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Message |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;| Slave running normally. |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;+-------------------------+
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;1 row in set (1.01 sec)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
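&lt;p&gt;The replication checks above can also be scripted. Here is a minimal Python sketch, not part of the original walkthrough, that parses the &lt;code&gt;\G&lt;/code&gt;-style output of &lt;code&gt;SHOW SLAVE STATUS&lt;/code&gt; to pull out the binary log coordinates and the current lag; the sample text and the helper name are illustrative assumptions:&lt;/p&gt;

```python
# Minimal sketch: turn the "Key: value" lines of a SHOW SLAVE STATUS\G dump
# into a dict, then read the coordinates and lag used in the steps above.
# Field names match the standard MySQL 5.x replica status output; the
# sample text below is made up for illustration.

def parse_slave_status(text):
    """Parse 'Key: value' lines into a dict of strings."""
    status = {}
    for line in text.splitlines():
        key, sep, value = line.strip().partition(":")
        if sep:
            status[key.strip()] = value.strip()
    return status

sample = """
       Slave_IO_State: Waiting for master to send event
Relay_Master_Log_File: mysql-bin-changelog.275872
  Exec_Master_Log_Pos: 3110315
Seconds_Behind_Master: 0
"""

status = parse_slave_status(sample)
binlog_file = status["Relay_Master_Log_File"]   # for rds_set_external_master
binlog_pos = int(status["Exec_Master_Log_Pos"])  # for rds_set_external_master
lag = int(status["Seconds_Behind_Master"])       # wait until this reaches 0

print(binlog_file, binlog_pos, lag)
```

&lt;p&gt;In practice you would feed this the real output of the statement run against the replica, and poll until the lag reaches zero before promoting.&lt;/p&gt;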
&lt;ol start="12"&gt;
&lt;li&gt;It is time to promote the read replica and have our application switch to the new encrypted &lt;em&gt;test-rds01-encrypted&lt;/em&gt; instance. Our downtime starts here, and as a very first step we want to make &lt;em&gt;test-rds01-encrypted&lt;/em&gt; a standalone instance by calling the RDS procedure:
&lt;code&gt;CALL mysql.rds_reset_external_master&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;We can now point our application to the new encrypted &lt;em&gt;test-rds01-encrypted&lt;/em&gt; instance, or we can alternatively rename our RDS instances to minimize the changes. Let’s go with the CNAME-swapping approach:
&lt;code&gt;aws rds modify-db-instance --db-instance-identifier test-rds01 --new-db-instance-identifier test-rds01-old --apply-immediately&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;And once the instance is in the available state again (usually 1-2 minutes):
&lt;code&gt;aws rds modify-db-instance --db-instance-identifier test-rds01-encrypted --new-db-instance-identifier test-rds01 --apply-immediately&lt;/code&gt;
We are now ready for the final cleanup, starting with the now useless &lt;em&gt;test-rds01-not-encrypted&lt;/em&gt; read replica.&lt;/li&gt;
&lt;li&gt;Before deleting the old unencrypted &lt;em&gt;test-rds01-old&lt;/em&gt; instance, make sure you no longer need its backups: after the switch, your N-day retention policy on its automatic backups no longer applies. It is usually better to stop (not delete) the old unencrypted &lt;em&gt;test-rds01-old&lt;/em&gt; instance until the N days have passed and the new encrypted &lt;em&gt;test-rds01&lt;/em&gt; instance has the same number of automatic snapshots.&lt;/li&gt;
&lt;li&gt;Done! You can now enjoy your new encrypted RDS instance &lt;em&gt;test-rds01&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
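&lt;p&gt;The security-group check mentioned above can likewise be automated. This is a small Python sketch, an assumption on top of the original post, that inspects &lt;code&gt;IpPermissions&lt;/code&gt; entries shaped like the output of &lt;code&gt;aws ec2 describe-security-groups&lt;/code&gt; and verifies that TCP traffic on the MySQL port 3306 is allowed; the sample rules and the function name are illustrative:&lt;/p&gt;

```python
# Minimal sketch: given IpPermissions entries shaped like the output of
# `aws ec2 describe-security-groups`, check whether TCP port 3306 is open.
# The sample rules below are made up for illustration.

MYSQL_PORT = 3306

def allows_mysql(ip_permissions):
    """Return True if any rule permits TCP traffic on port 3306."""
    for rule in ip_permissions:
        protocol = rule.get("IpProtocol")
        if protocol == "-1":
            return True  # "-1" means all protocols and ports are allowed
        if protocol != "tcp":
            continue
        if MYSQL_PORT in range(rule["FromPort"], rule["ToPort"] + 1):
            return True
    return False

sample_permissions = [
    {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
     "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}]},
]

print(allows_mysql(sample_permissions))
```

&lt;p&gt;Running such a check against both the original and the new instance’s security groups before starting replication saves a round of debugging connection timeouts.&lt;/p&gt;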
&lt;h2 id="to-recap"&gt;To recap&lt;/h2&gt;
&lt;p&gt;Downtime is not a concern? Create an encrypted snapshot and restore it to a new RDS instance. Otherwise, you can use MySQL replication to build the encrypted RDS instance while your original instance is running, and swap them when you are ready.&lt;/p&gt;</content:encoded>
      <author>Renato Losio</author>
      <category>Amazon RDS</category>
      <category>AWS</category>
      <category>Encryption</category>
      <category>MySQL</category>
      <category>Labs</category>
      <media:thumbnail url="https://percona.community/blog/2018/04/safety-2890768_640_hu_3fc24fb4164c53f5.jpg"/>
      <media:content url="https://percona.community/blog/2018/04/safety-2890768_640_hu_e5772f4f9236f511.jpg" medium="image"/>
    </item>
  </channel>
</rss>
