Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?
Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch this dashboard during the next slow period.
Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.
Explanations:
While CloudWatch metrics like connections, CPU, and disk space are useful for general monitoring, they do not directly identify specific queries causing performance degradation. This approach will not efficiently pinpoint the problematic query.
Launching an EC2 instance and installing an open-source monitoring tool adds unnecessary complexity and overhead. It also won’t provide the real-time query-level insights needed for this specific issue.
Modifying the logging parameter to log queries related to locking might capture some useful information, but it’s not the most efficient or comprehensive method for identifying slow queries, especially if locks are not the primary issue.
Enabling Amazon RDS Performance Insights provides detailed query-level performance data and lets you identify the specific queries responsible for spikes in database load. This is the most efficient and appropriate method to pinpoint the problematic query, and it requires no extra infrastructure or log parsing.
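Once Performance Insights is enabled, the same data shown in the console graph can be pulled programmatically. A minimal boto3 sketch, assuming the instance's DbiResourceId is known (the resource ID is a placeholder, and the call needs AWS credentials):

```python
# Hypothetical sketch: after a slow period, list the top SQL statements by
# database load (average active sessions) from Performance Insights.
from datetime import datetime, timedelta, timezone

def load_window(end, hours=1):
    """Time window ending at `end`, covering the last `hours` hours."""
    return end - timedelta(hours=hours), end

def top_sql_by_load(resource_id, hours=1, limit=10):
    """Return (statement, load) pairs ranked by their share of db.load.avg.

    `resource_id` is the instance's DbiResourceId (e.g. "db-ABCD..."),
    an assumed placeholder here.
    """
    import boto3  # imported here so the helper above stays dependency-free
    pi = boto3.client("pi")
    start, end = load_window(datetime.now(timezone.utc), hours)
    resp = pi.describe_dimension_keys(
        ServiceType="RDS",
        Identifier=resource_id,
        StartTime=start,
        EndTime=end,
        Metric="db.load.avg",                        # average active sessions
        GroupBy={"Group": "db.sql", "Limit": limit}, # group load by SQL statement
    )
    return [(key["Dimensions"].get("db.sql.statement"), key["Total"])
            for key in resp["Keys"]]
```

The statements at the top of this list are the ones driving the spike, which is exactly the query-level view the other options fail to provide.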