Bug #87723

Updated by Christoph Lehmann almost 6 years ago

While analysing the slow.log with pt-query-digest, I see:

 <pre> 
 # Query 1: 0.17 QPS, 0.11x concurrency, ID 0xD52AD4861404CBA7 at byte 195329 
 # This item is included in the report because it matches --limit. 
 # Scores: V/M = 0.00 
 # Time range: 2019-02-15T10:34:39 to 2019-02-15T10:34:51 
 # Attribute      pct     total       min       max       avg       95%    stddev    median 
 # ============ === ======= ======= ======= ======= ======= ======= ======= 
 # Count            0         2 
 # Exec time       64        1s     636ms     687ms     661ms     687ms      36ms     661ms 
 # Lock time        0     104us      46us      58us      52us      58us       8us      52us 
 # Rows sent        0         2         1         1         1         1         0         1 
 # Rows examine    92     2.69M     1.35M     1.35M     1.35M     1.35M         0     1.35M 
 # Query size       0       172        86        86        86        86         0        86 
 # String: 
 # Databases      dbname 
 # Hosts          localhost 
 # Users          dbname 
 # Query_time distribution 
 #     1us 
 #    10us 
 #   100us 
 #     1ms 
 #    10ms 
 #  100ms  ################################################################ 
 #      1s 
 #    10s+ 
 # Tables 
 #      SHOW TABLE STATUS FROM `dbname` LIKE 'sys_log'\G 
 #      SHOW CREATE TABLE `dbname`.`sys_log`\G 
 # EXPLAIN /*!50100 PARTITIONS*/ 
 SELECT COUNT(`error`) FROM `sys_log` WHERE (`tstamp` >= 0) AND (`error` IN (-1, 1, 2))\G 
 </pre> 

 This query is issued for the error counter in the top bar of the backend. 600 ms for such a query is fatal. It may be okay for big tables to lead to longer execution times, but queries like this should be optimized.
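
 The digest already hints at the problem: 1.35M rows are examined to send a single row, which points to a full table scan. As a minimal check (a sketch; run it against your own database, with the query taken verbatim from the report above):

 <pre> 
 EXPLAIN SELECT COUNT(`error`) FROM `sys_log` 
 WHERE (`tstamp` >= 0) AND (`error` IN (-1, 1, 2)); 
 </pre> 

 Without a suitable index, this should report type = ALL, i.e. a full scan over sys_log.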

 I suggest adding an additional index to that table:

 <pre> 
 CREATE INDEX errorcount ON sys_log (tstamp, error); 
 </pre>
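
 Because `tstamp >= 0` is a range condition, the optimizer can only range-scan on the leading `tstamp` column, but keeping `error` as the second column makes the index covering for this query, so the count can be answered from the index alone without touching the table rows. A way to verify after creating the index (the expected plan values are assumptions; the exact plan depends on your data):

 <pre> 
 EXPLAIN SELECT COUNT(`error`) FROM `sys_log` 
 WHERE (`tstamp` >= 0) AND (`error` IN (-1, 1, 2)); 
 -- expected: key = errorcount, Extra = "Using index" (covering index scan) 
 </pre> 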
