
I have a query which takes about 19 seconds to run, which is way too long.

This is the result from my slow log:

# Query_time: 19.421110  Lock_time: 0.000171 Rows_sent: 6  Rows_examined: 48515488
use c3_xchngse;
SET timestamp=1398891560;
SELECT *
                 FROM rates
                 WHERE id IN (
                    SELECT Max(id)
                    FROM rates
                    WHERE LOWER(`currency`) = LOWER('eur')
                    GROUP BY bankId
                );

I have tried to add indexes such as:

ALTER TABLE  `c3_xchngse`.`rates` ADD INDEX  `searchIndex` (  `id` ,  `currency` ,  `bankId` )

But it doesn't seem to optimize the query; it still takes far too long. I have also tried adding a separate index for each column used in the query above, but that didn't help either.

This is my table, which currently contains about 7000 rows:

CREATE TABLE IF NOT EXISTS `rates` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `bankId` int(11) NOT NULL,
  `currency` varchar(3) NOT NULL,
  `buy` double NOT NULL,
  `sell` double NOT NULL,
  `addDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `since` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`id`),
  UNIQUE KEY `Unique` (`bankId`,`currency`,`since`),
  KEY `bankId` (`bankId`),
  KEY `currency` (`currency`),
  KEY `searchIndex` (`id`,`currency`,`bankId`)
) ENGINE=InnoDB  DEFAULT CHARSET=utf8 COMMENT='The rates' AUTO_INCREMENT=6967 ;

How can I either optimize my query or optimize the table and get the exact same results, but faster?

2 Answers


Using LOWER(currency) is rendering your index useless.

Normalize the data in the table:

UPDATE rates SET currency = LOWER(currency) WHERE 1;

And make sure that any currency argument is converted to lowercase in the application before it reaches the query.

Additionally, you can make the currency field an ENUM type to help with internal indexing: https://dev.mysql.com/doc/refman/5.0/en/enum.html
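For illustration, a minimal sketch of the ENUM conversion and the query once the data is lowercase (the ENUM value list here is an assumption and must cover every currency code actually stored in the table):

-- Convert the column to ENUM; extend the value list to match your data,
-- otherwise the conversion will fail (strict mode) or store '' (non-strict mode).
ALTER TABLE rates MODIFY currency ENUM('eur','usd','gbp','sek') NOT NULL;

-- With the data normalized, the predicate no longer needs LOWER()
-- and the index on `currency` becomes usable:
SELECT *
FROM rates
WHERE id IN (
    SELECT MAX(id)
    FROM rates
    WHERE currency = 'eur'
    GROUP BY bankId
);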


1 Comment

Follow-up: Any time you pass a field through a function, every row must be evaluated instead of the engine being able to utilize the index.
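As a quick way to see this (a sketch; the exact plan depends on your data and MySQL version), compare the EXPLAIN output of the two predicates:

EXPLAIN SELECT MAX(id) FROM rates WHERE LOWER(currency) = 'eur' GROUP BY bankId;
-- expect type = ALL: the function on the column forces a full scan

EXPLAIN SELECT MAX(id) FROM rates WHERE currency = 'eur' GROUP BY bankId;
-- expect type = ref (or range) on the `currency` index and far fewer examined rows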

This is a rewrite of the query using a join (which can sometimes be beneficial):

select r.*
from rates r join
     (select max(id) as id   -- alias the aggregate so the join condition can reference it
      from rates
      where `currency` = 'Eur'
      group by bankId
     ) rmax
     on r.id = rmax.id;

Note the removal of lower() as Noah suggested. If you have a case sensitive collation, make sure you have the case correct. Don't put a function around currency; that precludes the use of an index. The index that you want for this is rates(currency, bankId, id).
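If you want to add that index, a minimal sketch (the index name is just a placeholder):

ALTER TABLE rates ADD INDEX currency_bank_id (currency, bankId, id);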

You are trying to find all the rate information for the biggest id for each bank for eur. You can also express this using not exists:

select r.*
from rates r
where r.currency = 'Eur' and   -- restrict the outer query to eur rows as well
      not exists (select 1
                  from rates r2
                  where r2.bankid = r.bankid and
                        r2.currency = 'Eur' and
                        r2.id > r.id
                 );

With an index on rates(bankid, currency, id) this might perform better.
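Again as a sketch, with a placeholder name for the index:

ALTER TABLE rates ADD INDEX bank_currency_id (bankId, currency, id);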

1 Comment

Thanks! I'll try to update both my code and my table this evening.
