
Let's say I have several InnoDB tables:

1. table_a 20 GB
2. table_b 10 GB
3. table_c 1 GB
4. table_d 0.5 GB

And a server with limited memory (8 GB).

I want fast access to table_c and table_d, and can allow slower access to table_a and table_b.

Is there a way to direct MySQL to cache c,d in memory, and NOT a,b?

(I'd move a and b to a different server, but sometimes I need to join a with c.)

  • You can probably get by with an index and/or a view instead of caching the whole table. Commented Aug 2, 2017 at 7:45
  • No. I need to run statistical calculations on the whole data set. Commented Aug 2, 2017 at 13:16

2 Answers


InnoDB doesn't have any option to direct certain tables to stay in memory and other tables to stay out of memory. But it's kind of unnecessary.

InnoDB reads tables by loading them page-by-page into the buffer pool. Your usage of the tables guides InnoDB to keep pages in memory.
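You can observe this page-by-page caching directly. The `information_schema.INNODB_BUFFER_PAGE` table reports which pages are resident; a sketch of a per-table summary (note this query is itself expensive on a large buffer pool, so avoid running it frequently in production):

```sql
-- Summarize how many buffer pool pages each table currently occupies.
SELECT TABLE_NAME,
       COUNT(*)                       AS pages,
       ROUND(COUNT(*) * 16 / 1024, 1) AS approx_mb  -- assumes default 16 KB page size
FROM information_schema.INNODB_BUFFER_PAGE
WHERE TABLE_NAME IS NOT NULL
GROUP BY TABLE_NAME
ORDER BY pages DESC;
```

Run it before and after your typical workload and you should see table_c and table_d pages accumulate on their own if they are the hot tables.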

Reading a page once in a while is unlikely to kick out pages that you need to stay in memory. InnoDB keeps an area of the buffer pool reserved for recently-accessed pages. There's an algorithm for "promoting" pages into this reserved area, and pages that aren't promoted tend to get kicked out first.
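That reserved area is the "new" sublist of InnoDB's LRU; pages land in the "old" sublist first and are only promoted on re-access. Two server variables control the split and the promotion delay, and nudging them is a hedged way to make scans of table_a/table_b less damaging (values below are illustrative, not a recommendation):

```sql
-- Inspect the current LRU tuning.
SHOW VARIABLES LIKE 'innodb_old_blocks%';

-- Shrink the old sublist and require a page to survive a while
-- before it can be promoted to the new (reserved) sublist.
SET GLOBAL innodb_old_blocks_pct  = 20;    -- default 37 (% of the LRU list)
SET GLOBAL innodb_old_blocks_time = 1000;  -- default 1000 ms in MySQL 5.7
```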

Read details here: https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html

If you really need to ensure that certain tables are not cached in the InnoDB buffer pool, the only certain way is to alter the storage engine for those tables. Non-InnoDB tables (e.g. MyISAM) are never cached in the InnoDB buffer pool. But this is probably not a good enough reason to switch storage engine.
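If you did go down that road anyway, the change is a plain engine conversion, sketched here for table_a; be aware of what you give up:

```sql
-- Moving a table out of InnoDB keeps it out of the InnoDB buffer pool.
-- CAUTION: MyISAM has no transactions, no row-level locking, and no crash
-- recovery, and only its indexes are cached (in the key buffer), not rows.
ALTER TABLE table_a ENGINE = MyISAM;

-- MyISAM index caching is sized separately from the InnoDB buffer pool.
SET GLOBAL key_buffer_size = 256 * 1024 * 1024;  -- example size, tune to your workload
```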




Answer to the question asked: No.

Answer to the implied question: Probably. The implied question is "how can I make the queries run faster?" This may or may not have anything to do with what is cached.

If you fetch one row using an index, especially the PRIMARY KEY, then the query will be very fast, even if nothing is cached. If, on the other hand, you do a "table scan" of table_a, it will blow out the cache multiple times to scan through the 20GB.
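`EXPLAIN` tells you which of those two cases a query falls into. A sketch, with a made-up lookup against table_a (the `id = 12345` predicate is hypothetical):

```sql
-- Does this query use an index or scan the table?
EXPLAIN SELECT * FROM table_a WHERE id = 12345;

-- In the output, check the "type" column:
--   const / eq_ref / ref  -> indexed lookup, fast even when nothing is cached
--   ALL                   -> full table scan; on table_a that reads 20 GB
--                            and churns the buffer pool repeatedly
```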

So... Find out which query is the slowest, then let's focus on making it faster. It may be as simple as adding a "composite" index. Or maybe reformulating the query. Or maybe something else.
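As a hedged illustration of the composite-index idea (the column names here are invented, since the real schema isn't shown): a query filtering on two columns can be served by one index range scan if a single index covers both.

```sql
-- Hypothetical composite index on two columns used together in WHERE clauses.
ALTER TABLE table_c ADD INDEX idx_region_day (region_id, stat_date);

-- A query like this can then use idx_region_day instead of scanning table_c:
SELECT SUM(amount)
FROM table_c
WHERE region_id = 7
  AND stat_date >= '2017-07-01';
```

The leftmost-prefix rule applies: the index above helps queries filtering on `region_id` alone or on `region_id` plus `stat_date`, but not on `stat_date` alone.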

VIEWs will not help; they are syntactic sugar around a SELECT. Recomputing the statistics is not a 'real' fix.

