
I have a quad-core processor with 16 GB of RAM, a 4 TB HDD for the database, and another 4 TB for the OS. The buffer pool allocated is 6 GB. I tried 70% of memory too, but the result is the same: the memory fills up very quickly. When I restart the service it is freed, but within less than 20 minutes 10-15% is used again.

My my.cnf config:

[client]
socket=/db/mysql/mysql.sock

[mysqld]
datadir=/db/mysql
socket=/db/mysql/mysql.sock

innodb_max_dirty_pages_pct = 0
innodb_file_per_table = 1
innodb_buffer_pool_size = 6G
innodb_buffer_pool_instances = 6
thread_cache_size = 4
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
query_cache_type = 0
innodb_fast_shutdown=0

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

Attaching my memory usage graphs too.

[Memory usage graphs]

Important References:

Show Variables - pastebin.com/wzAhva55
Show global status - pastebin.com/8G1N3g78

  • Some more information would be useful, e.g. the buffer pool/memory section from SHOW ENGINE InnoDB STATUS. Does the slow query log contain some information, e.g. slow queries using the sort buffer? Any long-running processes in SHOW PROCESSLIST? Commented Oct 27, 2016 at 5:41
  • InnoDB Status - pastebin.com/Hxi6S5f8 Show ProcessList - pastebin.com/ftTiDSrC Output was too long, so I had to use pastebin. Commented Oct 27, 2016 at 7:34
  • What is the meaning and units of the Y axis on the graphs? Commented Oct 27, 2016 at 22:13
  • Are other apps running on the same server? Commented Oct 27, 2016 at 22:17
  • Please provide (via pastebin) SHOW VARIABLES; and SHOW GLOBAL STATUS; after it has been up several hours. Commented Oct 27, 2016 at 22:20

1 Answer


Observations:

Version: **5.5.50-MariaDB**
**16 GB** of RAM
Uptime = **3d 05:16:26**
You are **not** running on Windows.
Running **64-bit** version
You appear to be running entirely (or mostly) **InnoDB**.
20 issues flagged, out of 145 computed Variables/Status/Expressions checked

The More Important Issues

  • Since you have a "smallish" 6G for the buffer_pool, it is puzzling why the RAM is so full.
  • No replication is involved, correct?
  • Why is innodb_max_dirty_pages_pct set to zero? Normally it is 75 (%). I would expect you to suffer a lot of I/O.
  • The iblogs are rotating extremely rapidly. This is because of high activity together with an extremely tiny log_file_size = 5M. Change that to 1G; but caution: changing it is complicated and involves a restart (see the sketch after this list).
  • 6K tmp tables created per second. 3K table scans/sec. These imply that some indexes or query tuning might be useful.
  • You seem to have a very low turnover in connections, so the comments below about threads and connections are probably not important.
  • Aborted_connects / Connections = 98.8%. (Up to 20% is 'ok'.)
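
A rough sketch of that log resize, assuming a MariaDB 5.5 box where the redo logs live in the datadir (/db/mysql per the my.cnf above) and the service is named mariadb; adjust the paths and the service name to your system:

# 1. Make sure InnoDB does a full, clean shutdown (innodb_fast_shutdown=0 is already in your my.cnf).
mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"

# 2. Stop the server cleanly.
service mariadb stop

# 3. In my.cnf, under [mysqld], raise the size:
#       innodb_log_file_size = 1G

# 4. Move the old redo logs aside (keep them until the server starts up cleanly).
mv /db/mysql/ib_logfile0 /db/mysql/ib_logfile1 /tmp/

# 5. Restart; InnoDB creates new 1G log files on startup.
service mariadb start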

Do you have a RAM disk? How big? I typically argue against such. However, if you have a RAM disk, that would explain the high memory usage, high query rate, and yet seeming low I/O problem.
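
If you are not sure whether there is one, a couple of ordinary Linux commands will show any tmpfs (RAM-backed) mounts and how much RAM mysqld itself actually holds:

df -h -t tmpfs                     # size and usage of tmpfs mounts such as /dev/shm
free -m                            # overall RAM, including buffers/cache
ps -o pid,rss,vsz,cmd -C mysqld    # resident (RSS) vs. virtual size of the mysqld process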

Details and other observations

( innodb_buffer_pool_size / _ram ) = 6,442,450,944 / 16384M = 37.5% -- % of RAM used for InnoDB buffer_pool

( open_files_limit ) = 1,024 -- ulimit -n -- To allow more files, change ulimit or /etc/security/limits.conf or in sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS dependent) On the other hand, Open_table* values are very low, so it seems you are using very few tables.
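
If you ever do need more, here is a sketch of one way to raise it (the 8192 below is only an illustrative number, and the exact mechanism is OS dependent):

# In my.cnf, for mysqld_safe:
#       [mysqld_safe]
#       open_files_limit = 8192
#
# and/or raise the OS limit in /etc/security/limits.conf for the mysql user:
#       mysql  soft  nofile  8192
#       mysql  hard  nofile  8192
#
# After a restart, check what the server actually got:
mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"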

( innodb_max_dirty_pages_pct ) = 0 -- When buffer_pool starts flushing to disk -- Are you experimenting?
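
It is a dynamic variable, so it can go back to the usual 75 without a restart (also set it under [mysqld] in my.cnf so it survives one):

mysql -e "SET GLOBAL innodb_max_dirty_pages_pct = 75;"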

( Innodb_log_writes ) = 26,966,874 / 278186 = 97 /sec

( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 253,091,451,392 / (278186 / 3600) / 2 / 5M = 312 -- Ratio

( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 278,186 / 60 * 5M / 253091451392 = 0.096 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf. -- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size.
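
Working those numbers through: 253,091,451,392 bytes / 278,186 seconds ≈ 0.91 MB of redo written per second, i.e. roughly 3.3 GB per hour; spread over 2 log files that is about 1.6 GB each, which is why 1G per file (instead of 5M) is in the right ballpark for the suggested ~60 minutes between rotations.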

( Questions ) = 889,470,233 / 278186 = 3197 /sec -- Queries (outside SP) -- "qps" -- >2000 may be stressing server

( Queries ) = 85,684,886,985 / 278186 = 308012 /sec -- Queries (including inside SP) -- >3000 may be stressing server

( (Queries-Questions)/Queries ) = (85684886985-889470233)/85684886985 = 99.0% -- Fraction of queries that are inside Stored Routines. -- (Not bad if high; but it impacts the validity of some other conclusions.)

( Created_tmp_tables ) = 1,674,262,702 / 278186 = 6018 /sec -- Frequency of creating "temp" tables as part of complex SELECTs.

( Select_scan ) = 837,131,930 / 278186 = 3009 /sec -- full table scans -- Add indexes / optimize queries (unless they are tiny tables)

( Select_scan / Com_select ) = 837,131,930 / 2511393099 = 33.3% -- % of selects doing full table scan. (May be fooled by Stored Routines.) -- Add indexes / optimize queries
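
One way to catch the offending SELECTs (with the caveat that this can make the slow log very noisy on a server this busy) is to also log queries that use no index at all:

mysql -e "SET GLOBAL log_queries_not_using_indexes = ON;"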

( Com_insert + Com_delete + Com_delete_multi + Com_replace + Com_update + Com_update_multi ) = (837130735 + 0 + 0 + 0 + 0 + 0) / 278186 = 3009 /sec -- writes/sec -- 50 writes/sec + log flushes will probably max out I/O write capacity of normal drives

( expire_logs_days ) = 0 -- How soon to automatically purge binlog (after this many days) -- Too large (or zero) = consumes disk space; too small = need to respond quickly to network/machine crash. (Not relevant if log_bin = OFF)

( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query. -- Suggest 2
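
Both the threshold and the slow log switch are dynamic, so the suggested 2 seconds can be tried without a restart (mirror the values in my.cnf so they survive one):

mysql -e "SET GLOBAL long_query_time = 2; SET GLOBAL slow_query_log = ON;"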

( Aborted_connects / Connections ) = 11,455 / 11594 = 98.8% -- Perhaps a hacker is trying to break in?

( thread_cache_size ) = 4 -- How many extra processes to keep around (Not relevant when using thread pooling) (Autosized as of 5.6.8; based on max_connections)

More

Select_scan / Com_select is so close to 1/3 as to be suspicious that you have a strong pattern in the queries performed. Perhaps there is repetition that could be avoided? Similarly, Com_set_option/Queries is very close to 1/6.

Handler_read* values are extremely small compared to the number of queries.

Hmmm... Select_scan/sec, Com_call_procedure/sec, and Com_insert/sec are all very similar (3009/sec). Only one procedure, and it is called a lot?

Innodb_rows_inserted = 622 /sec.

Sorry, much of this analysis aims at performance, not memory. I did come up with one memory thought. I would guess the ramdisk is 4-5G?


6 Comments

When I increased the buffer pool from 6G to 10G, memory usage rose to 98%, so I reverted to 6G. I set innodb_max_dirty_pages_pct back to the default. I cannot change innodb_log_file_size = 1G in my.cnf as it gives an error.
Do you have a ramdisk?
From df -h there is a tmpfs of 7.7 GB. I attached a screenshot to the main question. But I have not created any of this; it was all there by default. (sdc is my DB drive.)
Assuming that /dev/shm is a ramdisk, then that 7.7GB is not available as RAM for caches, such as the buffer_pool. Hence 6G is about the limit in (16 - 7.7)GB. Put another way: You are using (7.7 + 6 + miscellany) GB, hence the big graphs. Why does the graph drop to near zero? Maybe tmpfs was unmounted then?
The graph drops to zero when I do a manual restart of the mariadb service. How do I get around the RAM disk thing? Is it really necessary for the OS?
