
I am working on an issue in which the Cassandra server has crashed. According to the Cassandra log [1], the problem may be an OutOfMemory error in Apache Cassandra. I think we have to tune Cassandra's parameters to solve this. Is there any other way to solve this issue? How should Cassandra's parameters be tuned for optimal memory usage?

  1. log

    INFO 16:32:17,353 QpidKeySpace.NodeQueues 0,0
    WARN 16:32:17,353 Heap is 0.9997729675985393 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
    WARN 16:32:17,353 Flushing CFS(Keyspace='QpidKeySpace', ColumnFamily='MessageCountDetails') to relieve memory pressure
    INFO 16:32:17,761 MessagingService shutting down server thread.
    ERROR 16:38:08,647 Exception in thread Thread[ReadStage:186,5,main]
    java.lang.OutOfMemoryError: Java heap space
        at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
        at java.nio.ByteBuffer.wrap(ByteBuffer.java:373)
        at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:391)
        at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
        at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
        at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:84)
        at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:73)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:370)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:325)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:151)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:90)
        at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:171)
        at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:154)
        at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:143)
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:122)
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:96)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:157)
        at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:136)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:84)
        at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:293)
        at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
        at org.apache.cassandra.db.Table.getRow(Table.java:347)
        at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
        at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)

    Have you tried increasing memory using something like -Xms512m -Xmx512m? Commented Dec 19, 2013 at 10:13
  • How much RAM do you have and what size heap are you using? Commented Dec 19, 2013 at 22:57

1 Answer


The first step in the tuning process is to take a heap dump and analyze it with Eclipse Memory Analyzer (MAT) or another tool of your choosing.

You don't mention which version of Cassandra you are using. The version will determine parts of the tuning course of action, since newer versions of Cassandra have moved certain structures off-heap.
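In the meantime, the warnings in your log point at the memtable and cache settings. As a rough sketch of where those knobs live (the exact option names vary by version, and the values below are illustrative assumptions for a node with 8 GB of RAM, not recommendations):

```shell
# conf/cassandra-env.sh — illustrative heap sizing
MAX_HEAP_SIZE="4G"      # cap the heap well below physical RAM; the rest feeds the OS page cache
HEAP_NEWSIZE="400M"     # a common rule of thumb is roughly 100 MB per CPU core

# conf/cassandra.yaml — the settings named in your log (1.x-era names)
#   flush_largest_memtables_at: 0.75    # heap fraction that triggers emergency memtable flushes
#   memtable_total_space_in_mb: 1024    # shrink total memtable space to relieve heap pressure
```

Lowering the memtable space trades heap headroom for more frequent flushes, so treat these as starting points and confirm against the heap dump.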

If you don't already have a favorite JMX client, you can download jmxsh from http://code.google.com/p/jmxsh/. Copy the jar to the node where you want to take the heap dump.

To take the heap dump using jmxsh, enter the following command:

    echo 'jmx_invoke -m com.sun.management:type=HotSpotDiagnostic dumpHeap /path/to/heapdump.hprof false' | java -jar jmxsh-R5.jar -h localhost -p 7199
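If installing a separate JMX client is not an option, the same dump can be taken with tooling that ships with the JDK. A sketch (replace `<pid>` with the Cassandra process id, e.g. from `jps` or `ps`):

```shell
# Take a heap dump of a running JVM with jmap (bundled with the JDK)
jmap -dump:format=b,file=/path/to/heapdump.hprof <pid>

# Or have the JVM dump automatically on the next OutOfMemoryError by adding
# these flags to the JVM options in cassandra-env.sh:
#   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps
```

The automatic-dump flags are useful here because your node is already crashing: the next crash produces the evidence for you.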


1 Comment

I also think this issue can be solved by tuning Cassandra's GC parameters. I am working on that and will post the results as soon as I have them. Thanks for your valuable answer.
