I'm trying to get the data from a text file into a HashMap. The text file has the following format:

It has something like 7 million lines (size: ~700 MB).
So what I do is: I read each line, take the fields marked in green, and concatenate them into a string which will be the HashMap key. The value will be the field marked in red.
Every time I read a line I check whether the HashMap already contains an entry with that key. If so, I update the entry by adding the red value to the existing value; if not, I add a new entry to the HashMap.
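A minimal sketch of the approach described above. Since the sample file format isn't shown here, the delimiter (`;`), the key field positions (0 and 1), and the value field position (2) are all assumptions; `HashMap.merge` does the "sum if present, insert if absent" step in one call:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class Aggregator {
    public static Map<String, Long> aggregate(String path) throws IOException {
        Map<String, Long> totals = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(";");        // assumed delimiter
                String key = fields[0] + "|" + fields[1]; // assumed "green" key fields
                long value = Long.parseLong(fields[2]);   // assumed "red" value field
                // sum with the existing value if the key is present, otherwise insert
                totals.merge(key, value, Long::sum);
            }
        }
        return totals;
    }
}
```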
I tried this with text files of 70,000 lines, and it works quite well.
But now, with the 7 million line text file, I get a "java heap space" error, like in the image:

Is this due to the HashMap? Is it possible to optimize my algorithm?