Here is code that walks through the data character by character and substitutes a value whenever it finds one in the mapping. It does assume, though, that each value to be replaced is unambiguous.
def replacer(instring, mapping):
    item = ''
    for char in instring:
        item += char
        # Emit everything except the last 5 characters, which stay in
        # the window in case they form one of the values to replace.
        yield item[:-5]
        item = item[-5:]
        if item in mapping:
            yield mapping[item]
            item = ''
    yield item
old_values = ('0000}', '0000J', '0000K', '0000L', '0000M', '0000N')
new_values = (' -0', ' -1', ' -2', ' -3', ' -4', ' -5')
value_map = dict(zip(old_values, new_values))
file_snippet = '00000000000000010000}0000000000000000000200002000000000000000000030000J0000100000000000000500000000000000000000000' # each line is >7K chars long and there are over 6 gigs of text data
result = ''.join(replacer(file_snippet, value_map))
print(result)
On your example data this gives:
0000000000000001 -0000000000000000000020000200000000000000000003 -10000100000000000000500000000000000000000000
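Since each line is over 7K characters and the whole file runs to gigabytes, you would feed this generator one line at a time rather than reading everything into memory. Here is a minimal sketch of that loop; `io.StringIO` stands in for the real file (whose name and layout I'm assuming), with two short made-up lines:

```python
import io

def replacer(instring, mapping):
    # Slide a window of the last 5 characters across the input,
    # emitting the mapped replacement whenever the window matches.
    item = ''
    for char in instring:
        item += char
        yield item[:-5]
        item = item[-5:]
        if item in mapping:
            yield mapping[item]
            item = ''
    yield item

value_map = {'0000}': ' -0', '0000J': ' -1'}

# Stand-in for the real multi-gigabyte file.
infile = io.StringIO(
    '0000000000000001' '0000}' '0000000000000000000200002' '\n'
    '000000000000000003' '0000J' '00001' '\n'
)

# Processing one line at a time keeps memory use flat no matter how big the file is.
converted = [''.join(replacer(line.rstrip('\n'), value_map)) for line in infile]
print(converted[0])
```

In real use you would write each converted line straight to an output file instead of collecting them in a list.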
A faster way would be to split the data into 5-character chunks, if the data fits that way:
old_values = ('0000}', '0000J', '0000K', '0000L', '0000M', '0000N')
new_values = (' -0', ' -1', ' -2', ' -3', ' -4', ' -5')
value_map = dict(zip(old_values, new_values))
file_snippet = '00000000000000010000}0000000000000000000200002000000000000000000030000J0000100000000000000500000000000000000000000' # each line is >7K chars long and there are over 6 gigs of text data
result = []
for chunk in [file_snippet[i:i+5] for i in range(0, len(file_snippet), 5)]:
    if chunk in value_map:
        result.append(value_map[chunk])
    else:
        result.append(chunk)
result = ''.join(result)
print(result)
This makes no replacements in your example data, because none of the values happen to fall on a 5-character boundary; remove a leading zero to shift the alignment, and then you get:
000000000000001 -0000000000000000000020000200000000000000000003 -10000100000000000000500000000000000000000000
Which is the same output as above, minus the removed zero.
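If the data really is fixed-width and aligned, you don't even need to hold a whole line at a time: read the file in blocks whose size is a multiple of 5 and translate each chunk through the dict. A sketch under that alignment assumption (again with `io.StringIO` standing in for the real file, and a toy block size):

```python
import io

value_map = {'0000}': ' -0', '0000J': ' -1'}
WIDTH = 5            # the fixed record width
BLOCK = WIDTH * 4    # toy block size; real code would read megabytes at a time

# Stand-in for the real file; every value here sits on a 5-character boundary.
infile = io.StringIO('00000' '0000}' '11111' '0000J' '22222')

out = []
while True:
    block = infile.read(BLOCK)
    if not block:
        break
    # dict.get falls back to the chunk itself when there is nothing to replace.
    out.extend(value_map.get(block[i:i + WIDTH], block[i:i + WIDTH])
               for i in range(0, len(block), WIDTH))
result = ''.join(out)
print(result)
```

Because `BLOCK` is a multiple of `WIDTH`, no value can straddle two reads, so each block can be translated independently.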