Chances are the process will be IO bound, not CPU bound, so it probably won't matter much; if it does, it will be because of the decode function, which isn't given in the question.
In theory you have two trade-off situations, which will determine whether (1) or (2) is faster.
The assumption in both is that decode is fast, so your process will be IO bound.
If reading the whole file into memory at once means less context switching, then you waste fewer CPU cycles on those context switches, and reading the whole file is faster.
If reading the file char by char means you don't prematurely yield your CPU time, then in theory you can use the CPU cycles spent waiting on IO to run the decode, and reading char by char will be faster.
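To make the two options concrete, here is a minimal Java sketch of both strategies. The decode function below is a hypothetical stand-in for the one in the question (it just uppercases each char, and assumes ASCII input):

```java
import java.io.*;
import java.nio.file.*;

public class ReadStrategies {
    // Hypothetical stand-in for the question's decode function.
    static char decode(char c) {
        return Character.toUpperCase(c);
    }

    // Strategy A: read the whole file into memory first, then decode.
    // Decode work only starts once all IO has finished.
    static String readWholeThenDecode(Path path) throws IOException {
        byte[] bytes = Files.readAllBytes(path);      // one big IO call
        StringBuilder out = new StringBuilder(bytes.length);
        for (byte b : bytes) {
            out.append(decode((char) b));             // assumes ASCII bytes
        }
        return out.toString();
    }

    // Strategy B: read char by char, decoding as we go.
    // Decode work can overlap with the time spent waiting on IO.
    static String readCharByChar(Path path) throws IOException {
        StringBuilder out = new StringBuilder();
        try (Reader r = new FileReader(path.toFile())) {
            int c;
            while ((c = r.read()) != -1) {            // one read() call per char
                out.append(decode((char) c));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello".getBytes());
        System.out.println(readWholeThenDecode(tmp)); // HELLO
        System.out.println(readCharByChar(tmp));      // HELLO
        Files.delete(tmp);
    }
}
```

Both produce the same result; which one is faster depends on the context-switching and IO-overlap effects described above.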
Here are some timelines:

Read char by char, good case:

TIME:   -------------------------------------------->
IO:     READ CHAR --> wait ----> READ CHAR --> wait
DECODE: wait ------> DECODE --> wait ------> DECODE ...

Read char by char, bad case:

TIME:   -------------------------------------------->
IO:     READ CHAR --> YIELD --> READ CHAR --> wait
DECODE: wait ------> YIELD --> DECODE --> wait --> DECODE ...

Read whole file:

TIME:   -------------------------------------------->
IO:     READ CHAR ..... READ CHAR --> FINISH
DECODE: ----------------------------> DECODE --->
If your decode were really slow, a producer-consumer model would probably be faster. Your best bet is to use a BufferedReader, which will do as much IO as it can while wasting/yielding the fewest CPU cycles.
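A BufferedReader sketch might look like the following. The buffer size of 8192 chars is just the default; decode is again a hypothetical stand-in for the question's function:

```java
import java.io.*;

public class BufferedDecode {
    // Hypothetical stand-in for the question's decode function.
    static char decode(char c) {
        return Character.toUpperCase(c);
    }

    static String decodeAll(Reader source) throws IOException {
        // BufferedReader pulls large chunks from the underlying reader,
        // so most read() calls are served from memory rather than
        // triggering an actual IO operation.
        BufferedReader in = new BufferedReader(source, 8192);
        StringBuilder out = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            out.append(decode((char) c));
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decodeAll(new StringReader("some text"))); // SOME TEXT
    }
}
```

This keeps the convenient char-by-char loop while batching the underlying IO, which is usually the best of both worlds when decode is cheap.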