From the problem description (and from the fact that the proposed awk solution has been accepted), it seems clear that although the file itself is large, each JSON document within it is relatively small, or at least small enough to fit in memory. If that is indeed the case, then a straightforward solution using jq would have performance characteristics similar to those of a sed or awk solution, but without the potential complications. Here, therefore, is such a solution:
jq '.kruxSegmentIds |= with_entries(.key |= if .=="0" then "zero" elif .=="1" then "one" else . end)'
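To illustrate with a made-up document (the numeric values here are invented, and the -c option is used only to keep the output compact):

echo '{"kruxSegmentIds": {"0": 123, "1": 456, "2": 789}}' |
  jq -c '.kruxSegmentIds |= with_entries(.key |= if .=="0" then "zero" elif .=="1" then "one" else . end)'

produces:

{"kruxSegmentIds":{"zero":123,"one":456,"2":789}}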
If jq empty hugefile fails because one of the JSON documents is too large to fit in memory, then jq might still be useful, as its streaming parser (invoked with the --stream command-line option) is designed precisely for such cases.
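To sketch what that might look like here (assuming, as in the illustration above, that the keys to be renamed live directly under a top-level kruxSegmentIds key): each streamed event is a [path, value] or [path] array, so renaming a key amounts to rewriting the relevant path component before reassembling the output with fromstream:

jq -cn --stream '
  fromstream( inputs
    | if (.[0] | length) > 1 and .[0][0] == "kruxSegmentIds"
      then .[0][1] |= (if . == "0" then "zero" elif . == "1" then "one" else . end)
      else . end )' hugefile

This is only a sketch: it handles the layout shown above, and would need to be adapted if kruxSegmentIds can occur at other depths.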
Variations
In the comments, the OP posted another example, so it might be useful to define a filter for performing the key-to-key transformation:
def twiddle:
  with_entries(.key |= if .=="0" then "zero" elif .=="1" then "one" else . end);
With this, the solution to the original problem is:
.kruxSegmentIds |= twiddle
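For example, as a complete invocation (hugefile.json standing in for the actual input file):

jq 'def twiddle:
      with_entries(.key |= if .=="0" then "zero" elif .=="1" then "one" else . end);
    .kruxSegmentIds |= twiddle' hugefile.json

Alternatively, the program can be placed in a file, say twiddle.jq, and invoked as jq -f twiddle.jq hugefile.json.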
Similarly, the solution to the variant is:
(.users.L3AVIcqaDpZxLf6ispK.kruxSegmentIds) |= twiddle
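Using the same made-up values as before, and with twiddle defined as above:

echo '{"users":{"L3AVIcqaDpZxLf6ispK":{"kruxSegmentIds":{"0":123,"1":456}}}}' |
  jq -c 'def twiddle: with_entries(.key |= if .=="0" then "zero" elif .=="1" then "one" else . end);
         (.users.L3AVIcqaDpZxLf6ispK.kruxSegmentIds) |= twiddle'

produces:

{"users":{"L3AVIcqaDpZxLf6ispK":{"kruxSegmentIds":{"zero":123,"one":456}}}}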
Generalizing even further, if the task is to perform the transformation on all objects, wherever they occur, the solution is:
walk(if type == "object" then twiddle else . end)
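For example (again with invented data), note that every object is visited, including the outermost one, so a top-level "0" key is renamed as well:

echo '{"0": {"kruxSegmentIds": {"0": 1, "1": 2}}, "list": [{"1": 3}]}' |
  jq -c 'def twiddle: with_entries(.key |= if .=="0" then "zero" elif .=="1" then "one" else . end);
         walk(if type == "object" then twiddle else . end)'

produces:

{"zero":{"kruxSegmentIds":{"zero":1,"one":2}},"list":[{"one":3}]}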
If your jq does not have walk pre-defined, then you can snarf its def from https://raw.githubusercontent.com/stedolan/jq/master/src/builtin.jq
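For reference, a definition along the lines long circulated with jq 1.5 (check the link above for the current version) is:

def walk(f):
  . as $in
  | if type == "object" then
      reduce keys[] as $key
        ( {}; . + { ($key): ($in[$key] | walk(f)) } )
      | f
    elif type == "array" then map( walk(f) ) | f
    else f
    end;

Note that f is applied bottom-up: the children of an object or array are walked before f is applied to the rebuilt container.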
To be clear, the point throughout is to use jq, which is syntax-aware, and not a non-syntax-aware parser such as sed or awk; that is the source of the potential complications mentioned at the outset.