String representations of data:
- require text encode/decode (which can be cheap, but is still an extra step)
- require complex parse code, especially if there are human-friendly rules like "must allow whitespace"
- usually involve more bandwidth - so more actual payload to churn - due to embedding things like member names, and (again) having to deal with human-friendly representations (how to tokenize the syntax, for example)
- often require lots of intermediate string instances that are used for member lookups etc
Both text-based and binary serializers can be fast and efficient (or slow and horrible)... it's just that binary serializers have the scales tipped in their favor. This means that a "good" binary serializer will usually be faster than a "good" text-based serializer.
Let's compare a basic example of encoding the integer 42:
- JSON:
{"id":42} (9 bytes assuming ASCII/UTF-8 and no whitespace)
- XML:
<id>42</id> (11 bytes assuming ASCII/UTF-8, no whitespace and no namespace noise)
- protobuf:
0x08 0x2a (2 bytes)
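The byte counts above are easy to verify by hand. Here is a short sketch (plain Python, no protobuf library) that builds the 2-byte protobuf payload from first principles - the tag byte is (field_number << 3) | wire_type, and 42 fits in a single varint byte - and compares it against the JSON encoding:

```python
import json

# protobuf: field 1, wire type 0 (varint)
field_number = 1
wire_type = 0
tag = (field_number << 3) | wire_type    # 0x08
proto_payload = bytes([tag, 42])         # b'\x08\x2a' - 2 bytes

# JSON: the same value, no whitespace
json_payload = json.dumps({"id": 42}, separators=(",", ":")).encode("utf-8")

print(len(proto_payload))  # 2
print(len(json_payload))   # 9
```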
Now imagine writing a general-purpose XML or JSON parser, and all the ambiguities and scenarios you need to handle just at the text layer; then you need to map the text token "id" to a member, and then you need to do an integer parse on "42".
In protobuf, the payload is smaller, the math is simple, and the member lookup is an integer (so: suitable for a very fast switch/jump).
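To make that concrete, here is a toy decoder sketch (illustrative only - real protobuf libraries generate this kind of code for you): the member lookup is just an integer comparison on the field number, and the value arrives as bytes that need arithmetic, not text parsing. The field name "id" for field number 1 is an assumption carried over from the example above.

```python
def decode(data: bytes) -> dict:
    """Decode a tiny message with one known field: 1 -> "id" (varint)."""
    msg, i = {}, 0
    while i < len(data):
        tag = data[i]; i += 1
        field_number = tag >> 3
        wire_type = tag & 0x07
        if wire_type == 0:
            # varint: accumulate 7-bit groups, high bit marks continuation
            value, shift = 0, 0
            while True:
                b = data[i]; i += 1
                value |= (b & 0x7F) << shift
                shift += 7
                if not (b & 0x80):
                    break
            if field_number == 1:  # integer dispatch, no string lookup
                msg["id"] = value
        else:
            raise ValueError("wire type not handled in this sketch")
    return msg

print(decode(b"\x08\x2a"))  # {'id': 42}
```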
If a protobuf field is named "foo", then this name is not part of the message. In XML and JSON you will include "foo" as a literal string for each occurrence of the property "foo" in the message. The result is that protocol buffer messages are very compact compared to the same messages in XML or JSON.
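That name-repetition cost compounds as the property occurs more often. A quick sketch (field name "foo" and field number 1 are illustrative assumptions): each occurrence costs the full literal name in JSON, but only a 1-byte tag in protobuf.

```python
import json

values = [1, 2, 3]

# JSON repeats the literal name "foo" for every occurrence
json_payload = json.dumps([{"foo": v} for v in values],
                          separators=(",", ":")).encode("utf-8")

# protobuf repeats only the 1-byte tag (field 1, varint) per occurrence
proto_payload = b"".join(bytes([0x08, v]) for v in values)

print(len(json_payload))   # 31
print(len(proto_payload))  # 6
```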