I want to point out here that async serial communication relies on both ends providing their own clock (there is no transmitted clock signal). Thus the sender clocks out bits at the "baud rate" (e.g. 9600 bits per second) and the receiver clocks them in at, it hopes, the same rate, re-timing itself on each start bit.
With a start bit and 8 data bits you can therefore tolerate some difference in the clocks between sender and receiver (and there will always be some difference). The receiver samples roughly the middle of each bit, so the clocks only need to agree well enough that by the last bit of the frame the accumulated timing error is still under about half a bit period; over a 10-bit frame that works out to a tolerance of a few per cent in clock frequency, and the error is wiped out again at the next start bit.
Now if you want to transmit 64 bits in one frame you have reduced this error margin by a factor of about 8, because the same half-bit budget now has to cover 8 times as many bit periods. So let's say (ballpark) that you can now only tolerate a difference of well under 1% between sender and receiver.
You now need to be pretty sure that the clocks on the respective boards are accurate enough. The Uno, for example, uses a ceramic resonator (CSTCE16M0V53-R0) which appears to have a tolerance of 0.5% and a frequency stability of 0.2%. So if the sender is running 0.5% slow and the receiver 0.5% fast, they are a full 1% apart, and on those long frames you will almost certainly get errors.
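To put rough numbers on that, here is a back-of-envelope sketch (plain C++, not Arduino code; the mid-bit sampling and half-bit error budget are simplifying assumptions, not a description of the actual UART hardware, and it assumes the 64 data bits go out behind a single start bit):

```
#include <cstdio>

// Accumulated sampling error, in bit periods, at the centre of the last bit of
// a frame, for a given relative clock mismatch between sender and receiver.
// The receiver re-times itself on the start-bit edge, so the error grows from
// zero over the length of one frame.
static double driftAtLastBit(int bitsPerFrame, double mismatch) {
    return (bitsPerFrame - 0.5) * mismatch;   // last bit centre is N - 0.5 bits in
}

int main() {
    const double mismatch = 0.01;   // sender and receiver 1% apart (0.5% each way)

    // Normal frame: start + 8 data + stop = 10 bits.
    printf("10-bit frame: drift = %.2f bit periods\n", driftAtLastBit(10, mismatch));

    // 64 data bits behind one start bit: start + 64 data + stop = 66 bits.
    printf("66-bit frame: drift = %.2f bit periods\n", driftAtLastBit(66, mismatch));

    // Much past ~0.5 of a bit period and the receiver is sampling the wrong
    // bit by the end of the frame: ~0.10 bits for the short frame (fine),
    // ~0.66 bits for the long one (errors).
    return 0;
}
```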
However, when it becomes unsynced, some new 0 bit should be treated as the start bit, in which case it should not be printed. But I don't see any missing zero. It's as if the actual data just got shifted.
How will it get unsynced? I don't think you are following how serial comms works. If you don't have any delay between bytes (or batches of bytes in your case) then it is impossible to resync, because if you miss a start bit, all the following bits will still appear valid; they'll just be shifted. That is exactly what you are describing.
Really, to ensure syncing you need a gap between bytes (or batches of bytes in your case) that is longer than a whole byte could be, otherwise this mis-sync can happen. So if you make your batches longer, the gap has to be longer too, which largely removes the slight advantage of the larger batches.
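To see what that "shifted but plausible" failure looks like, here is a toy simulation (plain C++ with made-up helper names, not your code or the Arduino core): it frames some bytes the way a UART does (start bit, 8 data bits LSB first, stop bit) with no idle time between them, then decodes the same bit stream once from the right position and once starting one bit late. The toy receiver never checks stop bits, so nothing flags the problem; real hardware may raise framing errors, but the bytes still look perfectly ordinary.

```
#include <cstdio>
#include <vector>

// Frame a byte as 1 start bit (0), 8 data bits LSB first, 1 stop bit (1),
// back to back with no idle line between frames.
static void frameByte(unsigned char b, std::vector<int>& bits) {
    bits.push_back(0);                      // start bit
    for (int i = 0; i < 8; ++i)
        bits.push_back((b >> i) & 1);       // data bits, LSB first
    bits.push_back(1);                      // stop bit
}

// Decode 10-bit frames starting 'offset' bits into the stream, taking whatever
// happens to be there as start/data/stop bits (no stop-bit checking).
static void decodeFrom(const std::vector<int>& bits, size_t offset) {
    for (size_t p = offset; p + 10 <= bits.size(); p += 10) {
        unsigned b = 0;
        for (int i = 0; i < 8; ++i)
            b |= bits[p + 1 + i] << i;
        printf("%02X ", b);
    }
    printf("\n");
}

int main() {
    std::vector<int> bits;
    for (unsigned char b : {0x11, 0x22, 0x33, 0x44, 0x55, 0x66})
        frameByte(b, bits);

    decodeFrom(bits, 0);   // in sync: 11 22 33 44 55 66
    decodeFrom(bits, 1);   // one bit late: different, but perfectly plausible, bytes
    return 0;
}
```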
Imagine you have an incoming stream of bits on the receiver:
011110101101100011110101101101101100110101100101100110101111001010101110111001010101111
1101001010100101101001010100111011111001000010111100100011011001001001001011001001010
And imagine that you missed that first start bit. Without a gap between bytes (which would appear on the wire as a run of 1s: 1111111111) you have no way of knowing which is a start bit and which is a data bit. Every valid frame contains a 0 (its start bit), so a long enough run of 1s can only be the inter-byte gap, never data.
In other words, with 8-bit bytes, data on the wire is always 0xxxxxxxx1, and thus a run of ten 1s (1111111111) must be the gap and not data, because there is no zero anywhere in it. But with any shorter gap, if the receiver starts sampling in the middle of a byte it might well take the next 0 in the stream to be a start bit (wrongly).
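And conversely, if the gap is there, regaining sync is easy: wait until you have seen a run of at least ten 1 bits, and only then treat the next 0 as a start bit. A minimal sketch of that rule (again plain C++ working on a string of bits; this illustrates the framing argument, not what the UART hardware literally does):

```
#include <cstdio>
#include <string>

// A frame is 0xxxxxxxx1, so back-to-back frames can contain at most nine 1s
// in a row. Ten or more consecutive 1s must therefore be an inter-batch gap,
// and only after seeing such a run do we trust the next 0 as a start bit.
int main() {
    // Idle line, then 0x55 and 0xFF framed back to back, then idle again.
    std::string line = "1111111111" "0101010101" "0111111111" "1111111111";

    int ones = 0;          // length of the current run of 1s
    bool synced = false;   // true once we have seen a long enough idle run

    for (size_t i = 0; i < line.size(); ++i) {
        if (line[i] == '1') {
            if (++ones >= 10) synced = true;   // long run of 1s: must be a gap
            continue;
        }
        ones = 0;
        if (!synced) continue;                 // a 0, but we can't be sure it's a start bit

        // Synced and line[i] is a start bit: read the next 8 data bits (LSB first).
        unsigned b = 0;
        for (int k = 0; k < 8 && i + 1 + k < line.size(); ++k)
            b |= (line[i + 1 + k] - '0') << k;
        printf("byte: 0x%02X\n", b);
        i += 9;                                // skip the data bits and the stop bit
    }
    return 0;
}
```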