Building on fge's answer...
You're seeing this because new String(ebcdicByte) and ebcdicString.getBytes() both use the platform's default charset, which differs across your systems.
ISO-8859-1 and windows-1252 are one-byte charsets. In those charsets, one byte always represents one character. So in AIX and Windows, when you do new String(ebcdicByte), you will always get a String whose character count is identical to your byte array's length. Similarly, converting a String back to bytes will use a one-to-one mapping.
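To illustrate (a minimal sketch; the byte values are arbitrary examples, not from your data): with a single-byte charset such as ISO-8859-1, decoding and re-encoding is always a lossless one-to-one round trip:

```java
import java.nio.charset.StandardCharsets;

public class OneByteRoundTrip {
    public static void main(String[] args) {
        // Three arbitrary high-bit bytes; every byte value is a valid
        // ISO-8859-1 character, so decoding can never fail.
        byte[] bytes = { (byte) 0xC1, (byte) 0xC2, (byte) 0xC3 };
        String s = new String(bytes, StandardCharsets.ISO_8859_1);
        System.out.println(s.length());                  // one char per byte
        byte[] back = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(back.length);                 // one byte per char
    }
}
```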
But in UTF-8, one character does not necessarily correspond to one byte. In UTF-8, byte values 0 through 127 (0x00–0x7F) each represent a single character, but every byte with its high bit set must be part of a multi-byte sequence.
However, not just any sequence of bytes with the high bit set is a valid UTF-8 sequence. If you give a UTF-8 decoder bytes that are not properly encoded UTF-8, the input is considered malformed. new String simply maps each malformed sequence to a replacement character, usually "�" ('\ufffd'). You can change that behavior by explicitly creating your own CharsetDecoder and calling its onMalformedInput method, rather than relying on new String(byte[]).
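For example (a sketch; only the first three of your bytes are used here for brevity), configuring the decoder with CodingErrorAction.REPORT makes it throw instead of silently substituting '\ufffd':

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictUtf8Decode {
    public static void main(String[] args) {
        byte[] ebcdicByte = { (byte) 0xC1, (byte) 0xC2, (byte) 0xC3 };
        // REPORT makes the decoder throw on malformed input instead of
        // quietly substituting '\ufffd' like new String(byte[]) does.
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT);
        try {
            decoder.decode(ByteBuffer.wrap(ebcdicByte));
            System.out.println("decoded OK");
        } catch (CharacterCodingException e) {
            System.out.println("malformed input detected");
        }
    }
}
```

This is the safer option when silent data corruption would otherwise go unnoticed.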
So, the ebcdicByte array contains this EBCDIC representation of "ABC12345":
C1 C2 C3 F1 F2 F3 F4 F5
None of those are valid UTF-8 byte sequences, so ebcdicString ends up as "\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd" which is "��������".
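You can reproduce this directly (a sketch assuming a default or explicit UTF-8 charset):

```java
import java.nio.charset.StandardCharsets;

public class MalformedDecode {
    public static void main(String[] args) {
        // EBCDIC encoding of "ABC12345"
        byte[] ebcdicByte = { (byte) 0xC1, (byte) 0xC2, (byte) 0xC3, (byte) 0xF1,
                              (byte) 0xF2, (byte) 0xF3, (byte) 0xF4, (byte) 0xF5 };
        // Each byte starts a multi-byte UTF-8 sequence that is never completed,
        // so the decoder substitutes U+FFFD for every one of them.
        String ebcdicString = new String(ebcdicByte, StandardCharsets.UTF_8);
        System.out.println(ebcdicString.length());
        System.out.println(ebcdicString.chars().allMatch(c -> c == 0xFFFD));
    }
}
```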
Your last line of code calls ebcdicString.getBytes(), again without specifying a charset, so the default charset is used. In UTF-8, "�" encodes as three bytes, EF BF BD. Since ebcdicString contains eight of them, you get 3×8=24 bytes.
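A quick check of both facts (a sketch using explicit charsets so it behaves the same on any platform):

```java
import java.nio.charset.StandardCharsets;

public class ReplacementCharSize {
    public static void main(String[] args) {
        // U+FFFD encodes to three bytes in UTF-8: EF BF BD.
        byte[] one = "\ufffd".getBytes(StandardCharsets.UTF_8);
        for (byte b : one) {
            System.out.printf("%02X ", b & 0xFF);
        }
        System.out.println();
        // Eight replacement characters therefore take 3 x 8 = 24 bytes.
        String ebcdicString = "\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd";
        System.out.println(ebcdicString.getBytes(StandardCharsets.UTF_8).length);
    }
}
```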