Unicode is logically a 21-bit code. As modern computers don’t conveniently work with such units, there are various solutions: use 32 bits (4 bytes) per character, wasting a lot of space, especially if your data is predominantly English; use a special scheme with one or two 16-bit units per character; or use a variable number of 8-bit bytes per character. These are known as the UTF-32, UTF-16, and UTF-8 encodings, respectively.
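As a concrete illustration, the following Python sketch (the euro sign is just an arbitrary BMP character chosen here) shows how the same character comes out under each of the three encodings:

    # How one character, U+20AC EURO SIGN, looks in the three encodings,
    # using Python's built-in codecs ("-be" picks big-endian byte order).
    ch = "\u20ac"
    print(ch.encode("utf-32-be"))  # b'\x00\x00 \xac'  - always 4 bytes
    print(ch.encode("utf-16-be"))  # b' \xac'          - one 16-bit unit
    print(ch.encode("utf-8"))      # b'\xe2\x82\xac'   - 3 bytes, variable length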
Windows uses UTF-16 internally, whereas UTF-8 dominates on the Web, for example, so you often need to convert between them. This is nontrivial but is usually done with suitable library routines, sometimes implicitly, depending on the programming environment. UTF-32 is rarely used.
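In Python, for instance, the conversion goes through the language’s internal string type; a minimal sketch, assuming the data arrives as raw UTF-16 bytes (the sample text is made up):

    # Convert UTF-16 data (e.g. from a Windows source) to UTF-8 and back.
    utf16_data = "Grüße, world".encode("utf-16-le")  # pretend this came from a file

    text = utf16_data.decode("utf-16-le")   # bytes -> internal string
    utf8_data = text.encode("utf-8")        # internal string -> UTF-8 bytes

    assert utf8_data.decode("utf-8") == text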
Technically, UTF-16 is very simple for all characters that fit into the 16-bit subspace of Unicode, the Basic Multilingual Plane (BMP), which quite possibly covers all the characters you have ever heard of; characters outside the BMP need two 16-bit units, a so-called surrogate pair. UTF-8 is more complex but has been designed with a Western emphasis: all Ascii characters are represented as single bytes in UTF-8, so any file that is mostly Ascii has almost the same size in UTF-8 as in Ascii. This is the opposite of UTF-16, which always uses two bytes per Ascii character.
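The size difference is easy to verify; a small Python sketch (the sample strings are arbitrary):

    ascii_text = "Plain English text."           # Ascii only
    print(len(ascii_text.encode("ascii")))       # 19 bytes
    print(len(ascii_text.encode("utf-8")))       # 19 bytes - same as Ascii
    print(len(ascii_text.encode("utf-16-le")))   # 38 bytes - two per character

    # A character outside the BMP: two 16-bit units (a surrogate pair) in UTF-16.
    clef = "\U0001d11e"                          # MUSICAL SYMBOL G CLEF
    print(len(clef.encode("utf-16-le")))         # 4 bytes
    print(len(clef.encode("utf-8")))             # 4 bytes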