GNU does have a Unicode string library, called libunistring, but it doesn’t handle anything nearly as well as ICU does.
For example, the GNU library doesn’t even give you access to collation, which is the basis for all string comparison; ICU does. Another thing ICU has that GNU doesn’t is Unicode regexes. For those, you might like Phil Hazel’s excellent PCRE library for C, which can be compiled with UTF-8 support.
However, it might be that the GNU library is enough for what you need. I don’t much like its API, though: very messy. If you like C programming, you might try the Go programming language, which has excellent Unicode support. It’s a new language, but small and clean and fun to use.
On the other hand, the major interpreted languages — Perl, Python, and Ruby — all have varying support for Unicode that is better than you’ll ever get in C. Of those, Perl’s Unicode support is the most developed and robust.
Remember: it isn’t enough to support more characters. Without the rules that go with them, you don’t have Unicode. At most, you might have ISO 10646: a large character repertoire but no rules. My mantra is “Unicode isn’t just more characters; it’s more characters plus a whole bunch of rules for handling them.”
strlen doesn’t work at all if there are U+0000 code points in the string, which are completely legal. It is disingenuous to say that it tells you the “length” of the string. It doesn’t: it tells you the number of bytes only, not the number of code points, which is what you would want. Then again, strlen doesn’t work for ASCII strings that contain the ASCII NUL either. But we don’t go around saying it doesn’t work for ASCII strings, do we?