You mainly need to be aware of two things: the data you're representing, and any intermediate steps in your calculations.
It certainly makes sense to have age be an unsigned int, because we usually don't consider negative ages. But then you mention subtracting one age from another. If we just blindly subtract one age from another, it's definitely possible to end up with a negative result, even though we previously agreed that negative ages don't make sense, and with unsigned arithmetic that negative result wraps around to a huge positive number instead. So in this case you would want the calculation done with a signed integer.
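As a small sketch of what that wraparound looks like (the ages are made up, and since Java has no unsigned int, I'm reinterpreting the result with Integer.toUnsignedLong to show what an unsigned subtraction would hand you in a language that has one):

int myAge = 20;
int yourAge = 25;
int diff = myAge - yourAge;                        // -5, as you'd expect with signed math
// The same bit pattern read as unsigned is nowhere near -5:
System.out.println(Integer.toUnsignedLong(diff));  // 4294967291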
As for whether unsigned values are bad or not, I would say it's a huge generalization to call them bad. Java doesn't have unsigned types, as you mentioned, and it constantly annoys me. A byte is eight bits, which can hold a value from 0-255 (0x00-0xFF). But in Java, if you want to write a byte literal larger than 127 (0x7F), you either have to write it as a negative number or cast an int to a byte. You end up with code that looks like this:
byte a = 0x80; // Won't compile!
byte b = (byte) 0x80;
byte c = -128; // Equal to b
The above annoys me to no end. I'm not allowed to give a byte a value of 197, even though that's a perfectly valid value for most sane people dealing with bytes. I can cast the integer, or I can write the negative number that has the same bit pattern (197 == -59 as a byte, in this case). Also consider this:
byte a = 70;
byte b = 80;
byte c = (byte) (a + b); // c == -106 (and it won't even compile without the cast)
So as you can see, adding two bytes with valid values, whose sum is still a perfectly valid 8-bit value, ends up flipping the sign. Not only that, but it's not immediately obvious that 70 + 80 == -106. Technically this is an overflow, but in my mind (as a human being) a byte shouldn't overflow for values under 0xFF. When I do bit arithmetic on paper, I don't treat the 8th bit as a sign bit.
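For what it's worth, Java 8 added helpers that at least let you read the unsigned value back out. A small sketch of that workaround for the example above:

byte c = (byte) (70 + 80);                 // stored as -106
System.out.println(c);                     // prints -106
System.out.println(Byte.toUnsignedInt(c)); // prints 150, the value I actually meant

It works, but you're still doing bookkeeping by hand that an unsigned type would do for you.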
I work with a lot of integers on the bit level, and having everything be signed usually makes everything less intuitive and harder to deal with, because you have to remember that right-shifting a negative number shifts new 1s into the top of your number (an arithmetic shift), whereas right-shifting an unsigned integer never does that. For example (in pseudocode, since Java can't express the unsigned case):
signed byte b = 0b10000000;
b = b >> 1;   // b == 0b11000000 (the sign bit is shifted back in)
b = b & 0x7F; // b == 0b01000000

unsigned byte b = 0b10000000;
b = b >> 1;   // b == 0b01000000
It just adds extra steps that I feel shouldn't be necessary.
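In Java, where everything is signed, the usual workaround is to mask the byte up to an int first and shift that. A rough sketch of what those extra steps look like in practice:

byte b = (byte) 0b10000000;               // the bit pattern 1000 0000, stored as -128

byte arith   = (byte) (b >> 1);           // 0b11000000: the sign bit gets dragged in
byte logical = (byte) ((b & 0xFF) >> 1);  // 0b01000000: mask to 0..255 first, then shift

For full-width ints Java does give you the >>> operator, which shifts in zeros, but for bytes and shorts you still end up masking by hand.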
While I used byte above, the same applies to 32-bit and 64-bit integers. Not having unsigned types is crippling, and it shocks me that there are high-level languages like Java that don't offer them at all. But for most people this is a non-issue, because many programmers don't deal with bit-level arithmetic.
In the end, it's useful to use unsigned integers when you're thinking of them as bits, and signed integers when you're thinking of them as numbers.