I'm currently learning about the "numeric" data type on MSDN and have encountered the following passage:
> Converting from decimal or numeric to float or real can cause some loss of precision. Converting from int, smallint, tinyint, float, real, money, or smallmoney to either decimal or numeric can cause overflow.
I don't really understand the reason behind the loss of precision/overflow when converting to or from the "decimal or numeric" data types. Could someone please explain it to me?
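To make clear what I'm asking about, here is a minimal T-SQL sketch of the two cases; the literal values and column aliases are just my own examples, not taken from the documentation:

```sql
-- Case 1: decimal -> real can lose precision.
-- decimal(18,10) stores all 17 digits exactly, but real keeps only
-- about 7 significant decimal digits, so the tail gets rounded away.
DECLARE @d decimal(18,10) = 1234567.8901234567;
SELECT @d                AS original_decimal,
       CAST(@d AS real)  AS as_real;   -- approximate, e.g. 1234568

-- Case 2: float -> decimal can overflow.
-- 1e10 needs 11 integer digits, but decimal(9,2) can hold at most
-- 7 digits before the decimal point (its maximum is 9999999.99).
DECLARE @f float = 1e10;
SELECT CAST(@f AS decimal(9,2));       -- raises an arithmetic overflow error
```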