Looking at an MIT note on propagation of errors, the relative (percentage) error of a product $m\cdot v$ is, to first order, the sum of the relative errors, so we can report the range of values as $m v \left(1 \pm \left(\frac{\delta m}{m} + \frac{\delta v}{v}\right)\right),$ in which $\delta m$, for example, is the absolute uncertainty in the mass.
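As a quick numerical check of that formula (with made-up values for the measurements and their uncertainties), the first-order expression differs from the exact extreme of the product only by the second-order term $\delta m \cdot \delta v$:

```python
# Sketch only: verify that, to first order, the relative error of a
# product is the sum of the relative errors. All values are assumed.
m, v = 2.0, 5.0        # assumed measured mass and velocity
dm, dv = 0.02, 0.10    # assumed absolute uncertainties (1% and 2%)

exact_high = (m + dm) * (v + dv)          # largest possible product
approx_high = m * v * (1 + dm/m + dv/v)   # first-order error formula

# The two agree up to the second-order cross term dm*dv:
print(exact_high - approx_high)
```

The leftover discrepancy is exactly $\delta m \cdot \delta v = 0.002$, which is why the cross term is dropped when the uncertainties are small.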
Here is my question. In textbook problems on the uncertainty principle, we may be given $\Delta x$, the uncertainty in position, and a mass $m$, and be asked to find the uncertainty in velocity $\Delta v$, on the premise that $\Delta p = m\cdot \Delta v$ and, of course, $\Delta x \cdot \Delta p \geq \hbar /2.$
This idea is given explicitly in Harris, *Modern Physics*, 2nd ed., p. 49: $\Delta v = \frac{\Delta p}{m}.$
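The textbook calculation in question looks like this numerically (the particle and the value of $\Delta x$ are my assumptions, not from Harris):

```python
# Sketch of the textbook calculation: minimum velocity uncertainty
# from dx*dp >= hbar/2 and dp = m*dv, so dv = hbar / (2*m*dx).
# Assumed example: an electron confined to dx = 1e-10 m (atomic scale).
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m = 9.1093837015e-31      # electron mass, kg
dx = 1e-10                # assumed position uncertainty, m

dv = hbar / (2 * m * dx)  # minimum velocity uncertainty, m/s
print(dv)                 # ~5.8e5 m/s
```

Note that the full mass $m$ appears in the denominator here, which is exactly what prompts the question below.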
By the same logic, I think we would have $\Delta m = \frac{\Delta p}{v}$.
Since the mass is presumably much greater than its uncertainty, and the velocity much greater than its uncertainty, I am not at all sure why we can use (in this case) $m$ itself rather than its uncertainty $\delta m.$ It seems we would need to know the uncertainty in $m$, not $m$ itself.
Can someone explain this?