
I am a bit confused by string formatting when I want to print a floating-point number. For example, with %6.2f I expect the print function to give me a number with 6 digits, including 2 digits in its decimal part. So, when I write the following statements:

mRa = 569.34643865
print("result %6.2f" %mRa)

The outcome will be result 569.35.

But when I change the second line of the code to print("result %1.0f" % mRa), the result is result 569, while I expected to see only the first digit of the number. Additionally, if I change that line to print("result %3.0f" % mRa), the result is exactly the same. I would appreciate it if someone could explain why we see this (seemingly conflicting) behavior.

  • As an aside, for modern code bases you really should be using either .format or f-strings. The %-based, printf-style formatting is all but deprecated and is explicitly advised against in the documentation. Commented Jul 7, 2019 at 22:54
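For reference, a sketch of the same print statement using the alternatives mentioned in the comment; the format spec after the colon is the same mini-language as in the % version:

mRa = 569.34643865
print(f"result {mRa:6.2f}")            # f-string (Python 3.6+): result 569.35
print("result {:6.2f}".format(mRa))    # str.format: same output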

1 Answer


Per the documentation, the 1 in %1.0f is the minimum field width (in the “printf-style String Formatting” section, item 4 in the list). It is the minimum number of characters to use for the field.
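A minimal demonstration with the value from the question (expected output shown in the comments):

mRa = 569.34643865
print("result %1.0f" % mRa)   # result 569 -- the value needs 3 characters, more than the minimum of 1
print("result %3.0f" % mRa)   # result 569 -- identical: 3 characters already satisfy the minimum
print("result %8.0f" % mRa)   # result      569 -- now the field is padded with spaces to reach 8 characters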

The minimum field width is intended to help programs format tables: a program that has correctly calculated the maximum space it needs can use that as the field width, and lines with different values will appear aligned when printed in succession. But if a value exceeds the minimum field width, the correct value is still displayed, using more characters and spoiling the table alignment. When the field width turns out to be insufficient, a messy display is generally preferable to an incorrect one, so extra characters are used rather than clipping the number.
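For example, a width chosen to fit the largest expected value keeps a column of numbers aligned (the values here are made up for illustration):

values = [569.34643865, 3.14159, 12345.678]
for v in values:
    print("%10.2f" % v)   # width 10 right-aligns every row
# prints:
#     569.35
#       3.14
#   12345.68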

To clip a field so that it shows only certain digits, you must manipulate strings yourself.
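A sketch of one way to do that, producing the single leading digit the question expected from %1.0f (slicing the formatted string is just one option):

mRa = 569.34643865
s = "%.0f" % mRa            # "569" -- format first, with no fractional digits
print("result " + s[:1])    # result 5 -- then keep only the first character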

Clipping digits on the right has been considered acceptable because such clipping changes the displayed value only by small amounts. (Nonetheless, the resulting inexact displays of numbers have confused generations of programmers about how floating-point arithmetic works.) So the precision field (the 2 in %6.2f) specifies exactly how many digits appear after the decimal point, not a minimum number of digits.
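To see the two fields working together, compare (expected output in the comments):

mRa = 569.34643865
print("result %6.2f" % mRa)   # result 569.35 -- exactly 2 digits after the point, rounded
print("result %6.4f" % mRa)   # result 569.3464 -- 8 characters, so the minimum width of 6 is simply exceeded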
