Try setting the Page Segmentation Mode (PSM) to mode 6, which tells Tesseract to assume a single uniform block of text.
Specifically, do:
bal = pytesseract.image_to_string(balIm, config='--psm 6')
This should give you what you need. In fact, I ran this on your image and it returns the text you're looking for. Note that I first downloaded the image you provided above and read it in offline on my local machine:
In [8]: import pytesseract
In [9]: from PIL import Image
In [10]: balIm = Image.open('wC62s.png')
In [11]: pytesseract.image_to_string(balIm, config='--psm 6')
Out[11]: '0.03,'
As a final note, if Tesseract doesn't quite work for you out of the box, try one of its other Page Segmentation Modes to help increase accuracy: https://tesseract-ocr.github.io/tessdoc/ImproveQuality#page-segmentation-method. For completeness, the full list is reproduced below.
0 Orientation and script detection (OSD) only.
1 Automatic page segmentation with OSD.
2 Automatic page segmentation, but no OSD, or OCR.
3 Fully automatic page segmentation, but no OSD. (Default)
4 Assume a single column of text of variable sizes.
5 Assume a single uniform block of vertically aligned text.
6 Assume a single uniform block of text.
7 Treat the image as a single text line.
8 Treat the image as a single word.
9 Treat the image as a single word in a circle.
10 Treat the image as a single character.
11 Sparse text. Find as much text as possible in no particular order.
12 Sparse text with OSD.
13 Raw line. Treat the image as a single text line, bypassing hacks that are Tesseract-specific.
When you run image_to_string, pass the config parameter with the PSM you want to operate in. Try a few of these until you get a result that works for your image. Make sure you include --psm in the config parameter before executing.
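If you're not sure which mode fits your image, one quick way to compare them is to loop over a few candidate modes and print what each returns. This is just a sketch using the same wC62s.png image from the session above; the particular set of modes tried here is my own choice, not an official recommendation:

import pytesseract
from PIL import Image

balIm = Image.open('wC62s.png')

# Try a handful of candidate PSM modes and inspect each result.
for psm in (3, 4, 6, 7, 11):
    text = pytesseract.image_to_string(balIm, config=f'--psm {psm}')
    print(f'--psm {psm}: {text!r}')

Whichever mode prints the cleanest result is the one to hard-code in your actual script.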