The classification report summarizes the key metrics of a classification problem: for each class you are trying to find, it gives precision, recall, f1-score and support.
Recall answers: "out of all the elements that actually belong to this class, how many did the classifier find?"
Precision answers: "out of all the elements the classifier assigned to this class, how many actually belong to it?"
The f1-score is the harmonic mean of precision and recall.
The support is the number of occurrences of the given class in your dataset (so you have 37.5K samples of class 0 and 37.5K of class 1, which is a really well balanced dataset).
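To make those definitions concrete, here is a small sketch with made-up labels (the arrays are assumptions, just to show how the report lines up with the per-class definitions above):

```python
from sklearn.metrics import classification_report

# Hypothetical ground truth and predictions, 4 samples per class.
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 0, 1]

# For class 0: 3 of the 4 actual zeros were found (recall 0.75),
# and 3 of the 4 predicted zeros were correct (precision 0.75).
print(classification_report(y_true, y_pred))
```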
The thing is, precision and recall are mostly used for imbalanced datasets, because on a highly imbalanced dataset a 99% accuracy can be meaningless.
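A quick sketch of why accuracy misleads on imbalance (the 99/1 split is an assumed toy example): a classifier that always predicts the majority class scores 99% accuracy while never finding the minority class.

```python
from sklearn.metrics import accuracy_score, recall_score

# Toy imbalanced dataset: 99 negatives, 1 positive.
y_true = [0] * 99 + [1]
y_pred = [0] * 100  # degenerate classifier: always predicts the majority class

acc = accuracy_score(y_true, y_pred)
rec = recall_score(y_true, y_pred)
print(acc)  # 0.99 — looks great
print(rec)  # 0.0  — the positive class is never found
```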
I would say that you don't really need to look at these metrics for this problem, unless a given class absolutely has to be correctly detected.
To answer your other question: you cannot compare the precision and the recall across two classes. It only means your classifier is better at finding class 0 than class 1.
The precision and recall from sklearn.metrics.precision_score and recall_score should not differ from the values in the report. But since the code is not provided, it is impossible to determine the root cause of this.
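One way to check this on your side (the labels here are assumptions for illustration): the per-class numbers in the report's dict form should match precision_score / recall_score exactly for the same pos_label.

```python
from sklearn.metrics import classification_report, precision_score, recall_score

# Hypothetical binary labels, just to demonstrate the consistency check.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

report = classification_report(y_true, y_pred, output_dict=True)
assert report["1"]["precision"] == precision_score(y_true, y_pred, pos_label=1)
assert report["1"]["recall"] == recall_score(y_true, y_pred, pos_label=1)
```

If the numbers differ in your code, a common cause is comparing the report's per-class row against a score computed with a different `pos_label` or `average` setting.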