You can pass a URL directly to read_csv(), but it has no method that gives you the status code. It simply raises an error for a non-200 response, and you have to use try/except to catch it. There is an example of that in the other answer.
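A minimal sketch of that try/except pattern (using a hypothetical missing path here so it can run offline; with a URL, a 404 would raise urllib.error.HTTPError the same way):

    import pandas as pd

    # read_csv() raises instead of returning a status code, so wrap it
    # in try/except. "no_such_file.csv" is a made-up path just to trigger
    # the failure branch.
    source = "no_such_file.csv"

    try:
        df = pd.read_csv(source)
    except Exception as e:
        print("could not read:", e)
        df = None

    print(df)  # None when reading failed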
But if you want to use requests then you can check the status yourself, use io.StringIO to create a file-like object (a file in memory) from the response text, and pass it to read_csv().
import io
import requests
import pandas as pd

response = requests.get("https://people.sc.fsu.edu/~jburkardt/data/csv/addresses.csv")
print('status_code:', response.status_code)

#if response.status_code == 200:
if response.ok:
    df = pd.read_csv(io.StringIO(response.text))
else:
    df = None

print(df)
In the same way you can use io.StringIO when you create a web page that receives a CSV uploaded through an HTML <form>.
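For example, an uploaded file typically arrives as bytes; a sketch of decoding it and feeding it to read_csv() (the CSV content here is made up for demonstration):

    import io
    import pandas as pd

    # Bytes as they might arrive from a file uploaded via an HTML <form>
    # (hypothetical content for demonstration)
    uploaded_bytes = b"name,age\nAlice,30\nBob,25\n"

    # decode to text and wrap in a file-like object for read_csv()
    df = pd.read_csv(io.StringIO(uploaded_bytes.decode("utf-8")))
    print(df)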
As far as I know, read_csv(url) works in a similar way internally - it downloads the file data from the server (using urllib rather than requests) and then reads it from memory.