The code below works fine on Ubuntu 20.04. It checks a .csv file that contains URLs in column A; every URL is on its own row.
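For example, the input file might look like this (hypothetical contents, one URL per row):

https://example.com
https://example.org/some/page
https://example.net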
To use it, run the script by typing:
bash script.sh file_with_urls.csv response_code
for example: bash script.sh urls-to-check.csv 200
#!/usr/bin/env bash
# $1: file with one URL per line; $2: HTTP status code to match.
while read -r link; do
    # Fetch the URL, discard the body, and capture only the HTTP status code.
    response=$(curl --output /dev/null --write-out '%{http_code}' "$link")
    if [[ "$response" == "$2" ]]; then
        echo "$link"
    fi
done < "$1"
If I use it on Windows 10 with the WSL Ubuntu 20.04 distribution, I get a "curl: (3) URL using bad/illegal format or missing URL" error.
I'm a bit stuck with this...
You could echo them before you invoke the curl command, or print them out to a file after a successful call. Once you have the URL/culprit, you can see what's wrong with it (whether it's missing something or is illegal in some way). Without any additional information, there is no easy way for us to help you other than by guessing.

read -r link is reading the entire line (not just the first field) into link. See BashFAQ #1: "How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?" The CSV file might also have DOS/Windows line endings, which adds another pile of potential confusion. Adding set -x as the second line of the script (just after the shebang) will print an execution trace that'll help show problems like this:

+ read link
++ curl --output /dev/null --silent --write-out '%{http_code}' $'{full_url_here}/\r'
+ response=000
+ [[ 000 == \4\0\4 ]]

The trace makes the problem visible: a trailing \r (carriage return) left over from the Windows line endings is passed to curl as part of the URL, so curl rejects the URL and the script gets 000 back, which never matches the expected code.

When I do the sed as dan shows, the script works normally. I appreciate that you pointed to the sources, so I can understand what exactly happens and why.
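dan's sed-based answer isn't quoted above, but the fix being referred to is evidently stripping the carriage returns before the URLs reach curl. A minimal sketch, assuming the same positional arguments as the original script (GNU sed, as shipped with Ubuntu, understands \r):

#!/usr/bin/env bash
# Same loop as before, but DOS/Windows line endings are removed
# from each line before curl ever sees the URL.
while read -r link; do
    response=$(curl --output /dev/null --write-out '%{http_code}' "$link")
    if [[ "$response" == "$2" ]]; then
        echo "$link"
    fi
done < <(sed 's/\r$//' "$1")

Alternatively, convert the file once with sed -i 's/\r$//' file_with_urls.csv (or dos2unix file_with_urls.csv) and keep the original script unchanged.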