I'm using the script below to retrieve the HTML from a URL.
string webURL = @"https://nl.wiktionary.org/wiki/" + word.ToLower();
using (WebClient client = new WebClient())
{
    string htmlCode = client.DownloadString(webURL);
}
The variable word can be any word. When there is no wiki page for the word being retrieved, the code ends in an error with code 404, while retrieving the same URL in a browser opens a wiki page saying there is no page for this item yet.
What I want is for the code to always get the HTML, even when the wiki page says there is no info yet. I do not want to suppress the 404 error with a try/catch.
Does anyone have an idea why this is not working with WebClient?
Use HttpClient instead of WebClient. WebClient.DownloadString throws a WebException for any non-success status code such as 404, whereas HttpClient.GetAsync returns the response object regardless of the status code, so you can still read the HTML of the "no page yet" page without a try/catch.
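A minimal sketch of that approach, assuming a console app with an async Main; the word "fiets" is just a placeholder for whatever word you look up. GetAsync only throws for network-level failures, not for HTTP error statuses, so the body of the 404 page is still available:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        string word = "fiets"; // placeholder word, any word works here
        string webURL = "https://nl.wiktionary.org/wiki/" + word.ToLower();

        using (HttpClient client = new HttpClient())
        {
            // GetAsync does not throw for an HTTP error status such as 404;
            // the HttpResponseMessage is returned either way, so the body of
            // the "no page yet" page can still be read.
            HttpResponseMessage response = await client.GetAsync(webURL);
            string htmlCode = await response.Content.ReadAsStringAsync();

            Console.WriteLine(response.StatusCode); // NotFound for a missing word
            Console.WriteLine(htmlCode.Length);     // the HTML is still there
        }
    }
}

In a real application you would normally reuse a single HttpClient instance instead of creating one per request, but for a one-off lookup the pattern above is fine.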