I need to parse a large number of URLs with Python 3's urllib to retrieve a guid parameter. Some URLs contain a fragment, which prevents the parameters from being returned.
import urllib.parse

url = "https://zzz.com/index.html#viewer?guid=6a755e6d-4eae&Link=true&psession=true"
parse_response = urllib.parse.urlsplit(url)
print("The parsed url components = {}".format(parse_response))
The parsed url components = SplitResult(scheme='https', netloc='zzz.com', path='/index.html', query='', fragment='viewer?guid=6a755e6d-4eae&Link=true&psession=true')
So urllib rightly sees the "#" and stores the rest of the URL as the fragment, which means the parameters never appear in `query`. What is the best way to process URLs both with and without fragments?
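One possible approach (a sketch, not the only way): check the query string first, and if the guid isn't there, parse the part of the fragment after its own "?" with `urllib.parse.parse_qs`. The helper name `get_guid` below is made up for illustration.

```python
import urllib.parse

def get_guid(url):
    """Return the guid parameter from a URL, whether it sits in the
    query string or inside the fragment (e.g. '#viewer?guid=...')."""
    parts = urllib.parse.urlsplit(url)
    # Normal case: the parameters are in the query string.
    params = urllib.parse.parse_qs(parts.query)
    if "guid" not in params and "?" in parts.fragment:
        # Fallback: the fragment looks like 'viewer?guid=...', so
        # parse everything after the first '?' as a query string.
        params = urllib.parse.parse_qs(parts.fragment.split("?", 1)[1])
    # parse_qs returns lists of values; take the first, or None if absent.
    return params.get("guid", [None])[0]

print(get_guid("https://zzz.com/index.html#viewer?guid=6a755e6d-4eae&Link=true&psession=true"))
# 6a755e6d-4eae
print(get_guid("https://zzz.com/index.html?guid=6a755e6d-4eae"))
# 6a755e6d-4eae
```

This keeps `urlsplit`'s (correct) handling of the "#" intact and only falls back to the fragment when the query string has no guid, so the same function works for both URL shapes.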