As already noted in the comments, it depends on the website/server whether you can request only part of a page (for example via an HTTP Range header). For an ordinary website I would not count on it.
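If you want to check whether the server cooperates, a minimal sketch could look like this (the URL and the byte range are just placeholders; a server that honours Range answers with 206 Partial Content, one that ignores it sends the full page with 200):

import requests

link = "https://example.com/listing"  # hypothetical URL

# Ask for only the first 64 KiB of the page.
r = requests.get(link, headers={"Range": "bytes=0-65535"})

if r.status_code == 206:
    # Server honoured the Range header; we only got a slice of the page.
    print("partial content supported:", "data=sold" in r.text)
else:
    # 200 means the Range header was ignored and the whole page was sent.
    print("server ignored Range, status:", r.status_code)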
If the page is really large, the only other way I can think of to speed up the search is to process the data as it arrives instead of waiting for the whole download. When you call requests.get(link), the entire page is fetched before you can process any of it. You could try
r = requests.get(link, stream=True)
instead, and then iterate over the response line by line (iter_lines() yields bytes, so compare against a bytes literal):
for line in r.iter_lines():
    if line and b'data=sold' in line:
        print("hooray")
Of course you could also work on the raw stream and skip x bytes at a time, use the aiohttp library, ... but it would help if you gave some more information about your problem.
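For completeness, a rough aiohttp-based sketch of the same streaming idea (the URL and chunk size are assumptions; it reads the body in chunks and stops as soon as the marker shows up):

import asyncio
import aiohttp

LINK = "https://example.com/listing"  # hypothetical URL

async def contains_sold(url: str) -> bool:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            buffer = b""
            # Stream the body chunk by chunk instead of reading it all at once.
            async for chunk in resp.content.iter_chunked(8192):
                # Keep a small overlap so the marker is found even if it
                # spans two chunks.
                buffer = buffer[-16:] + chunk
                if b"data=sold" in buffer:
                    return True
    return False

print(asyncio.run(contains_sold(LINK)))

Whether this is actually faster than plain requests depends on where the marker sits in the page; if it only appears near the end, you still end up downloading almost everything.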