My dynamic search pages are being flagged by Google as duplicate content ("Alternate page with proper canonical tag") even though the content on each page is different. Each page lives at a single URL with the structure:
/search?description=&location=
Each unique page has unique meta tags. Since my site's search requires a location input to work, any page without a location parameter carries a noindex robots meta tag. Currently the canonical URL on every page is set to just /search without any parameters, which always displays an error page.
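For reference, the head of one of these pages currently looks roughly like this (the domain, titles, and the description/location values are made up for illustration):

```html
<!-- Head of /search?description=plumber&location=atlanta
     ("plumber", "atlanta", and example.com are illustrative values) -->
<head>
  <title>Plumber results near Atlanta | example.com</title>
  <meta name="description" content="Search results for 'plumber' near Atlanta.">

  <!-- Every results page currently points its canonical at the bare /search URL -->
  <link rel="canonical" href="https://example.com/search">

  <!-- Pages called without a location parameter get this instead -->
  <!-- <meta name="robots" content="noindex"> -->
</head>
```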
The dynamic content on the different pages renders correctly when fetched with the Googlebot user agent, and the unique meta tags seem to be recognized. Does anyone know why Google might treat search pages that display unique results as duplicates? Sorry if I'm not making much sense, I'm still learning.
Add Disallow: /search to robots.txt if you don't want your site penalized. Google never wants to index search results, no matter how unique they are. See "Search results in search results" by Matt Cutts.

Some sites put a Disallow: /results rule in robots.txt, which prevents their search results from getting crawled and indexed. Now picture a results URL like https://www.yelp.com/search?find_desc=viagra&find_loc=Atlanta%2C+GA showing up in Google: Google has penalized sites for letting pages like that get indexed.
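For concreteness, a minimal robots.txt for the setup in the question (assuming all of the search pages live under the /search path) would be:

```
# robots.txt: keep all crawlers away from any URL whose path starts with /search
User-agent: *
Disallow: /search
```

Disallow matches by path prefix, so this also covers /search?description=&location= URLs with any parameter values. One caveat: robots.txt stops crawling, not indexing, so URLs Google has already discovered may linger until they are recrawled with a noindex tag, which requires them to stay crawlable while that tag is picked up.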