Selenium with Python 2.7 on Windows 10, driving Chrome (both as a script and from the Python REPL), fails to find an element by partial link text, and I'm not sure why. When I look at the source of the page in question, there is only one instance of WO20 in the entire page, and it is inside a link, yet Selenium raises NoSuchElementException.
Here's an example (the WO20 comes just after the href):
<a id="resultTable:0:resultListTableColumnLink" name="resultTable:0:resultListTableColumnLink" href="detail.jsf?docId=WO2015102036&recNum=1&office=&queryString=FP%3A%28JP2014005719%29&prevFilter=&sortOption=Pub+Date+Desc&maxRec=3" target="_self"><span class="notranslate"> WO/2015/102036</span></a>
The page has a few other links, but the one I need is the only one containing the character combination WO20, so in theory it should be easy to identify. My guess is that Selenium isn't recognizing this as a link, which is why partial_link_text isn't working. I have tried XPath (successfully), but the problem is that it returns the n-th link in the table, and the links I need (I'm processing multiple documents) aren't in a fixed position.
The XPath I used was, I think, elem = driver.find_element_by_xpath('//*[@id="resultTable:0:resultListTableColumnLink"]'). I've since replaced it in the script with the other answerer's suggestion, so the exact line is gone. My original code did work, in that it identified an element, but only by its position in the table, not by the content of the link.
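For completeness, here is that positional version reconstructed as a standalone snippet (same placeholder URL as above; the hard-coded row index 0 in the id is what ties it to a fixed position in the table):

from selenium import webdriver

results_url = "https://example.org/search/results"  # placeholder for the actual results page

driver = webdriver.Chrome()
driver.get(results_url)

# Works, but only because it targets row 0 of the result table by id;
# the document I actually need is not always in that row.
elem = driver.find_element_by_xpath('//*[@id="resultTable:0:resultListTableColumnLink"]')
print(elem.get_attribute("href"))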