
I got into automating tasks on the web using Python. I have tried requests/urllib3/requests-html, but they don't get me the right elements, because they only fetch the raw HTML (not the version updated by JavaScript). Some people recommended Selenium, but it opens a browser through the webdriver. I need a way to get elements after they have been updated, and possibly after they have been updated a second time. The reason I don't want a browser to open is that I'm running my script on a script-hosting service.

  • Can you please share a minimal reproducible example? Where is some code and a test URL/HTML? Commented Nov 25, 2018 at 17:32
  • A little late, but I found woob.tech interesting: a full browser without a webdriver or local browser install. Commented Sep 17, 2021 at 17:49

2 Answers


Here is my solution to your problem.

Beautiful Soup doesn't mimic a client; JavaScript is code that runs on the client. With plain Python we just make a request to the server and get the server's response, which includes the JavaScript, but it is the browser that reads and runs that JavaScript. So we need something that does the same. There are many ways to do this: if you're on Mac or Linux you can set up dryscrape, or we can do essentially what dryscrape does ourselves with PyQt4.

    import sys
    from PyQt4.QtGui import QApplication
    from PyQt4.QtCore import QUrl
    from PyQt4.QtWebKit import QWebPage
    import bs4 as bs

    class Client(QWebPage):
        """Load a page in a headless WebKit instance and wait for it to finish."""

        def __init__(self, url):
            self.app = QApplication(sys.argv)
            QWebPage.__init__(self)
            self.loadFinished.connect(self.on_page_load)
            self.mainFrame().load(QUrl(url))
            self.app.exec_()  # block until on_page_load quits the event loop

        def on_page_load(self):
            self.app.quit()

    url = 'https://pythonprogramming.net/parsememcparseface/'
    client_response = Client(url)
    source = client_response.mainFrame().toHtml()  # HTML *after* JavaScript has run
    soup = bs.BeautifulSoup(source, 'lxml')
    js_test = soup.find('p', class_='jstest')
    print(js_test.text)

Just in case you wanted to make use of dryscrape:

    import dryscrape
    import bs4 as bs

    sess = dryscrape.Session()
    sess.visit('https://pythonprogramming.net/parsememcparseface/')
    source = sess.body()  # rendered HTML, after JavaScript has run

    soup = bs.BeautifulSoup(source, 'lxml')
    js_test = soup.find('p', class_='jstest')
    print(js_test.text)

3 Comments

I already tried PyQt4, but I always get a No module named PyQt4 error.
pip install PyQt4 works like a charm for me.

I would recommend that you look into the --headless option in webdriver, but that will probably not work for you: it still requires the browser to be installed so webdriver can use the browser's rendering engine ("headless" just means the UI is not started). Since your hosting service will probably not have the browser executables installed, this will not work.

Without a rendering engine you will not get the rendered (and JS-enhanced) web page; that simply cannot be done in pure Python.
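To illustrate the point, here is a self-contained sketch (the HTML is a mock-up loosely modeled on the parsememcparseface test page, not fetched from it): a plain HTTP client receives the markup below verbatim, and since nothing executes the script, a parser only ever sees the placeholder text.

```python
import bs4 as bs

# What the server sends; the <script> never runs outside a browser.
raw_html = """
<p class="jstest" id="yesnojs">y u bad tho?</p>
<script>
  document.getElementById('yesnojs').innerHTML = 'Look at you shinin!';
</script>
"""

soup = bs.BeautifulSoup(raw_html, 'html.parser')
print(soup.find('p', class_='jstest').text)  # the un-rendered placeholder
```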

One option would be a service like Sauce Labs (I am not affiliated, just a happy user), which runs browsers on its own infrastructure and lets you control them via an API. You can then run Selenium scripts that fetch the HTML/JS content via RemoteWebDriver and process the results on your own server.

