I have a script that I use to scrape data from websites with Selenium (SeleniumBasic in VBA):

Sub Body_Building()
    Dim driver As New WebDriver, post As Object
    Dim i As Long

    With driver
        .Start "chrome", "http://www.bodybuildingwarehouse.co.uk"
        .Get "/optimum-nutrition?limit=all"
    End With

    On Error Resume Next   'skip any post that is missing one of the elements
    For Each post In driver.FindElementsByClass("grid-info")
        i = i + 1
        Cells(i, 1) = post.FindElementByClass("product-name").Text   'product name
        Cells(i, 2) = post.FindElementByXPath(".//span[@class='regular-price']//span[@class='price']|.//p[@class='special-price']//span[@class='price']").Text   'regular or special price
    Next post
End Sub

Would it be possible to scrape data from this website using the same or a similar technique, so that the outcome looks like the snapshot below?

[snapshot of the desired output]

Please see the amended VBA below, which now matches the desired outcome. Thank you, SMth80.

Sub optigura_scraper_v2()
    Dim driver As New ChromeDriver
    Dim elems As Object, post As Object
    Dim i As Long, n As Long

    driver.Get "https://www.optigura.com/uk/product/gold-standard-100-whey/"
    [A1:D1].Value = [{"Name","Flavor","Size","Price"}]

    Set elems = driver.FindElementsByXPath("//span[@class='img']/img")
    i = 2

    For n = 1 To elems.Count
        'click the nth size thumbnail and give the page a moment to update
        driver.FindElementsByXPath("//span[@class='img']/img")(n).Click
        driver.Wait 1000
        'one row per flavour, repeating name, size and price on each row
        For Each post In driver.FindElementsByXPath("//div[@class='colright']//ul[@class='opt2']//label")
            Cells(i, 1) = driver.FindElementByXPath("//h1[@itemprop='name']").Text
            Cells(i, 2) = post.Text
            Cells(i, 3) = Split(driver.FindElementByXPath("//li[@class='active']//span[@class='img']/img").Attribute("alt"), "-")(1)
            Cells(i, 4) = driver.FindElementByXPath("//span[@class='price']").Text
            i = i + 1
        Next post
    Next n
End Sub

1 Answer

Check it out. This is certainly not the best technique, but it will serve your purpose. By the way, the scraper parses the data exactly as it is displayed on that page.

Sub optigura_scraper()
    Dim driver As New ChromeDriver
    Dim elems As Object, post As Object
    Dim i As Long, N As Long

    driver.Get "https://www.optigura.com/uk/product/gold-standard-100-whey/"
    [A1:D1].Value = [{"Name","Price","Size","Flavor"}]

    Set elems = driver.FindElementsByXPath("//span[@class='img']/img")
    i = 2

    For N = 1 To elems.Count
        'click the Nth thumbnail and give the page a moment to update
        driver.FindElementsByXPath("//span[@class='img']/img")(N).Click
        driver.Wait 1000
        'name, price and size are written once per thumbnail;
        'size comes from the alt text of the active thumbnail
        Cells(i, 1) = driver.FindElementByXPath("//h1[@itemprop='name']").Text
        Cells(i, 2) = driver.FindElementByXPath("//span[@class='price']").Text
        Cells(i, 3) = Split(driver.FindElementByXPath("//li[@class='active']//span[@class='img']/img").Attribute("alt"), "-")(1)
        'each flavour option then gets its own row in column D
        For Each post In driver.FindElementsByXPath("//div[@class='colright']//ul[@class='opt2']//label")
            Cells(i, 4) = post.Text
            i = i + 1
        Next post
    Next N
End Sub
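
One fragile spot is the size cell: Split(...)(1) assumes the active thumbnail's alt text contains a hyphen with the size after it. A small helper along these lines could guard against alt strings that do not match that pattern. SizeFromAlt is a hypothetical name, and the "name - size" alt format is only an assumption inferred from the Split call above, so adjust it to whatever the page actually serves.

Function SizeFromAlt(ByVal altText As String) As String
    'Hypothetical helper: returns the text after the first "-" in the alt
    'attribute, trimmed, or an empty string if no "-" is present.
    'Assumes the alt looks like "Product Name - Size" (see Split(...)(1) above).
    Dim parts() As String
    parts = Split(altText, "-")
    If UBound(parts) >= 1 Then
        SizeFromAlt = Trim$(parts(1))
    Else
        SizeFromAlt = vbNullString
    End If
End Function

With that in place, the size line would read:

    Cells(i, 3) = SizeFromAlt(driver.FindElementByXPath("//li[@class='active']//span[@class='img']/img").Attribute("alt"))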

Comments

I'm just wondering: would it be possible to return the size for each item? In general it looks very good.
Also, what do you use to read the web page source? F12 in Chrome, or the Ctrl+U option?
Right-clicking and selecting Inspect Element.
Hey, Martin. Check the updated script; size is now included. It was a bit difficult to deal with.
Thank you very much for helping me out with Selenium. I have amended the VBA a little so that it matches the desired outcome.