1

I am practicing scraping websites, so I chose this website: https://www.dunzo.com/bangalore/nilgiris-supermarket-koramangala-ejipura

Here is the code that I am using:

url="https://www.dunzo.com/bangalore/nilgiris-supermarket-koramangala-ejipura"
r=requests.get(url)
htmlcontnent=r.content 
soup=BeautifulSoup(htmlcontnent,'html.parser')
elem=soup.select('.hozIhp')
print(elem)

Now I am getting this output:

[<p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Britannia Sweet Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Britannia Sweet Bun</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiris Cheese Garlic Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiris Fruit Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiris Pav Bun</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiri's Broken Wheat Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiri's Garlic Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiri's Multi Grain Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiri's Whole Wheat Bread</p>, <p class="sc-1gu8y64-0 dlNpIS sc-1twyv6b-1 hozIhp">Nilgiri's Whole Wheat Brown Bread</p>]

So the output came as a list. Now I want to extract just the item names, such as Britannia Sweet Bread, Britannia Sweet Bun, Nilgiris Cheese Garlic Bread, etc. I tried a few things, such as adding .text with soup, but it didn't work. Can someone please help me with how to do that?
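For example, something like this (my guess at the failing attempt) does not work, because select() returns a list of tags rather than a single tag:

# Raises AttributeError: ResultSet object has no attribute 'text',
# because elem is a list of <p> tags, not a single tag
print(elem.text)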

4 Answers

3

Try this:

url="https://www.dunzo.com/bangalore/nilgiris-supermarket-koramangala-ejipura"
r=requests.get(url)
htmlcontnent=r.content 
soup=BeautifulSoup(htmlcontnent,'html.parser')
elem=soup.select('.hozIhp')
print(*[el.text for el in elem], sep="\n")

Output:

Britannia Sweet Bread
Britannia Sweet Bun
Nilgiris Cheese Garlic Bread
Nilgiris Fruit Bread
Nilgiris Pav Bun
Nilgiri's Broken Wheat Bread
Nilgiri's Garlic Bread
Nilgiri's Multi Grain Bread
Nilgiri's Whole Wheat Bread
Nilgiri's Whole Wheat Brown Bread
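If you would rather keep the names for later use instead of only printing them, a small variation of the same idea (the names variable here is just an illustrative choice) would be:

# Collect the item names into a plain list of strings
names = [el.text for el in elem]
print(names)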

1 Comment

Thanks for the solution. If you visit the website, you can see that there are many more items in the same list, such as milk, eggs, etc. Can you please guide me on extracting those item names?
2
url="https://www.dunzo.com/bangalore/nilgiris-supermarket-koramangala-ejipura"
r=requests.get(url)
htmlcontnent=r.content 
soup=BeautifulSoup(htmlcontnent,'html.parser')
elem=soup.select('.hozIhp')
#add to your code
for item in elem:
    print(item.text)

2 Comments

Thanks for the solution. If you visit that website, you can see that there are many more items in the same list, such as milk, eggs, etc. Can you please guide me on extracting those item names?
I have added another answer, as it is too long for a comment.
2

The issue you are having is that the page loads its content dynamically, so requests cannot fetch the fully rendered page.

To fix this, you'll need a little more code. First, install Selenium with pip install selenium, then download a compatible Google Chrome webdriver from https://chromedriver.chromium.org/downloads (you must have Google Chrome installed on your computer) and extract the webdriver into the same folder as your Python script.

Then run this code

from bs4 import BeautifulSoup
from selenium import webdriver
import time
 
browser = webdriver.Chrome(executable_path="chromedriver")

url="https://www.dunzo.com/bangalore/nilgiris-supermarket-koramangala-ejipura"
browser.get(url)

# the browser will scroll down 6 times to load the remaining content
for i in range(6):
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    # wait 5 seconds for the content to load (adjust this value to your internet speed)
    time.sleep(5)

html = browser.page_source
soup=BeautifulSoup(html,'html.parser')
elem=soup.select('.hozIhp')
for item in elem:
    print(item.text)
    
browser.close()

Output:

Britannia Sweet Bread
Britannia Sweet Bun
Nilgiris Cheese Garlic Bread
Nilgiris Fruit Bread
Nilgiris Pav Bun
Nilgiri's Broken Wheat Bread
Nilgiri's Garlic Bread
Nilgiri's Multi Grain Bread
Nilgiri's Whole Wheat Bread
Nilgiri's Whole Wheat Brown Bread
Nilgiri's Milk Bread
Nilgiri's Sandwich Bread
Bajaj White Eggs Gold Pack
Suguna Healthy Eggs
Eggs
Nandini - Shubham Pasteurized Standardized Milk
Nandini Good Life Slim Milk
Nilgiris Lite Milk
Nilgiris Double Toned Milk
Nilgiris Full Cream Milk
Nilgiri's Rich Milk
Amul Premium Dahi
Amul Cheese Slices A+
Cavin's Curd Pouch
Epigamia Mishti Doi
Id Natural Curd
Milky Mist Mango Yogurt
Nilgiris Curd Lite
Nilgiris Low Fat Probiotic Curd
Nilgiris Paneer
Nestle A+ Nourish Dahi
Nilgiris Natural Curd Set
Nilgiris Butter Milk
Nilgiri's Toned Milk Curd Pouch
Nilgiri's Lite Curd Pouch
Nilgiri's Malai Paneer
Soulfull Choco And Vanilla Fills - Ragi Bites
Soulfull Choco Fills - Ragi Bites
Soulfull Vanilla Fills - Ragi Bites
Soulfull Strawberry Fills - Ragi Bites
Soulfull Diet Millet Muesli
Soulfull Fruit & Nut Millet Muesli
Soulfull Crunchy Millet Muesli
Soulfull Baked Desi Muesli - Chatpata
Soulfull Baked Desi Muesli - Masala
Kellogg's Corn Flakes
Fortune Mini Soya Chunks
Kellogg's Chocos Moon And Stars
Soulfull Millet Smoothix - Cocoa Lite Protein Drink Sachets
Soulfull Millet Smoothix - Almond Protein Drink Sachets
Soulfull Millet Smoothix - Almond Protein Drink Sachets
Soulfull Millet Smoothix - Cocoa Lite Protein Drink Sachets
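A note in case the webdriver.Chrome(executable_path=...) call above fails for you: recent Selenium 4 releases no longer accept that argument. A minimal sketch of the equivalent setup, assuming Selenium 4 and a chromedriver binary next to the script, would be:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Selenium 4 style: pass the driver path through a Service object
service = Service("./chromedriver")
browser = webdriver.Chrome(service=service)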


0

As explained in the documentation, you can use get_text() to extract the text from a document or a tag.
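For example, applied to the elements selected in the question (reusing the soup object from the question's code):

# get_text() works on each tag returned by select();
# strip=True trims surrounding whitespace
for item in soup.select('.hozIhp'):
    print(item.get_text(strip=True))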

