I want to scrape the full page to get the account links, but the problems are:
I need to click the Load More button many times to get the full list of accounts to scrape.
There is a popup which appears occasionally, so how do I detect it and click its cancel button?
If possible I would prefer to scrape the full page with requests only, but since I have to click buttons I thought of using Selenium.
Here is my code:
import time
import requests
from bs4 import BeautifulSoup
import lxml
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://society6.com/franciscomffonseca/followers')
time.sleep(3)
try:
    driver.find_element_by_class_name('bx-button').click()  # button to remove popup
except:
    print("no popups")
driver.find_element_by_class_name('loadMore').click()  # to click load more button
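A rough sketch of what I think the clicking loop should look like, reusing driver and time from the snippet above (the class names bx-button and loadMore come from my code; the timeouts and the "button disappears" stop condition are assumptions):

from selenium.common.exceptions import NoSuchElementException, ElementClickInterceptedException

def dismiss_popup(driver):
    """Click the popup's cancel button if the popup is currently shown."""
    try:
        driver.find_element_by_class_name('bx-button').click()
        return True
    except NoSuchElementException:
        return False

# Keep clicking Load More until the button can no longer be found (assumed stop condition).
while True:
    dismiss_popup(driver)
    try:
        driver.find_element_by_class_name('loadMore').click()
    except NoSuchElementException:
        break  # no Load More button left, assume the full list is loaded
    except ElementClickInterceptedException:
        dismiss_popup(driver)  # a popup probably blocked the click; retry on the next pass
    time.sleep(2)  # assumed delay for the next batch of followers to load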
I am using a test page which has 10K followers and I want to scrape their follower account links. I have already coded the scraper, so I just need to see the full webpage:
https://society6.com/franciscomffonseca/followers
Scraping code just in case:
r2 = requests.get('https://society6.com/franciscomffonseca/followers')
print(r2.status_code)
r2.raise_for_status()
soup2 = BeautifulSoup(r2.content, "html.parser")
a2_tags = soup2.find_all(attrs={"class": "user"})
#attrs={"class": "user-list clearfix"}
follow_accounts = []
for a2 in a2_tags:
    follow_accounts.append('https://society6.com' + a2['href'])
print(follow_accounts)
print("number of accounts scraped: " + str(len(follow_accounts)))
HTML of the Load More button:
<button class="loadMore" onclick="loadMoreFollowers();">Load More</button>
Or could I use a for loop and increase the page number in page=1 by 1 on each iteration?
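If the follower list really is paginated with a page query parameter (that is an assumption on my part; I have not confirmed what loadMoreFollowers() actually requests), a requests-only version could look roughly like this:

import requests
from bs4 import BeautifulSoup

follow_accounts = []
page = 1
while True:
    # Assumed URL pattern; the real parameter name used by loadMoreFollowers() may differ.
    r = requests.get('https://society6.com/franciscomffonseca/followers',
                     params={'page': page})
    r.raise_for_status()
    soup = BeautifulSoup(r.content, "html.parser")
    tags = soup.find_all(attrs={"class": "user"})
    if not tags:
        break  # assumed stop condition: an empty page means there are no more followers
    for tag in tags:
        follow_accounts.append('https://society6.com' + tag['href'])
    page += 1

print("number of accounts scraped: " + str(len(follow_accounts)))

If every page just returns the same first batch, the followers are probably loaded through a separate XHR endpoint that loadMoreFollowers() calls, which should show up in the browser network tab.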