
I'm trying to scrape job listings from a career page. I'm trying to click the "load more" button, but I can't figure it out. I was wondering if someone could help me out. I keep getting an error saying "Element is not clickable at point x. Other element would receive the click".

This is the link: https://www.bain.com/careers/find-a-role/. The code to change is the try block:

import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def button_company_name(driver, page, outer_loop_break, all_links):
    """
    Handles pagination by clicking the "next page"/"load more" button.

    Parameters:
    driver (WebDriver): The Selenium WebDriver instance.
    page (int): The current page number. (optional)
    outer_loop_break (bool): Flag to indicate when to stop scraping.
    all_links (list): List of all links on the current page.

    Returns:
    tuple: Updated page number and outer_loop_break flag.
    """

    try:
        # Define the XPath
        load_more_button_xpath = '//*[@id="role-search-page-react"]/div/div/div[3]/a'

        # Wait for the load more button to be clickable
        load_more_button = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.XPATH, load_more_button_xpath))
        )

        # Trying to bring button into view before clicking on it
        driver.execute_script("arguments[0].scrollIntoView();", load_more_button)
        load_more_button.click()

        # Wait for the page to load
        time.sleep(5)


    except Exception as e:
        print(f"Error occurred while trying to click the 'next page' button: {e}")
        outer_loop_break = True
    return page + 1, outer_loop_break
  • This website has a cookie banner at the bottom. When you scroll to the button, it scrolls only until the button is just within the viewport, leaving it obscured by the banner. Selenium's click() emulates an actual user click, so you get this error. The common workaround is to click the element with JavaScript, but I would go with Andrej's approach. Commented Jun 13, 2024 at 3:34
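One way around the overlap the comment describes is to center the element in the viewport before the native click, so a banner fixed to the bottom edge no longer covers it. A minimal sketch (the helper name is mine; it assumes the obscuring element is pinned to a viewport edge):

```python
def scroll_to_center(driver, element):
    # scrollIntoView({block: 'center'}) centers the element in the viewport;
    # the default scrollIntoView() stops as soon as the element is just
    # inside the viewport, which leaves it under a bottom-fixed banner.
    driver.execute_script(
        "arguments[0].scrollIntoView({block: 'center'});", element)
```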

3 Answers


You can use their pagination API to get more results, e.g.:

import requests

url = "https://www.bain.com/en/api/jobsearch/keyword/get?start={}&results=10&filters=&searchValue="

for page in range(0, 3): # <-- increase number of pages here
    data = requests.get(url.format(page)).json()
    for r in data["results"]:
        print(r["JobTitle"])

Prints:

Expert Senior Manager, Machine Learning Engineer
Expert Manager, Machine Learning Engineer
Lead, Machine Learning Engineer
Senior Machine Learning Engineer
Expert Senior Manager, Machine Learning Engineer
Senior Machine Learning Engineer
Senior Software Engineer
Director, Global Financial Accounting
Director, Global Financial Accounting
Analyst /Sr. Analyst, APAC Finance & Staffing (FP&A)
Analyst, IT Support
Assistant, Office Services (m/w/d) in München
Assistant, Office Services (m/w/d) in Wien (Teilzeit)
Facilities Assistant (Contract/Temporary Full-Time)
Associate - Data Engineer
Associate - Data Science
Associate - ENR
Associate - Human Resources Business Partner
Associate - S&M Transformation CoE
Associate - Tools Specialist, Financial Services COE
Associate – Advanced Manufacturing & Industrial Services
Associate (B2C) - Pricing CoE
Associate (Data Engineering) – Pyxis, Data Business CoE
Associate (Supply Chain) - Performance Improvement CoE
Associate / Coordinator Payroll
Associate Consultant
Associate Consultant Intern
Associate Consultant Trainee
Business Presentation Designer (Associate), Global Business Services KL
IT Help Desk Support Associate

1 Comment

Note that you can set results=100, or any number, to get all the results in one call.
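If you don't know the total count up front, the same endpoint can be paged until it stops returning items. A sketch using only the standard library (the stop-on-empty-"results" behaviour is an assumption about this API; the fetcher is passed in so the loop can be exercised without network access):

```python
import json
from urllib.request import urlopen

URL = ("https://www.bain.com/en/api/jobsearch/keyword/get"
       "?start={}&results=10&filters=&searchValue=")

def fetch_page(page):
    # One request per page; stdlib equivalent of requests.get(...).json()
    with urlopen(URL.format(page)) as resp:
        return json.load(resp)

def collect_titles(fetch, max_pages=100):
    """Keep paging until the API returns an empty "results" list (assumed)."""
    titles = []
    for page in range(max_pages):
        results = fetch(page).get("results", [])
        if not results:
            break
        titles.extend(r["JobTitle"] for r in results)
    return titles

# titles = collect_titles(fetch_page)  # hits the live API
```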
Element is not clickable at point x. Other element would receive the click

The reason you are getting the above exception is that the Load More button is overlapped by another element, which prevents your script from clicking it. That overlapping element is the cookie consent pop-up, and you need to dismiss it before clicking the Load More button.

Here is working code, with explanation, that clicks Load More in a loop until all the job listings are loaded.

import time
from selenium import webdriver
from selenium.common import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.maximize_window()
driver.get("https://www.bain.com/careers/find-a-role/")
wait = WebDriverWait(driver, 10)

# The next three lines switch into the iframe, click the Accept All Cookies button, and switch back out
wait.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe[@title='TrustArc Cookie Consent Manager']")))
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='ACCEPT ALL COOKIES']"))).click()
driver.switch_to.default_content()

# Click Load More in a loop until all jobs are loaded
while True:
    try:
        loadMore_btn = wait.until(EC.visibility_of_all_elements_located((By.XPATH, "//a[text()='Load more']")))
        if len(loadMore_btn) == 1:
            loadMore_btn[0].click()
            time.sleep(2)
        else:
            break
    except TimeoutException:
        break

# Now that all the job listings are loaded into the page, put your scraping code here

time.sleep(20)
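For the "put your scraping code here" step, one option is to hand driver.page_source to a parser once everything is loaded. A stdlib-only sketch (the assumption that each listing is an anchor whose href contains "/careers/" is mine; check the real markup before relying on it):

```python
from html.parser import HTMLParser

class JobLinkParser(HTMLParser):
    """Collects hrefs of anchors that look like job listings."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if "/careers/" in href:   # assumed URL pattern for listings
                self.links.append(href)

def extract_job_links(page_source):
    parser = JobLinkParser()
    parser.feed(page_source)
    return parser.links

# after the loop above: links = extract_job_links(driver.page_source)
```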



You can try clicking through JavaScript:

driver.execute_script("arguments[0].click();", WebDriverWait(driver, 5).until(
            EC.presence_of_element_located((By.XPATH, your_xpath))))

This approach avoids click failures caused by element occlusion (another element covering the target).

