Using the following packages:

require(stringr)
require(RCurl)
require(XML)

I am able to connect to the desired web page and extract the information I need:
> url="https://www.realtor.com/realestateagents/33415/pg-1" doc =
> getURLContent(url, verbose = TRUE) #gets the doc , verbose = show me
> me what you are doing) doc = htmlParse(doc)
> # name = getNodeSet(doc, "//div[@itemprop = 'name']") name = sapply(name, xmlValue)
> # phone = getNodeSet(doc, "//div[@itemprop= 'telephone']") phone = sapply(phone, xmlValue)
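For a single page this gives me two parallel vectors, and combining them into the target shape is straightforward (assuming both XPath queries return one value per agent; onePage is just a name I'm using here):

onePage = data.frame(Name = name, Phone = phone, stringsAsFactors = FALSE)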
I generated a list of URLs:
urlList = c("https://www.realtor.com/realestateagents/33415/pg-1",
"https://www.realtor.com/realestateagents/33415/pg-2")
urlList = as.list(urlList)
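(Since the URLs differ only in the page number, the same list could also be built programmatically; paste0 is just one way to do it:)

urlList = as.list(paste0("https://www.realtor.com/realestateagents/33415/pg-", 1:2))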
I would like to loop over each URL, capture the same nodes, and place the results in a single data frame with two columns, Name and Phone.
I tried the following, without success:

Reduce(function(...) merge(..., all = TRUE),
       lapply(urlList, function(x) {
         d = htmlParse(getURLContent(x))
         d1 = getNodeSet(d, "//div[@itemprop = 'name']")
         name = sapply(d1, xmlValue)
         data.frame(urlList = x, Name = name)
       })) -> results
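Based on the single-page code, I would have expected something along these lines to work instead, stacking the per-page data frames with do.call(rbind, ...) rather than merge() (a sketch; scrapePage is a helper name I made up):

scrapePage = function(u) {
  d = htmlParse(getURLContent(u))
  data.frame(
    Name  = sapply(getNodeSet(d, "//div[@itemprop = 'name']"), xmlValue),
    Phone = sapply(getNodeSet(d, "//div[@itemprop = 'telephone']"), xmlValue),
    stringsAsFactors = FALSE
  )
}
results = do.call(rbind, lapply(urlList, scrapePage))

Is this the right direction, or is the Reduce/merge approach the way to go?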
Thank you for all your help.