Let's say I have a series of URL strings that I've imported into R.

url = c("http://www.mdd.com/food/pizza/index.html", "http://www.mdd.com/build-your-own/index.html",
        "http://www.mdd.com/special-deals.html", "http://www.mdd.com/find-a-location.html")

I want to parse these URLs to identify which page each one points to; for instance, url[3] should map to the special-deals page. For this example, let's say I have the following 'types' of pages.

xtype = c("deals","find")
dtype = c("ingredients","calories","chef")

Given these types, I want to take the url variable and map them together.

So I should end up with:

> df
                                           url  site
1     http://www.mdd.com/food/pizza/index.html dtype
2 http://www.mdd.com/build-your-own/index.html dtype
3        http://www.mdd.com/special-deals.html xtype
4      http://www.mdd.com/find-a-location.html xtype

I began this project by thinking that I'd need strsplit to break each URL apart; splitting the URLs would let me put together some if-else statements for performing this task. Efficient? No, but as long as it gets the job done. However, the following doesn't work to split the URLs:

Words = strsplit(as.character(url), " ")[[1]]  # splits on a space, but URLs contain no spaces
Words
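For reference, splitting on "/" (the separator URLs actually use) does pull the pieces apart; a minimal sketch:

url_parts <- strsplit(as.character(url), "/")
url_parts[[2]]
[1] "http:"          ""               "www.mdd.com"    "build-your-own"
[5] "index.html"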

Here are my main questions:

1. Is there a package for URL parsing in R?
2. How can I identify which page is being viewed from a long URL string?

EDIT:

What I'm asking is this: how can I figure out the 'specific page' from a URL string? So if I have "http://www.mdd.com/build-your-own/index.html", I want to know how I can extract just build-your-own.

  • It's not at all clear to me what you're trying to do. I think sapply(strsplit(as.character(url), "\\"), "[[", 1) may be what you're partially after. Commented Feb 25, 2014 at 15:31
  • In addition to Tyler Rinker's comment, it might be easier to use basename(url) to organize/index the pages. Commented Feb 25, 2014 at 15:54
  • What is the expected output for http://www.mdd.com/special-deals.html? Commented Feb 25, 2014 at 16:18
  • special-deals or special-deals.html Commented Feb 25, 2014 at 16:23

3 Answers

There's also the urltools package now, which is considerably faster than most other approaches:

url <- c("http://www.mdd.com/food/pizza/index.html", 
         "http://www.mdd.com/build-your-own/index.html",
         "http://www.mdd.com/special-deals.html", 
         "http://www.mdd.com/find-a-location.html")

urltools::url_parse(url)

##   scheme      domain port                      path parameter fragment
## 1   http www.mdd.com          food/pizza/index.html                   
## 2   http www.mdd.com      build-your-own/index.html                   
## 3   http www.mdd.com             special-deals.html                   
## 4   http www.mdd.com           find-a-location.html                   
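From the parsed path, the page name is everything before the first "/" or "."; a hedged sketch building on url_parse (the regex here is an assumption of mine, not part of urltools):

parsed <- urltools::url_parse(url)
page   <- sub("[./].*$", "", parsed$path)   # keep text before the first "/" or "."
page
## [1] "food"            "build-your-own"  "special-deals"   "find-a-location"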

You can use the parse_url function from the httr package to parse URLs. Regular expressions can be used to extract the relevant substring:

sub("(.+?)[./].+", "\\1", sapply(url, function(x) parse_url(x)$path, 
                                 USE.NAMES = FALSE))

# [1] "food"            "build-your-own"  "special-deals"   "find-a-location"

It's not exactly clear where you're headed with this, but here are a few ways to parse URLs.

Use the basename function

sapply(url, basename)
  http://www.mdd.com/food/pizza/index.html http://www.mdd.com/build-your-own/index.html 
                              "index.html"                                 "index.html" 
     http://www.mdd.com/special-deals.html      http://www.mdd.com/find-a-location.html 
                      "special-deals.html"                       "find-a-location.html" 

Use a prefix and strsplit

prefix <- "http://www.mdd.com/"
unlist(strsplit(url, prefix))
[1] ""                          "food/pizza/index.html"     ""                         
[4] "build-your-own/index.html" ""                          "special-deals.html"       
[7] ""                          "find-a-location.html"  

Use gsub

gsub(prefix, "", url)
[1] "food/pizza/index.html"     "build-your-own/index.html" "special-deals.html"       
[4] "find-a-location.html"     

To find which type of URL you're dealing with, you can use grep

xtype <- c("deals", "find")

> sapply(xtype, function(x) grep(x, url))
 deals  find 
     3     4 

And to find the specific page(s) from xtype:

> url[sapply(xtype, function(x) grep(x, url))]
 [1] "http://www.mdd.com/special-deals.html"   "http://www.mdd.com/find-a-location.html"
