I'm trying to use curl/wget to get the list of directory/file names available in a web server's directory listing.
For example, from the (randomly chosen) http://prodata.swmed.edu/download/, I'm trying to get:
bin
dev
etc
member
pub
usr
usr1
usr2
curl (curl http://prodata.swmed.edu/download/) gets me the whole HTML page, which I'd need to parse manually for all file/directory entries.
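For reference, the kind of manual parsing I'm hoping to avoid looks roughly like this (just a sketch; it assumes an Apache-style index page where each entry is an href, so the grep/sed patterns may need adjusting for other servers):

    # fetch the listing page, pull out the href targets, strip the quoting
    curl -s http://prodata.swmed.edu/download/ \
      | grep -o 'href="[^"]*"' \
      | sed 's/^href="//; s/"$//' \
      | grep -v '^?'

The last grep drops the "?C=N;O=D"-style column-sorting links that Apache adds to its index pages, leaving only the file/directory names.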
Is there a way to download only the names of the available files/directories with curl/wget, without installing an additional parser?