In theory, you could use just JavaScript for such a thing. On page load (or document ready, or whatever), the script could:
- Use `getElementsByTagName()` to get all links on the page.
- Do an `XMLHttpRequest` for the `href` of each same-domain link it finds.
- Look through each response for `<img>` tags, and preload each one.
- Look through each response for `<a>` tags, and repeat.
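The steps above can be sketched roughly like this. Note this is an illustration, not a drop-in script: `extractAttr` is a made-up regex helper to keep the parsing simple and the example self-contained; a real version would fetch each page and use `DOMParser` on the HTML instead.

```javascript
// Hypothetical helper: pull attribute values out of raw HTML with a regex.
function extractAttr(html, tag, attr) {
  const re = new RegExp(
    '<' + tag + '\\b[^>]*\\b' + attr + '\\s*=\\s*["\']([^"\']+)["\']',
    'gi'
  );
  return [...html.matchAll(re)].map(m => m[1]);
}

const preloaded = new Set();

async function crawlAndPreload(url, visited = new Set()) {
  if (visited.has(url)) return;          // don't fetch the same page twice
  visited.add(url);
  const html = await (await fetch(url)).text();  // fetch each same-domain page
  for (const src of extractAttr(html, 'img', 'src')) {
    if (!preloaded.has(src)) {
      preloaded.add(src);
      new Image().src = src;             // setting src starts the download
    }
  }
  for (const href of extractAttr(html, 'a', 'href')) {
    const link = new URL(href, url);
    if (link.origin === new URL(url).origin) {    // same-domain links only
      await crawlAndPreload(link.href, visited);  // repeat on each linked page
    }
  }
}
```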
However, it'd be more efficient to maintain or generate the list of image URLs used throughout the site on the server, and include that list as a JavaScript array in the script, to avoid all the crazy AJAX image finding.
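That approach could look something like the sketch below. `siteImageUrls` is a made-up name for the array your server or build step would emit into the page; only the preloading loop runs client-side.

```javascript
// Array emitted by the server/build step, listing every image on the site.
const siteImageUrls = [
  '/img/logo.png',
  '/img/banner.jpg',
  // ...generated from your templates or CMS, not discovered client-side
];

// Keep references to the Image objects so the browser doesn't discard them
// before the downloads finish.
function preloadAll(urls) {
  return urls.map(url => {
    const img = new Image();   // creating an Image and setting its src
    img.src = url;             // starts the download immediately
    return img;
  });
}

// Usage (in the browser): preloadAll(siteImageUrls);
```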
You'd want to include this script on every page of the site (you never know which page a user will land on first), but save a cookie when it runs, so that it doesn't re-run on every page a given user visits.
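The run-once guard might look like this sketch. The cookie name is made up, and `localStorage` would work just as well; the check is written as a pure function on the cookie string.

```javascript
// Has the preload already run for this visitor?
function hasPreloadRun(cookieString) {
  return cookieString
    .split(';')
    .some(part => part.trim().startsWith('sitePreloaded='));
}

// Record that it ran. Expires after ~30 days, so long-absent visitors
// (whose caches have likely been evicted) get a fresh preload.
function markPreloadRun() {
  document.cookie =
    'sitePreloaded=1; max-age=' + 60 * 60 * 24 * 30 + '; path=/';
}

// On every page:
// if (!hasPreloadRun(document.cookie)) { /* start preloading */ markPreloadRun(); }
```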
I'd think this through carefully though. Some users could be on internet connections where their data use is limited, or charged per megabyte. Are you sure all users want to download every image on your site just from visiting one page?
Finally, be clear about what you mean by "preload": do you want to actually make the browser download the images (e.g. with `new Image()`), or do you simply want to get the initial `src` properties of a bunch of `<img>` tags?
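The two interpretations differ a lot in cost, so here is a quick side-by-side sketch (function names are illustrative):

```javascript
// Interpretation 1: actually preload, i.e. make the browser download the
// file so it lands in the cache before it's needed.
function preload(url) {
  const img = new Image();
  img.src = url;        // triggers a network request
  return img;
}

// Interpretation 2: just read the src values of <img> tags already on the
// page; this touches the network not at all.
function collectImageSrcs(doc) {
  return [...doc.getElementsByTagName('img')].map(img => img.src);
}
```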