wget
Daniel Armstrong
dwarmstrong-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Fri May 26 20:15:10 UTC 2006
I am trying out wget for unattended downloads by attempting to
retrieve a series of video files, but it fails because wget first
tries to download the directory's index.html file, which, in this
particular case, does not exist.
The situation:
If I point Firefox to http://ocw.mit.edu/ans7870/7/7.012/f04/video/ I
get a blank page... I take this to mean there is no index.html file at
this location?
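(One quick way to double-check, I believe, is wget's --spider option,
which asks the server about a URL without saving anything:

wget --spider http://ocw.mit.edu/ans7870/7/7.012/f04/video/

...and it does report "404 Not Found" for this directory, the same
error the full transcript below shows.)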
If I use wget to download a single video file from this location:
wget http://ocw.mit.edu/ans7870/7/7.012/f04/video/ocw-7.012-lec-mit-10250-22oct2004-1000-220k.rm
...it works as expected.
But I would like to know how to use wget to download *all* the video
files at a certain bitrate (the 220k versions) with a single command.
I checked the manpage and used the "-A" option to specify an accept
pattern, with this command:
wget -A "*220k.rm" http://ocw.mit.edu/ans7870/7/7.012/f04/video/
...which returns the following error...
--16:10:53-- http://ocw.mit.edu/ans7870/7/7.012/f04/video/
=> `index.html'
Resolving ocw.mit.edu... 209.123.81.89, 209.123.81.96
Connecting to ocw.mit.edu|209.123.81.89|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
16:10:53 ERROR 404: Not Found.
How do I set up wget to ignore the fact that there is no index.html
at this location, and just download the *.rm files I requested? wget
would be the perfect tool for fetching a series of files like this
unattended, instead of downloading each file by hand, one by
one... Thanks in advance for any help.
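P.S. From re-reading the manpage, it looks to me like the "-A" accept
list is only consulted during a recursive retrieval, so perhaps
something like this is closer (untested; "-l1" and "-np" are just my
guesses for keeping the recursion inside this one directory):

wget -r -l1 -np -A "*220k.rm" http://ocw.mit.edu/ans7870/7/7.012/f04/video/

...though I suspect it will still trip over the missing index.html,
since wget needs a page to harvest links from. The one fallback I can
see is listing the full URLs by hand in a file, say video-urls.txt
(the name is just an example), and feeding it to wget:

wget -i video-urls.txt

...but that hand-listing step is exactly what I was hoping to
automate away.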
--
Daniel W. Armstrong
::: build it yourself biology http://biohackery.com :::