More often than not, when I try recursively downloading a webpage with wget, it downloads a single index.html.gz and then stops. wget can't read gzipped files, so it fails to find any links to follow for recursive downloading. I ended up using a wget fork that was last updated 10 years ago, and it works fine, but I find it odd that such a basic feature never made it into mainline wget.
Please add a feature for automatically detecting and uncompressing gzipped webpages before crawling them.
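A minimal sketch of the requested behavior, in Python rather than wget's C: check the downloaded body for the gzip magic bytes and transparently decompress it before extracting links. The `maybe_decompress` helper is hypothetical, purely to illustrate the detection step, not wget's actual code.

```python
import gzip

# Gzip files begin with the two magic bytes 0x1f 0x8b (RFC 1952).
GZIP_MAGIC = b"\x1f\x8b"

def maybe_decompress(body: bytes) -> bytes:
    """Return body decompressed if it looks gzipped, otherwise unchanged."""
    if body[:2] == GZIP_MAGIC:
        return gzip.decompress(body)
    return body

# Example: a gzipped page body round-trips back to plain HTML,
# while an already-plain body passes through untouched.
page = b"<html><a href='next.html'>next</a></html>"
assert maybe_decompress(gzip.compress(page)) == page
assert maybe_decompress(page) == page
```

With a check like this in the crawl path, the link extractor would always see plain HTML regardless of whether the server delivered a pre-compressed `.gz` file.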
The text was updated successfully, but these errors were encountered: