• LoamImprovement@beehaw.org · 11 months ago

    Hold up, does anyone know how to save an entire site? I would really like to get the 5e wikidot archived in case Hasbro or whoever wants to shut it down for good.

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 11 months ago

      Probably a browser extension these days. I had one back in the late ’90s or early 2000s that would download the page you were on, plus every page, image, audio file, etc. linked from it, recursively.

      This was back when most websites had a table-of-contents link somewhere, though. Plenty of sites now don’t link to every page on the domain; those pages are only reachable if you enter the URL manually, or they’re created dynamically and only exist upon request.
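The core of that kind of recursive saver is just extracting every link from a page and staying on the same domain. A minimal sketch in Python, using only the standard library (the URLs below are placeholders, and a real crawler would fetch and save each discovered page too):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href/src targets from a page: the links a recursive saver would follow."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                absolute = urljoin(self.base_url, value)
                # Stay on the same domain, like the old recursive downloaders did.
                if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
                    self.links.add(absolute)

# Example: same-domain links from a snippet of HTML (no network needed here).
page = ('<a href="/page2.html">next</a> <img src="/img/map.png">'
        '<a href="https://other.example/x">off-site</a>')
parser = LinkExtractor("https://example.org/page1.html")
parser.feed(page)
print(sorted(parser.links))
# → ['https://example.org/img/map.png', 'https://example.org/page2.html']
```

Note that this only finds pages something links to, which is exactly the limitation described above: dynamically generated or unlinked pages never show up in any page's link set.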

    • jherazob@beehaw.org · 11 months ago

      There’s software that starts at a site’s homepage, traverses every link it finds, and saves each page along the way.
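One long-standing example of such software is GNU wget's mirror mode. A typical invocation looks something like this (the URL is a placeholder; check the site's terms and robots.txt first):

```shell
# Mirror a site: recurse through links, grab page requisites (images, CSS),
# rewrite links so the copy works offline, and pause between requests to be polite.
wget --mirror --page-requisites --convert-links --wait=1 https://example.org/
```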

    • anton@lemmy.blahaj.zone · 11 months ago

      It won’t save everything, but if a script follows every link recursively, most content should be reachable that way. That’s basically what Google does, just for one site instead of the whole internet.

      If there is a search function, try very simple queries; they can surface pages that nothing links to.

      The alternative of brute-forcing links would be infeasible, even if the site doesn’t rate-limit you, because the number of candidate URLs grows exponentially with their length.
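To put a rough number on that blow-up: even short paths over a small character set give an astronomical search space. A back-of-the-envelope calculation in Python, assuming lowercase letters and digits only:

```python
# 36 possible characters (a-z, 0-9) per position in a URL path.
alphabet = 36

# Candidate paths of exactly length n: 36**n.
# Summed over all lengths up to 8, this is already far beyond any crawl budget.
candidates = sum(alphabet**n for n in range(1, 9))
print(candidates)  # → 2901713047668, about 2.9 trillion guesses
```

At even 100 requests per second, exhausting those ~2.9 trillion guesses would take roughly 900 years, and that's before paths longer than 8 characters.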

      If you do want to do something, please look into API/scraping etiquette, like exponential back-off.
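A minimal sketch of that back-off etiquette, assuming Python (the function names here are illustrative, not from any particular library): on each failure, double the wait before retrying, up to a cap.

```python
import time

def backoff_delays(base=1.0, factor=2.0, max_delay=60.0, attempts=6):
    """Yield the wait before each retry: 1s, 2s, 4s, ... capped at max_delay."""
    delay = base
    for _ in range(attempts):
        yield min(delay, max_delay)
        delay *= factor

def fetch_with_backoff(fetch, url):
    """Call fetch(url); on failure, sleep an exponentially growing delay and retry."""
    for delay in backoff_delays():
        try:
            return fetch(url)
        except Exception:
            time.sleep(delay)
    raise RuntimeError(f"giving up on {url}")

print(list(backoff_delays()))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

The point is that a struggling server sees rapidly thinning traffic from you instead of a constant hammering, which is what separates polite archiving from an accidental denial-of-service.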

    • zzz@feddit.de · 11 months ago

      Link? And where can I upload a PDF* of the site to share with you? tmpfiles.org’s short duration probably won’t cut it…

      *Although I’m certain The Saver™️ would only accept full web-archive zips, for us casuals the PDF export will do (and is easier in day-to-day use)