Whenever a forum dies, a lot of answered questions are lost that would have been useful to people googling it in the future.
Reddit is like the mother of all forums, and it also has had a lot of internet history being made on it.
I really think we need a Reddit archive that is available to random people coming from Google. The best-case scenario is that Reddit just limps on for years, doing this conservation work better than anyone else could.
The entire text of Reddit has been archived up through February 2023. Compressed it’s close to 2TB and you can get it here: https://the-eye.eu/redarcs/
The Wayback Machine has a chunk, and I'm sure a couple of the index sites have copies at various points in time.
now?
There have been a lot of people actively removing and altering their content; as it stands today, Reddit itself is a terrible archive.
Lately, when I'm looking for answers and my googling gives me a Reddit link, I pull up the actual Reddit page in the Wayback Machine. Admittedly my sample size is small, but it hasn't failed me yet.
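That lookup can be automated with the Internet Archive's public availability API. Here is a minimal sketch (the endpoint and response shape are the documented Wayback availability API; the helper names are my own):

```python
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp=None):
    """Build a query URL for the Wayback Machine availability API.

    timestamp is an optional YYYYMMDD string to find the snapshot
    closest to a given date.
    """
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(page_url):
    """Return the URL of the closest archived snapshot, or None."""
    with urllib.request.urlopen(availability_url(page_url)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None
```

Calling `closest_snapshot("https://www.reddit.com/r/AskHistorians/wiki/books")` would return a `web.archive.org` URL if a capture exists, which matches the manual workflow above.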
Some day, though, if Reddit goes down completely or otherwise becomes unavailable to search engines, it will be much harder to find Reddit content by Googling for it.
The other thing I think is a hidden gem of useful info on Reddit is the wikis. It seemed that even sites like Libreddit (when it worked) didn’t provide access to those.
There is also the Archive Team. They still seem to be actively archiving Reddit (probably via web scraping, not any particular API). I'm not sure if or how the results of Archive Team scrapes are made available to others, though.
I wonder if there’s a browser extension to automatically try to load way back links for specific domains
I forgot about all the wikis and helpful links. This is such a loss.
For some reason, many of the reddit posts on web archive seem to have no comments because they haven’t been archived recently.😕
Pushshift has compressed dump files containing every post and comment on Reddit for a given period.
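For anyone wanting to work with those dumps: they are newline-delimited JSON, one post or comment per line (the published files are zstandard-compressed, so a real pipeline would first stream-decompress them, e.g. with the third-party `zstandard` package). A minimal sketch of the per-line parsing, with made-up sample data and a filter for wiped content:

```python
import json

def iter_posts(lines):
    """Yield parsed post dicts from newline-delimited JSON,
    skipping posts whose body was deleted or removed."""
    for line in lines:
        obj = json.loads(line)
        if obj.get("selftext") in ("[deleted]", "[removed]"):
            continue
        yield obj

# Hypothetical sample lines in the dump format.
sample = [
    '{"id": "abc", "title": "Example", "selftext": "hello"}',
    '{"id": "def", "title": "Gone", "selftext": "[deleted]"}',
]
kept = list(iter_posts(sample))
```

The filter matters for the point above: a lot of content in recent scrapes is already `[deleted]` or `[removed]`, so older dumps are often the more complete record.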
deleted by creator
What do people think about scraping Reddit for some specific content and rehosting it over here on Lemmy? For example the stories from r/BestofRedditorUpdates. It would make Lemmy feel more like home and add much needed content.
Consent is important in the Fediverse. Scraping user content without their consent and posting it elsewhere is generally viewed as a big no-no.
This. Scraping and re-uploading would be absolutely wrong.
Thanks for the honest feedback.
I think that's a terrible idea. I came here to get away from Reddit. Let's make our own content.
deleted by creator
deleted by creator