At the time of writing, Lemmy.world has the second-highest number of active users of all Lemmy instances.
Also at the time of writing, Lemmy.world has >99% uptime.
By comparison, other Lemmy instances with as many users as Lemmy.world keep going down.
What optimizations has Lemmy.world made to its hosting configuration that have made it more resilient than other instances’ hosting configurations?
See also “Does Lemmy cache the frontpage by default (read-only)?” on [email protected]
It’s known in the industry as the throw-hardware-at-it optimization. It’s often effective, and exactly what’s needed to buy time for software optimization to catch up.
As someone who got burnt out on one of their last businesses due to optimizing too early - Yes!!!
Doing it “properly” with “stateless servers” and “autoscaling” with “Kubernetes” costs a hell of a lot more money than a 64 Core server with 256 GB of RAM
Completely OT but it’s so nice to recognise usernames you’ve often seen around your “neighbourhood” on Reddit.
Thank you for all the compliments.
This ride reminds me of Mastodon.world in November. Details on that are here: https://blog.mastodon.world/and-then-november-happened
So I started lemmy.world on a 2CPU/4GB VPS, keeping an eye on performance. Soon I decided to double that, and after the first few thousand users joined, doubled it again to 8CPU/16GB. That was also the max I could get for that VPS type.
But already I saw some donations come in, without really asking. That reminded me of the willingness to donate on Mastodon, which allowed me to easily pay for a very powerful server for mastodon.world, one of the reasons it grew so fast. Other (large) servers crashed and closed registrations; I (mainly) didn’t.
So, I decided to buy the same large server (32 CPU / 64 threads with 128GB RAM) as for masto (but that masto one has double the RAM). With the post announcing that, I also mentioned the donation possibilities. That brought a lot of donations immediately, already funding this server for at least 2 months. (To the anonymous person who donated $100: wow!)
Now next: to solve the issue with post slowness. That’s probably a database issue.
And again: migration took 4 minutes of downtime, and it could have been less if I wasn’t eating pizza at the same time. So if any server wants to migrate: please do! If you have the userbase, you’ll get the donations for it. Contact me if you have questions.
Nice job, thanks very much for the write up.
Out of curiosity are you cloud hosting or do you own a server on a rack somewhere? Scaling with Kubernetes or VMs or just running bare-metal?
deleted by creator
Is it a good idea to host on my home internet?
I bet you could do it if your instance didn’t pull in a lot of traffic.
If it did… I reckon that you might be able to pull it off to a certain extent so long as your internet package was good enough, but if you got hit with a Reddit-level flood of incoming users, your network almost certainly wouldn’t be able to keep up.
Even if it could, if you were consistently eating through all the upload bandwidth, I reckon you’d draw the eyes of your ISP and they might send you a letter kindly and respectfully telling you that if you don’t upgrade to a commercial line they’re not renewing your contract.
As someone “in the business”, but not nearly as technical as you… How far can a single instance scale? Can a load balancer spread it over multiple front-ends to handle user load? Can the back-end be reworked to handle hundreds of millions of user operations per second? Can it work with a CDN? Can a single “Lemmy.World” site exist as a distributed site, with hundreds of servers spread across dozens of sites across the globe?
I expect this entire line of thought is antithetical to the Lemmy philosophy of distributed operation. I expect that the “correct” way is to spin off “NA.Lemmy.World”, “EMEA.Lemmy.World”, “APAC.Lemmy.World”, etc. as separate servers. Is that correct?
Thanks.
And this kindness and willingness to help is why I’ve already fallen in love with Lemmy. Thank you good sir, thank you dearly for helping the next generation of internet denizens :D
Interesting. I’m new on Lemmy (and the fediverse itself), but when you say “server”, do you mean the backend that handles frontend traffic, or the database that stores all the data? It seems the next optimization step is distributing the traffic across multiple servers.
Also (again, I don’t know the Lemmy system itself), maybe you can get away with upgrading only the CPU cores or only the RAM (depending on what’s bottlenecking the system). In my experience, the RAM requirement scales more slowly than the CPU.
Hey, you rock. This place is pretty cool.
What does it cost per month to operate your servers, namely this one?
They just upgraded to a dedicated server for 180€/month today.
Wow. That’s so much more than I expected!
Really, more? We’re talking about a dedicated server hosting thousands of users posting content.
What kind of pizza was it?
Salami
Based
Based on what? Salami?
Am I able to use the same account to log in to mastodon.world? Or do I need to make an account there too? Never used Mastodon, but I’m vibing with the fediverse stuff.
Unless I’m very mistaken, you cannot use the same account.
You can; you just have to log in to the server/website where you made the account, then browse over to the server you wish to contribute to or use.
It’s a bit weird, I know. If you’ve got any questions I’ll try my best to answer them :D
deleted by creator
Does anyone have any recommendations for where you could host a cheap instance? Under 100 users?
deleted by creator
Friggen excellent reply! Thank you! Saving this!
Having to separate kbin from the rest of the fediverse is really limiting, and makes the experience more fractured.
[This comment has been deleted by an automated system]
Kbin is (slightly) better looking though.
[This comment has been deleted by an automated system]
I played around with the Stylus browser extension and made a custom script with adjustments to widths, padding, font sizes, line heights, etc and Lemmy started to feel a lot better and more familiar. I’m sure there are really talented people working on ideas to make it better.
🤞 Can’t wait to see how things develop!
Talk about dumb luck! I chose this server (apparently 2 days after launch) because documentation suggested choosing a less populated server to spread the load. Now I’m on one of the biggest and most stable. Me so happy!
I didn’t choose this server but I can still join and post. Greetings from Lemm.ee!
I chose this server at complete random (I didn’t even understand the multiple servers thing). Me so happy too!
Same. It felt funny when I heard we had gone up too fast in numbers the other day. But I won’t switch; I like our admin and how he takes care of this instance.
Looks like the guy who runs it runs a lot of fediverse servers, I guess he knows what he’s doing: https://lemmy.world/u/ruud
@ruud runs a top 10 mastodon server.
Yes. And I’m asking him to share his tweaks here with the community so that others instance admins can shore-up their servers :)
Fwiw, he has been providing quite a lot of transparency in his posts to this community. He’s shared his hardware config in detail, posted maintenance posts with brief descriptions of what he’s doing, and replied to comments around specific config tweaks. I haven’t catalogued a list of links, but I’ve seen him do all of these things in the last 48h. It’s easy to imagine that all these things could be compiled in real time into a how-to, but it’s a pretty big deal just to keep the lights on right now, and pretty difficult to understand whether tweaks that helped your setup are generally applicable or only situationally useful and happen to perform well for your specific setup.
I’m sure we will see more high-performance Lemmy guides in the future, but at this point no one has more than 36h of experience with high-performance Lemmy. Give them a minute to catch up.
Likely experience and knowledge improving the quality of the deployment. Most instances are probably underspecced, on hosts that don’t make scaling up easy, or maxed out in their current offering tier (lemmy.ml comes to mind there).
I wouldn’t be surprised if it has more to do with caching than throwing hardware at it.
Looking at ruud’s post, he moved the instance to a pretty beefy server - it sounds like a large part of the stability is coming from overestimating performance requirements.
* correctly estimating
🙂
Correct. Lemmy is a monolithic application, so there’s only so much a server upgrade can do.
> Lemmy is a monolithic application, so there’s only so much a server upgrade can do.

This is sort of true, but not really true. The default Docker setup is comprised of 4 containers. I’ve seen admins report that two of those containers (`lemmy` and `lemmy-ui`) can be horizontally scaled just fine. The `pict-rs` and `postgres` containers can currently only be vertically scaled, but Postgres natively supports scaling read load, at least through read replicas, and there’s an incomplete proposal to support scaling reads through separate db connections.

All of which is to say, it’s possible to throw 4-6 machines at a Lemmy install. It’s not truly a single-process monolith. Would the Lemmy code be able to productively use all that hardware? I dunno. It’s scaled better to big hardware on `lemmy.world` than I would have predicted last week; maybe it can fully utilize a 6-machine setup, or maybe the db falls over first and you need to fix performance bugs before an instance can scale to the user counts necessary to support bigger hardware setups.
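As a concrete sketch of what scaling those two stateless containers can look like in a docker-compose-based install (service names are assumed to match the standard compose file; check your own setup):

```shell
# Hypothetical example: run three copies of the stateless `lemmy`
# backend and two of `lemmy-ui`, leaving postgres and pict-rs as
# single instances. Service names are assumptions, not from this thread.
docker compose up -d --scale lemmy=3 --scale lemmy-ui=2
```

You’d still need a load balancer in front to spread requests across the replicas.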
Just subscribed at Patreon to support the cause! 👍
I’m not an admin, but I have followed the sizing discussions around the lemmyverse as closely as I can from my position of lacking first-hand knowledge:

- `lemmy.ml` is the biggest instance by user count, but runs on incredibly modest 8-CPU hardware. Their cloud provider doesn’t offer any easy scale-up options, so they can’t trivially restart on a bigger VM with their db and disk in place. I suspect this means that instance is going to suffer for a bit as they figure out what to do next. `lemmy.world`, on the other hand, was running on a box at least twice as big as `lemmy.ml` at last count, and I believe they can go quite a bit bigger if they need to.
- The `lemmy.world` admins also run `mastodon.world` and lived through the twitterpocalypse, seeing peak user registration rates of 4k per hour. So this is not their first rodeo in terms of explosive growth; I’m sure that experience gives them some tricks up their sleeve.
- The admin team is pretty clearly technically strong. If I recall correctly, ruud is a professional database admin. One of the spooky parts of Lemmy, performance-wise, is the db. If ruud or others on the admin team custom-tuned their pg setup based on their own analysis of how/why it’s slow, they may be getting more performance per CPU cycle than other instances running more stock configs, or cargo-culting tweaks that aren’t optimal for their setup without understanding what makes them work.
- I’m surprised that `sh.itjust.works` isn’t growing faster. They also have a hefty hardware setup, and seemingly the technical admins to handle big user counts. I wonder if it’s a branding problem, where `lemmy.world` sounds inviting and plausibly serious while `sh.itjust.works` sounds like clowntown even though it’s run by a capable and serious team.

> I wonder if it’s a branding problem, where `lemmy.world` sounds inviting and plausibly serious while `sh.itjust.works` sounds like clowntown

That was my thought process when choosing an instance, tbh. I’m not a tech person; I looked at the list and lemmy.world was the first “safest-feeling” instance that had open sign-up. I saw sh.itjust.works and didn’t even check their sign-up process; there were too many periods in the strange name and it just looked weird to me as someone not used to these things. Edit: spelling
Nah, I’m a bit regretful about not signing up on their instance. sh.itjust.works is a cool name and can be a brag point, lol. lemmy.world is a bit too generalist, but I won’t migrate there, as ruud (the admin of lemmy.world) is doing a good job managing the instance. I appreciate that. :)
For what it’s worth, I looked at sh.itjust.works. The reason I chose beehaw.org was that they were more local, and had more local content and users. Plus the server focus and values seemed to fit me better. Yes, their domain is a bit odd, but that was not a factor for me.
I definitely second the motion on it being a branding problem. Stuff like sh.itjust.works seems to me like something that dark-basement tech nerds would come up with that is “edgy” and really only used by them and other people like them.
I’m not really into the ironic “edgy” aesthetic and part of the struggle with this transition for me has been orienting myself in the space because I don’t want to commit to some “sketchy” edgelord URL
something that dark basement tech nerds would come up with that is “edgy” and really only used by them and other people like them.
That’s exactly what it is and why I love it. The whole thing about this federated networking is that it doesn’t matter where you signed up.
Where you sign up entirely determines your local feed.
Just like with reddit, I don’t use defaults.
The least useful of the three feeds
I originally signed up with sh.itjust.works, but I wanted to be on the instance with the majority of migrants.
Also, it sounds dumb, but I think the sh.itjust.works domain is just kinda weird, technically has a “curse word” in it (not that I personally care), and they don’t support NSFW content (which isn’t just used for porn). So, it didn’t make sense to have that as my home instance. 🤷‍♂️
Edit: Also, this is my first comment on here! Hello world! 👋
Yeah, I get it. Naming optics aside, it seems an instance with a lot of headroom relative to others, with a capable team. Would be near the top of my word-of-mouth options in spite of the idiosyncratic name.
It’s been running a little slow today though so maybe not as much headroom as you think
I had a very similar thought process when choosing my instance. lemmy.world seemed like it would be more open to new users than an instance named sh.itjust.works. Idk why that was my thought process but I’m here now
I’m now going to start incorporating “Sounds like clowntown” into my everyday conversations - that’s funny!
Mind you, it can sound a lot like “clown world”, which is a phrase Nazis and other groups against progress love to use.
“clown world” was at least initially a reference to how the CIA meddles in the affairs of the world (Clowns In America).
Can confirm… I didn’t sign up for sh.itjust.works solely because of the name… I don’t particularly want that attached to every post I make.
Guess we’re just different kinds of people…
lemmy.ml just migrated to bare metal https://lemmy.ml/post/1234235
Can none of this scale horizontally? Every mention of scaling has been just “throw a bigger computer at it”.
We’re already running into issues with the bigger servers being unable to handle the load. Spinning up entirely new instances technically works, but is an awful user experience and seems like it could be exploited.
It’s important to recall that last week the biggest lemmy server in the world ran on a 4-core VM. Anybody that says you can scale from this to reddit overnight with “horizontal scaling” is selling some snake oil. Scaling is hard work and there aren’t really any shortcuts. Lemmy is doing pretty well on the curve of how systems tend to handle major waves of adoption.
But that’s not your question, you asked if Lemmy can horizontally scale. The answer is yes, but in a limited/finite way. The production docker-compose file that many lemmy installs are based on has 5 components. From the inside out, they are:
- Postgres: The database, stores most of the data for the other components. Exposes a protocol to accept and return SQL queries and responses.
- Lemmy: The application server, exposes websockets and http protocols for lemmy clients… also talks to the db.
- Lemmy-ui: Talks to Lemmy over websockets (for now, they’re working to deprecate that soon) and does some fancy dynamic webpage construction.
- Nginx: Acts as a web proxy. Does https encryption, compression over the wire, could potentially do some static asset caching of images but I didn’t see that configured in my skim of the config.
- Pict-rs: Some kind of image-hosting server.
So… first off… there’s 5 layers there that talk to each other over the docker network. So you can definitely use 5 computers to run a lemmy instance. That’s a non-zero amount of horizontal scaling. Of those layers, I’m told that lemmy and lemmy-ui are stateless and you can run an arbitrary number of them today. There are ways of scaling nginx using round-robin DNS and other load-balancing mechanisms. So 3 out of the 5 layers scale horizontally.
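To make the load-balancing part concrete, here’s a minimal sketch of nginx spreading traffic round-robin over several backend replicas. The upstream name, container hostnames, and server name are illustrative assumptions, not from this thread; 8536 is Lemmy’s usual backend port, but verify against your own config:

```nginx
# Hypothetical sketch: round-robin (the nginx default) across three
# lemmy backend containers. Names and TLS details are placeholders.
upstream lemmy_backend {
    server lemmy-1:8536;
    server lemmy-2:8536;
    server lemmy-3:8536;
}

server {
    listen 443 ssl;
    server_name lemmy.example.org;

    location / {
        proxy_pass http://lemmy_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```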
Pict-rs does not. It can be backed by object storage like S3, and there are lots of object storage systems that scale horizontally. But pict-rs itself seems to still need to be a single instance. But still, that’s just one part of lemmy and you can throw it on a giant multicore box backed by scalable object storage. Should take you pretty far.
Which leaves postgres. Right now I believe everyone is running a single postgres instance and scaling it bigger, which is common. But postgres has ways to scale across boxes as well. It supports “read replicas”, where the “main” postgres copies data to the replicas and they serve reads, so the leader can focus on handling just the writes. Lemmy doesn’t support this kind of advanced request routing today, but Postgres will be ready when it does. In the far future, there’s also sharding writes across multiple leaders, which is complex and has its downsides but can scale writes quite a lot.
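For illustration, the kind of read/write request routing an application would need for read replicas can be sketched in a few lines. This is not Lemmy code; the DSNs and the simple SELECT-only heuristic are made-up assumptions (a real router would also have to send SELECTs inside transactions, CTEs, and `SELECT … FOR UPDATE` to the leader):

```python
# Hypothetical sketch: route writes to the primary, spread plain reads
# round-robin over replicas. DSN strings are placeholders.
PRIMARY_DSN = "postgres://primary:5432/lemmy"
REPLICA_DSNS = [
    "postgres://replica-1:5432/lemmy",
    "postgres://replica-2:5432/lemmy",
]

_counter = 0  # naive round-robin cursor

def route(sql: str) -> str:
    """Return the DSN a statement should be sent to."""
    global _counter
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb == "SELECT":  # simplification: only bare SELECTs go to replicas
        dsn = REPLICA_DSNS[_counter % len(REPLICA_DSNS)]
        _counter += 1
        return dsn
    return PRIMARY_DSN  # INSERT/UPDATE/DELETE/DDL all go to the leader
```

The hard part isn’t the routing itself but replication lag: a user who just posted expects to see their own post on the next read, which is why this tends to need application-level support.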
All of which is to say… lemmy isn’t built on purely distributed primitives that can each scale horizontally to arbitrary numbers of machines. But there is quite a lot of opportunity to scale out in the current architecture. Why don’t people do it more? Because buying a bigger box is 10x-100x easier until it stops being possible, and we haven’t hit that point yet.
I hope lemmy.ml can upgrade at some point. A lot of the slowness I’m running into is trying to browse/discover communities that happen to live on that instance.
That’s actually awesome for users of `sh.itjust.works`. Like myself.