A reminder that this constant advice people blindly parrot, to install and flock to smaller instances, has now created something like 1000 new servers in 50 days that are poorly run and already going offline as quickly as they went online.
And this will always… always be the biggest problem in the FOSS community.
“I don’t like X, so I’m going to leave and make my own version of X”
So userbases get spread thin, manpower gets spread thin, developers get spread thin, and the user experience degrades for everyone until it pushes them back to the bullshit websites.
Sometimes I question why people not in favor of decentralization are commenting on a Fediverse platform.
Why not go to Tildes, Squabbles, or another centralized alternative? There are plenty of fish in the sea.
For the rest of your post, I don’t know what that has to do with people aggregating on LW.
And, factually, the project leaders telling everyone to create 1000 new instances while shutting down sign-ups on Lemmy.ml caused more performance problems.
They had a bug in their PostgreSQL TRIGGER logic where the counters for all 1500 known instances were being updated with +1 comment and +1 post, instead of a single database row via WHERE site_id = 1. So each new Lemmy server that came online made the table larger and the crashes on lemmy.ml more frequent.
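Roughly, the whole difference is one missing WHERE clause. Here is a minimal sketch of that kind of trigger bug, with simplified table and column names and a hypothetical function name (this is not the actual Lemmy migration):

```sql
CREATE OR REPLACE FUNCTION bump_site_comment_count()  -- hypothetical name
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- Buggy form: no WHERE clause, so the +1 rewrites every row in
  -- site_aggregates, one row per known instance:
  --   UPDATE site_aggregates SET comments = comments + 1;

  -- Intended form: only the single local-site row is touched:
  UPDATE site_aggregates SET comments = comments + 1
  WHERE site_id = 1;
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$;

CREATE TRIGGER bump_site_comment_count
  AFTER INSERT ON comment
  FOR EACH ROW EXECUTE FUNCTION bump_site_comment_count();
```

With the missing WHERE clause, every new comment rewrites one row per known instance, so the write cost grows with the number of federated servers instead of staying constant, which matches the “each new server made it worse” behavior described above.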
The irony of you saying that while posting second-hand accounts from other people. They aren’t closed. If they were closed, you wouldn’t even be able to submit a registration application.
Man, reading this thread, you’re kind of a dumbass. Especially if you think rewording your answer from the last reply to reframe it to the current time period, versus what was actually being talked about, would throw anyone off the scent.
If they were closed, you wouldn’t even be able to submit a registration application.
Show me in the code. Because I have closed registration on my own Lemmy server, and it does not turn off the “Sign Up” link or the HTML input fields. But you sure like lies and deception.
The developers of Lemmy seem to make every effort they can to avoid using Lemmy itself to discuss their !postgresql@lemmy.ml learning 101. They have made massive mistakes in SQL TRIGGER logic, and avoided addressing them to such a degree that their social motives are in question now. GitHub Issue 2910 was opened June 4, almost a month before the Reddit API deadline, and they ignored it. Just like they hang out on Matrix chat and don’t use Lemmy themselves to discuss code.
They have cultivated a kind of voodoo attitude towards PostgreSQL where people using Lemmy won’t actually scrutinize the Rust code or PostgreSQL tuning parameters.
And this will always… always be the biggest problem in the FOSS community.
“I don’t like X, so I’m going to leave and make my own version of X”
So userbases get spread thin, manpower gets spread thin, developers get spread thin, and the user experience degrades for everyone until it pushes them back to the bullshit websites.
This is exactly what federation is meant to solve: power in numbers without centralization. Is that so hard to understand?
Sometimes I question why people not in favor of decentralization are commenting on a Fediverse platform. Why not go to Tildes, Squabbles, or another centralized alternative? There are plenty of fish in the sea.
This is another big problem in the FOSS community:
“If you dare offer valid criticism, then why are you even here? Get out and go somewhere else!”
Your criticism is nonsensical. It’s literally criticizing the purpose of the project.
Oh dear, are you gonna start another tissue shortage with all these tears?
Pot calling the kettle black. I offer criticism of your criticism and you throw a hissy fit. Poor child.
Wow, I haven’t gotten a genuine, unironic “No, U!” since like kindergarten.
Wooosh
Your answer didn’t justify lemmy.world being treated the same as Lemmy as a whole. It’s just a bunch of people who don’t understand federation.
And, factually, the project leaders telling everyone to create 1000 new instances while shutting down sign-ups on Lemmy.ml caused more performance problems.
They had a bug in their PostgreSQL TRIGGER logic where the counters for all 1500 known instances were being updated with +1 comment and +1 post, instead of a single database row via WHERE site_id = 1. So each new Lemmy server that came online made the table larger and the crashes on lemmy.ml more frequent.
The amount of disk writing done by Lemmy was ignored.
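If anyone wants to measure that disk writing themselves instead of taking either side’s word for it, here is a rough sketch, assuming the simplified site_aggregates and comment names from the example above (the comment columns in the EXPLAIN statement are hypothetical):

```sql
-- 1. Cumulative row-update counters: if the counter trigger is correct,
--    n_tup_upd on site_aggregates should grow at roughly the rate of new
--    comments and posts, not at that rate multiplied by the number of
--    known instances.
SELECT relname, n_tup_ins, n_tup_upd, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'site_aggregates';

-- 2. Buffer cost of a single insert, including trigger execution time.
--    EXPLAIN ANALYZE really performs the insert, so wrap it in a
--    transaction and roll it back.
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO comment (post_id, creator_id, content)
VALUES (1, 1, 'write-amplification test');  -- hypothetical columns/values
ROLLBACK;
```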
They’ve neither told people to create 1000 new instances nor closed sign-ups on lemmy.ml.
Again, you should really stop revolving your entire life around one GitHub issue, and go touch grass.
You know what is easy to do? Lie and make up facts. It is so much easier to bullshit your way to being popular around here.
It’s disgusting, the people who lie like you do and the people who believe liars: https://lemmy.ml/post/2421636
The irony of you saying that while posting second-hand accounts from other people. They aren’t closed. If they were closed, you wouldn’t even be able to submit a registration application.
Man, reading this thread, you’re kind of a dumbass. Especially if you think rewording your answer from the last reply to reframe it to the current time period, versus what was actually being talked about, would throw anyone off the scent.
They’ve not closed sign-ups. Requiring approval of sign-ups is not closing sign-ups. How am I wrong?
Currently, no. But they had, which was clearly stated, since it was talking about the past.
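For what it’s worth, recent Lemmy versions don’t store this as a simple open/closed flag at all; registration is a three-way mode, which is where the “requires approval” versus “closed” distinction comes from. A hedged way to check it, assuming a 0.18-era schema where local_site carries a registration_mode column (names may differ by version):

```sql
-- Assumption: 0.18-era Lemmy schema; column and value names may differ.
SELECT registration_mode FROM local_site;
-- Typical values: Open, RequireApplication, Closed.
```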
Show me in the code. Because I have closed registration on my own Lemmy server, and it does not turn off the “Sign Up” link or the HTML input fields. But you sure like lies and deception.
The developers of Lemmy seem to make every effort they can to avoid using Lemmy itself to discuss their !postgresql@lemmy.ml learning 101. They have made massive mistakes in SQL TRIGGER logic, and avoided addressing them to such a degree that their social motives are in question now. GitHub Issue 2910 was opened June 4, almost a month before the Reddit API deadline, and they ignored it. Just like they hang out on Matrix chat and don’t use Lemmy themselves to discuss code.
They have cultivated a kind of voodoo attitude towards PostgreSQL where people using Lemmy won’t actually scrutinize the Rust code or PostgreSQL tuning parameters.
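Nothing stops anyone from scrutinizing it themselves from a psql prompt. A sketch of where to look, with the comment table name assumed from the examples above:

```sql
-- List the trigger definitions installed on the comment table:
SELECT tgname, pg_get_triggerdef(oid)
FROM pg_trigger
WHERE tgrelid = 'comment'::regclass
  AND NOT tgisinternal;

-- And the PostgreSQL tuning parameters most relevant to a write-heavy
-- trigger workload:
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'wal_buffers',
               'checkpoint_timeout', 'synchronous_commit');
```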