• 1 Post
  • 156 Comments
Joined 1 year ago
Cake day: April 23rd, 2023

  • Webtoon is still shitty in other ways. When they adapt a property, they want it their way, regardless of the author’s original vision. I’ve seen several stories that originated on Royal Road get Webtoon adaptations, and the adaptations always seem to change or cut important parts of the story, making characters look stupid or replacing entire sets of characters outright. The story is then forced to diverge substantially when, inevitably, something they cut turns out to have been critically important to where the author was taking things. They turn great stories into middling slop every single time.





  • The purpose of this plant is in fact not long-duration storage, but the secondary functions you mentioned, and it’s also meant to be a proof of concept. Per an article from CNESA’s English site, published when the plant’s construction began in June 2023:

    This project represents China’s first grid-level flywheel energy storage frequency regulation power station and is a key project in Shanxi Province, serving as one of the initial pilot demonstration projects for “new energy + energy storage.” The station consists of 12 flywheel energy storage arrays composed of 120 flywheel energy storage units, which will be connected to the Shanxi power grid. The project will receive dispatch instructions from the grid and perform high-frequency charge and discharge operations, providing power ancillary services such as grid active power balance.





  • A router-level VPN is going to be more difficult to configure and cause more problems than just having the VPN on all your devices. Some games refuse to work online when connecting through a VPN, and some mobile apps are the same. When a website blocks your currently selected server, the usual fix is switching to another server, and that’s more difficult and more tedious when the VPN is configured at the router level. In addition, if you use a self-hosted VPN to connect remotely to a media server on your home network, that becomes harder when your home router is itself on a different VPN.

    If you’re trying to keep local devices in the building from phoning home and being tracked, a Pi-hole or a router-level firewall might be a better solution. If you’re running a pfSense or OPNsense router and are a dab hand with VLANs, then maybe you could get what you’re looking for with a router-level VPN, but it’s a huge hassle otherwise. Just put Mullvad on your computers and phones and call it a day.
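    For what it’s worth, the DNS-sinkhole approach a Pi-hole takes boils down to a simple check; here’s a minimal sketch of the idea (the blocklist entries and the upstream resolver are made up for illustration, not real Pi-hole internals):

    ```python
    # Hypothetical blocklist entries for illustration only
    BLOCKLIST = {"telemetry.example.com", "ads.example.net"}

    def resolve(domain, upstream):
        """Return a dead address for blocklisted domains (and their
        subdomains), otherwise forward the query to the upstream resolver."""
        if domain in BLOCKLIST or any(domain.endswith("." + b) for b in BLOCKLIST):
            return "0.0.0.0"  # sinkhole: the phone-home request goes nowhere
        return upstream(domain)
    ```

    The device never learns the real address of the tracking host, so there’s nothing to connect to, and this works for every device on the network without touching any of them.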


  • Unfortunately I can’t even test Llama 3.1 in Alpaca because it refuses to download, showing some error message with the important bits cut off.

    That said, the Alpaca download interface seems much more robust, allowing me to select a model and then select any version of it for download, not just apparently picking whatever version it thinks I should use. That’s an improvement for sure. On GPT4All I basically have to download the model manually if I want one that’s not the default, and when I do that there’s a decent chance it doesn’t run on GPU.

    However, GPT4All allows me to plainly see how I can edit the system prompt and many other parameters the model is run with, and even configure multiple sets of parameters for the same model. That allows me to effectively pre-configure a model in much more creative ways, such as programming it to be a specific character with a specific background and mindset. I can get the Mistral model from earlier to act like anything from a very curt and emotionally neutral virtual intelligence named Jarvis to a grumpy fantasy monster whose behavior is transcribed by a narrator. GPT4All can even present an API endpoint to localhost for other programs to use.

    Alpaca seems to have some degree of model customization, but I can’t tell how well it compares, probably because I’m not familiar with Ollama and I don’t feel like tinkering with it since Alpaca doesn’t want to use my GPU. The one thing I can see that’s clearly better is running multiple models at the same time; right now GPT4All will unload one model before it loads another.
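    Since GPT4All can present an API endpoint to localhost, other programs can drive it with an OpenAI-style chat request, which is also where the persona-style system prompts I mentioned go. A rough Python sketch, assuming the server address from my install (port, path, and model name may differ on yours):

    ```python
    import json

    # GPT4All's local server speaks the OpenAI chat API; on my install it
    # listens here once enabled in Settings. Adjust if yours differs.
    API_URL = "http://localhost:4891/v1/chat/completions"

    def build_request(model, system_prompt, user_msg):
        """Build an OpenAI-style chat payload; the system prompt carries
        the character/persona configuration."""
        return json.dumps({
            "model": model,
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg},
            ],
            "max_tokens": 250,
        })

    # POST this body with Content-Type: application/json (curl, urllib,
    # etc.) while GPT4All is running with its server enabled.
    ```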


  • I have a fairly substantial 16 GB AMD GPU, and when I load in Llama 3.1 8B Instruct 128k (Q4_0), it gives me about 12 tokens per second. That’s reasonably fast for me, but only 50% faster than CPU (which I test by loading mlabonne’s abliterated Q4_K_M version, which runs on CPU in GPT4All, though I have no idea whether that’s actually comparable in performance).

    Then I load in Nous Hermes 2 Mistral 7B DPO (also Q4_0) and it blazes through at 50+ tokens per second. So I don’t really know what’s going on there. Seems like performance varies a lot from model to model, but I don’t know enough to speculate why. I can’t even try Gemma2 models, GPT4All just crashes with them. I should probably test Alpaca to see if these perform any different there…




  • PCIe gen 5 is for the PCIe slots and NVMe storage slots, but they’re backwards compatible; you can put a gen 3 component in a gen 5 slot and it will work at gen 3 speeds. Similarly, if you put a gen 5 component in a gen 4 slot, it will be limited to gen 4 speeds. Right now there’s very little appreciable difference between gen 4 and gen 5 unless you’re spending a lot of money on the component (GPU/storage). Another thing to note is that gen 5 requires that both the CPU and motherboard support it; a CPU with gen 4 support in a gen 5 motherboard will limit all the slots to gen 4 speeds.

    RAM is a totally different standard that must be matched exactly for what the motherboard has; if it’s a DDR5 motherboard then you have to use DDR5 RAM or it won’t even fit in the slots. You can get a PCIe gen 5 motherboard and just use gen 4 SSDs or GPUs, that’s perfectly fine and leaves you room to upgrade later.
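    The compatibility rule above boils down to “the link runs at the lowest generation that the CPU, motherboard, and device all support.” A small sketch of that, with approximate per-lane throughput figures (my numbers, rounded, after encoding overhead):

    ```python
    # Approximate per-lane PCIe throughput in GB/s
    LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

    def negotiated_link(cpu_gen, board_gen, device_gen, lanes=16):
        """A PCIe link trains at the lowest generation any party supports;
        returns (effective generation, approximate bandwidth in GB/s)."""
        gen = min(cpu_gen, board_gen, device_gen)
        return gen, round(LANE_GBPS[gen] * lanes, 1)
    ```

    So a gen 4 CPU in a gen 5 board with a gen 5 GPU still gives you a gen 4 x16 link at roughly 31.5 GB/s, which is plenty for current GPUs.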


  • Seems mostly fine to me, I game all the time on Linux (Bazzite gang 🤘) with a 3900X + 7900 GRE and haven’t had any significant issues aside from needing to make sure clock speeds were configured correctly on the GPU. Two RAM sticks is the way to go with these systems, as they sometimes don’t support four sticks at full speed.

    You’re right that GPU passthrough is definitely more for tinkering or advanced users with very specific needs (usually professionals who need Windows/Nvidia and choose to run it in a VM rather than dual-boot), with a budget to match. For a gamer couple, having fully separate systems is going to be much less hassle and more resilient against failure.

    The one thing I would recommend changing is the power supply; it’s unironically the most important component in the computer, because if it fails it can kill everything else, and the System Power 10 has a poor enough reputation that discussions of its quality come up in web searches. A poor-quality power supply can damage your hardware, cause weird intermittent issues even when everything seems to work fine most of the time, and fail and shut off the computer in situations where a good power supply would have kept on chugging. Seasonic and Corsair are considered the best brands and carry 10-year warranties; they’re more expensive, but they’re worth it. You want 80+ Gold or better these days. This is a buy once, cry once component.

    If you don’t have a UPS, I would also recommend getting one at some point, either one big shared unit (if they’ll be close together) or two individual units. Having backup power will allow you to shut down the computers gracefully during a power outage, and prevents the worst-case scenario where the power goes out while the computer is installing updates and it turns into a brick.