• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • Somewhat recently, I caused a failed kernel update by accident:

    I ran a system update in a tmux session (a local session on my desktop). The problem was that tmux itself got updated as well, which crashed the tmux session and, with it, the in-progress kernel update. I only realized it upon the next reboot (which no longer worked).

    The solution you described, i.e. “live ISO, chroot, run the system update once more, reboot”, was also what got me out of that situation. So it’s certainly worth learning for general troubleshooting of system updates.
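
    Concretely, that recovery looks roughly like this (a sketch assuming an Arch-based system with root on /dev/sda2 and the ESP on /dev/sda1; adjust device names and mount points to your layout):

    ```sh
    # From the booted live ISO:
    mount /dev/sda2 /mnt              # the installed root filesystem
    mount /dev/sda1 /mnt/boot/efi     # the ESP (or /mnt/boot, depending on setup)
    arch-chroot /mnt                  # helper from arch-install-scripts
    pacman -Syu                       # re-run the interrupted update
    exit
    reboot
    ```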


  • Have you ever learned about the following in VIM:

    • H, M, L, 22H, …: vertical cursor placement
    • zt, zz, zb: vertical scroll positioning
    • 0, $, gm, gM: horizontal cursor placement
    • w, e, b: word-based cursor movement

    Simply holding j or k at times also works, even more so with a decently high key repeat rate.
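
    (On X11 the key repeat rate can be raised with a one-liner; the values here are just an example:)

    ```sh
    # 200 ms initial delay, 50 repeats per second
    xset r rate 200 50
    ```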

    Of course there’s a lot more: https://vimhelp.org/motion.txt.html

    The trick is to learn only a couple of new movement mappings at a time and use them in one’s workflow for a while, until they feel ingrained. Then repeat, iteratively building up one’s movement skills in VIM.

    One can say many things about VIM, but not that learning its movement mappings will drive up the APM (let alone mouse clicks) required to “get stuff done”. Honestly, once a basic set of these movements has been learned, any other editor without them will feel like a drag.




  • I went through setting up netdata for a staging server (on its way to production) not too long ago.

    The netdata docs are quite clear on the fact that the default configuration is a “showcase configuration”, not a “production-ready configuration”!

    It’s really meant to show off all the features to new users, who can then pick what they actually want. The great thing about disabling unimportant collectors is that one gets a lot more “history” for the same amount of storage, because there are simply fewer data points to track. The same goes for adjusting the rate at which data points are taken: going from the default 1 s interval down to 2 s basically halves the CPU requirement, even more so if one also disables the machine learning features.

    The one thing I have to admit, though, is that “optimizing netdata configs” really isn’t done quickly. It simply provides a lot of stuff, and there’s a lot of docs reading to be done before one gets a rough feel for configuring it (i.e. knowing what can be disabled and how much of a difference it actually makes). And of course there’s always a potential need for further optimization later on, once one sees the actual server load in production.
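
    As a starting point, the two tweaks mentioned above look roughly like this (key names as per the netdata docs; verify them against your installed version):

    ```sh
    cd /etc/netdata
    sudo ./edit-config netdata.conf   # netdata's own config-editing helper
    # Then set, for example:
    #   [global]
    #       update every = 2     # sample every 2 s instead of the default 1 s
    #   [ml]
    #       enabled = no         # disable machine-learning anomaly detection
    ```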


  • Re: “KDE 6 FOR ARCH LINUX IS HEREEEEEEE” (Linux@lemmy.ml)

    Same here! I’ve been using Manjaro for more than 5 years now on all my dev machines, and I really like not being overrun by updates.

    Once you form the habit of checking the latest “stable update” forum thread (the equivalent of checking the Arch frontpage before an upgrade) for potential “manual interventions” (if any), it gives you surprisingly good stability, while still being rolling release and “pretty current”.

    And stability simply becomes more of a factor once your metaphorical “plate” is chock-full and the last thing you want is the underlying OS acting up on its own due to an update.


  • Coincidentally, I happen to have been reading up on SEO in more depth this week. Specifically, the official SEO docs by Google:

    https://developers.google.com/search/docs/fundamentals/seo-starter-guide

    To be clear, SEO isn’t about tricking search engines per se. First and foremost, it’s about optimizing a given website so that crawling and indexing of its content work well.
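
    A basic building block from those docs is giving crawlers clear instructions, e.g. a minimal robots.txt plus a sitemap reference (the URLs here are placeholders):

    ```sh
    cat > robots.txt <<'EOF'
    User-agent: *
    Allow: /

    Sitemap: https://example.com/sitemap.xml
    EOF
    ```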

    It’s just that various websites have tried various “tricks” over time to mislead crawling, indexing, and ultimately the ranking, just so their site comes up higher and more often than its content’s quality and relevance would merit.

    Tricks like:

    • keyword stuffing
    • hidden content visible only to crawlers

    The docs linked above (that link is just one part of a much larger set of docs) even mention many of those “tricks” and explicitly advise against them, as they will cause websites to be penalized in their ranking.

    Well, at least that’s what the docs say. In the end it’s an “arms race” between search engines and websites employing trickery.



  • Depends on the specific plugin. I’ve been doing music production on Linux for several years now, and back then things looked a lot worse than they do today. The most popular bridge solution for Windows plugins on Linux at the moment is yabridge. Its README is well worth a close read, because it answers many questions on how to get even modern plugins (i.e. JUCE-based ones) to display correctly.
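
    The basic workflow is roughly this (the plugin path is just an example; see the yabridge README for details):

    ```sh
    yabridgectl add "$HOME/.wine/drive_c/Program Files/Common Files/VST3"
    yabridgectl sync      # set up native bridges for the Windows plugins found there
    yabridgectl status    # verify everything was picked up
    ```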



  • Re: “Make a Linux App” (Linux@lemmy.ml)

    Just looked it up a bit: https://microsoft.github.io/monaco-editor/

    AFAIU, Monaco is just the editor component. So if an Electron application doesn’t need an editor, it won’t really help improve performance.

    Having gone through learning and developing with Electron myself, this (and the links it references) was a very helpful resource: https://www.electronjs.org/docs/latest/tutorial/performance

    In essence: “measure, measure, measure”.

    Then optimize what actually needs optimizing. There’s no easy, generic answer for how to make a given Electron app “appear performant”. I say “appear” because even VSCode leverages various strategies to appear more performant than it actually is in certain scenarios. I’m not saying this to bash VSCode, but because techniques like “lazy loading” are simply tools in the toolbox called “performance tuning”.

    BTW: not even using C++ will guarantee a performant application in the end if the application domain itself is complex enough (e.g. video editors, DAWs, etc.) and one doesn’t pay attention to performance during development.

    All it takes is letting a bunch of somewhat CPU-intensive procedures pile up in an application, and at some point it will feel sluggish in certain scenarios. The only way out of that is to measure where the actual bottlenecks are, then think about how one could get away with doing less (or deferring work to more “idle” times when less else is going on), and then make the corresponding changes to the codebase.


  • Yeah, that browser zoom. And I too used / use Firefox. I’m not saying these kinds of sites are common, but I’ve nevertheless encountered them occasionally. Back then, the most pragmatic workaround was to use Xfce’s desktop zooming.

    My intention with the previous comment was simply to give some examples of desktop zooming that go beyond the typical accessibility viewpoint (e.g. vision impairment).
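
    For reference, Xfce’s zoom requires the compositor and is then available via Alt + mouse wheel (property names assume a recent xfwm4 and may differ between versions):

    ```sh
    xfconf-query -c xfwm4 -p /general/use_compositing -s true
    xfconf-query -c xfwm4 -p /general/zoom_desktop -s true
    ```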



  • Yeah, AFAIR, the issue of “Windows messing up GRUB” could happen when both were installed on the same disk (e.g. on a laptop with a single disk): something about Windows overwriting the MBR. At least that was a problem back before UEFI.

    I too have been dual-booting Windows 10 and Linux for many years now, each on its own physical disk, with the Linux one always first in the boot order. Not once has a Windows 10 update messed up GRUB for me with this setup.


  • Not the same as “on-demand zooming”, which lets one stick with a high, native resolution but zoom in when required (e.g. websites with small text that can’t be zoomed via the browser’s font size increase; or referencing some UI detail during UI design without having to take a screenshot and paste + zoom it in e.g. GIMP).


  • You didn’t mention how big those volumes are or how frequently the data changes.

    Assuming it’s not that much data (rough sketch after the list):

    • use tar to archive each volume first, with the proper options to preserve permissions and whatever else is important for your use case
    • use restic to back up those archives
    • use a proper pruning strategy so your backups don’t grow too big:
      • I’m not that familiar with restic, but maybe you can back up those archives separately and apply a more aggressive pruning strategy just to them
      • that simply might be needed, because (AFAIK) deduplication might not work that well on archives
      • but if the volume data, and thus the resulting archive, doesn’t change that often, deduplication might be sufficient even with a less aggressive pruning strategy
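
    The sketch (volume paths, repo location and retention numbers are made up; check the tar and restic docs for your versions):

    ```sh
    # Archive a volume; tar stores standard permissions by default,
    # --xattrs/--acls additionally capture extended attributes and ACLs:
    tar --xattrs --acls -cf /backups/archives/myvolume.tar -C /srv/volumes/myvolume .

    # Back up the archives under their own tag...
    restic -r /srv/restic-repo backup /backups/archives --tag volume-archives

    # ...so a more aggressive pruning strategy can target just them:
    restic -r /srv/restic-repo forget --tag volume-archives \
        --keep-daily 7 --keep-weekly 4 --prune
    ```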

  • Honestly, if all you’ve ever experienced in terms of terminals is the Windows CMD, then you really haven’t seen much. I mean that positively. If anything, CMD will give you a far worse impression of what using a Linux / Unix terminal can be like (speaking as someone who has spent what feels like years in terminals, the least amount of that in Windows CMD).

    I suggest simply playing around with a Linux terminal (e.g. install VirtualBox, use it to install e.g. Ubuntu, then follow some simple random “Linux terminal beginner tutorial” you find online).


  • On top of that, 20 kHz is quite the theoretical upper limit.

    Most people, be it due to aging (which affects all of us) or due to behaviour (some far more than others), can’t hear that far up anyway. Most people would be surprised how high even e.g. 17 kHz is: it sounds a lot closer to a very high-pitched “hissing” or “shimmer” than to something “tonal”.

    So yeah, saying “oh no, let me have my precious 30 kHz” really is questionable.

    At least when it comes to listening to finished music files. The validity of higher sampling frequencies during various stages of the audio production process is a different, far less questionable topic.


  • Nobody can tell you in advance how far your interest in game dev will take you. There’s only one way to find out: start small (some tutorials, build some crappy first projects) and see if your interest sticks around as you up the challenge.

    Maybe game dev in Godot will end up being a significant chapter in your life; maybe it will just be a small side quest. But once you’ve given it an honest try, then no matter the outcome, you’ll at least know whether it’s something for you. That in itself is already worth something.

    And who knows: maybe Godot is just your gateway to something else you discover along the way, something you wouldn’t have found had you not taken on the challenge in the first place.