• 0 Posts
  • 55 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • There’s no reason to be rude and insulting. It doesn’t make the other person look lazy; it just makes you look bad, especially when you end up being wrong because you didn’t do any research either. The article is garbage. It’s obviously written by someone who wants to explain why they don’t like bcachefs, which would be fine, except it frames that dislike as the reason Linus wanted to remove bcachefs, which is a blatant lie.

    Despite this, it has become clear that BcacheFS is rather unstable, with frequent and extensive patches being submitted to the point where [Linus Torvalds] in August of last year pushed back against it, as well as expressing regret for merging BcacheFS into mainline Linux.

    But if we click on the article’s own source in the quote, we see this message (emphasis mine):

    Yeah, no, enough is enough. The last pull was already big.

    This is too big, it touches non-bcachefs stuff, and it’s not even remotely some kind of regression.

    At some point “fix something” just turns into development, and this is that point.

    Nobody sane uses bcachefs and expects it to be stable, so every single user is an experimental site.

    The bcachefs patches have become these kinds of "lots of development during the release cycles rather than before it", to the point where I’m starting to regret merging bcachefs.

    If bcachefs can’t work sanely within the normal upstream kernel release schedule, maybe it shouldn’t be in the normal upstream kernel.

    This is getting beyond ridiculous.

    Stability has absolutely nothing to do with it. On the contrary, bcachefs is explicitly expected to be unstable. The entire thing is about the developer, Kent Overstreet, refusing to follow the Linux development schedule and pushing features during a period in which only bug fixes are allowed. This point is reiterated throughout the rest of the thread, in case anyone doubts that it is stated clearly enough in the message above.







  • <package>.install scripts which don’t have to be explicitly mentioned in the PKGBUILD if they share the same name as the package.

    Can you show a reproducible example of this? I couldn’t get a <package>.install included in a test package I made without explicitly adding it as install=<package>.install.
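
    For reference, my test looked roughly like this (foo is a placeholder name):

      # PKGBUILD
      pkgname=foo
      pkgver=1.0
      pkgrel=1
      arch=('any')
      # foo.install sat next to the PKGBUILD, but makepkg only picked it up
      # once this line was added explicitly:
      install=foo.install

      package() {
        true
      }

      # foo.install
      post_install() {
        echo "foo post_install ran"
      }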

    Most people claim they read the PKGBUILD (which I don’t believe tbh)

    If you don’t trust people to read PKGBUILDs, I’m curious which form of software installation (outside of official repositories) you find safe.






  • So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren’t. How are you getting at the hidden cells with blkdiscard?

    The idea is that blkdiscard will tell the SSD’s own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?

    I feel that, unless you know the SSD supports secure trim, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM gives no assurance about wiping those hidden cells.
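
    To make that concrete, these are the variants in question (device name is a placeholder; check blkdiscard(8) for your version):

      # plain TRIM: asks the controller to discard blocks; no promise the cells are wiped
      blkdiscard /dev/sdX

      # secure discard: only works if the drive advertises secure trim support
      blkdiscard -s /dev/sdX

      # zero-fill instead of discard, comparable to writing zeros with dd
      blkdiscard -z /dev/sdX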

    After reading all of this, I would just do both… Each method fails in different ways, so combining them might be better than either in isolation.

    But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.
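
    A rough sketch of that approach (device and filesystem are just examples):

      # encrypt at setup time; everything written from then on is ciphertext
      cryptsetup luksFormat /dev/sdX2
      cryptsetup open /dev/sdX2 cryptroot
      mkfs.ext4 /dev/mapper/cryptroot

      # later, "wiping" reduces to destroying the keyslots; any hidden cells
      # only ever held ciphertext anyway
      cryptsetup luksErase /dev/sdX2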


  • I don’t see how attempting to overwrite would help. The additional blocks are not addressable on the OS side. dd will exit when it reaches the end of the visible device space, but those blocks will remain untouched internally.
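
    That is, something like the following only ever touches the LBAs the drive exposes:

      # writes zeros until the visible end of the device, then exits with
      # "No space left on device"; over-provisioned cells are never addressed
      dd if=/dev/zero of=/dev/sdX bs=1M status=progress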

    The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.

    Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked “in most cases”, but the results are unreliable. On one drive, they found that 1GB of data had survived 20 passes.




  • You don’t have to trust Drew, though. Vaxry is pretty clear about his stance on the subject.

    if I run a discord server around cultivating tomatoes, I should not exclude people based on their political beliefs, unless they use my discord server to spread those views.

    which means even if they are literally adolf hitler, I shouldn’t care, as long as they don’t post about gassing people on my server

    that is inclusivity

    Source: https://blog.vaxry.net/articles/2023-inclusiveActivists

    Note how this article is not where he first stated the above; it is where he doubles down on that statement in the face of criticism. In the rest of the article, he presents Nazism as merely an opinion people might hold that you disagree with. He argues that his silent acceptance of Nazis is the morally correct stance, and that it is inclusive communities that are actually toxic.

    This means it’s not just Drew or the FDO arguing that Vaxry’s complete lack of a political stance creates safe spaces for fascists. It’s Vaxry himself who explicitly states that this is happening and that it’s intentional on his part.


  • C is pretty much the standard for FFI, you can use C libraries with Rust and Redox even has their own C standard library implementation.

    Right, but I’m talking specifically about a kernel that supports building parts of itself in C. Rust as a language supports this, but you also have to set up all your processes (building, testing, doc generation) to work with a mixed codebase. To be clear, I don’t imagine that this part is that hard. When I called this a “more ambitious” approach, I was mostly referring to the effort of maintaining forks of Linux drivers and API compatibility.

    Linux does not have a stable kernel API as far as I know, only userspace API & ABI compatibility is guaranteed.

    Ugh, I forgot about that. I wonder how much effort it would take to keep up with the Linux API changes. I guess it depends on how many Linux drivers you would use, since you don’t need 100% API compatibility; you only need whatever the drivers you care about actually use.



  • Right, so this is exactly the sort of “benefit” I never expect to see. This is not something that has happened to me in ~25 years of computer use, and if it does happen, there are better ways to deal with it. Btrfs and ZFS have quotas for this, but even if they didn’t, it would not be worth the tradeoff for me. Mispredicting the partition sizes I’ll end up needing after years of use is both more likely to happen and more tedious to fix.
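
    For the record, limiting a subvolume with btrfs quotas looks roughly like this (mount point and size are examples):

      # cap what one subvolume can consume, without carving the disk into partitions
      btrfs quota enable /mnt
      btrfs qgroup limit 100G /mnt/@home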


  • Are you going to dual boot? Do you have some other special requirement? If not, there’s no reason to overthink partitioning, in my opinion. I did this for my main NVMe drive:

    • Partition table: GPT
    • /boot : 1GB FAT32 partition. Depending on your needs (number of kernels, initramfs images, other OSes) you might be fine with 500MB or even less. But because resizing can be a pain and I have the space to spare, I would much rather overprovision.
    • / : LUKS2 partition containing a btrfs filesystem with all the remaining space

    I use a swap file, so I don’t need a swap partition. If you want more control over specific parts of the filesystem, e.g. a separate /home that you can snapshot or keep when reinstalling the system, use btrfs subvolumes. They give you most of the features a partition would, without committing to a specific size. A sketch of the whole setup follows below.
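
    For concreteness, that setup looks roughly like this (device name and sizes are examples, not the exact commands from my install):

      # GPT with a 1GB EFI system partition; the rest goes to the encrypted root
      sgdisk -n1:0:+1G -t1:ef00 -n2:0:0 /dev/nvme0n1
      mkfs.fat -F 32 /dev/nvme0n1p1

      # LUKS2 container holding btrfs
      cryptsetup luksFormat --type luks2 /dev/nvme0n1p2
      cryptsetup open /dev/nvme0n1p2 cryptroot
      mkfs.btrfs /dev/mapper/cryptroot

      # optional: subvolumes instead of separate partitions
      mount /dev/mapper/cryptroot /mnt
      btrfs subvolume create /mnt/@
      btrfs subvolume create /mnt/@home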

    This is the only partitioning scheme I have never regretted. Whenever I’ve tried separate partitions, I’ve ended up regretting the sizes I allocated; on the other hand, I have never actually seen the separation pay off in practice.