

Nobody can have proof of that, because no such proof can ever exist. How would you ever have a proven correct number of cheaters not detected?
globally trivial
Please share your trivial solution then.
Multicast wouldn’t really replace any of the sites you mention because people want and are used to on-demand curated content.
It’s also not as practical as you make it sound to implement it for the entire internet. You claim that this would be efficient because you only have to send the packets out once regardless of the number of subscribers. But how would the packets be routed to your subscribers? Does every networking device on the internet hold a list of all subscriptions to correctly route the packets? Or would you blindly flood the entire internet with these packets?
The common misconception that swap is pointless stems from misunderstanding what it’s supposed to do. You shouldn’t be triggering the OOM killer frequently anyway. In the much more normal case, where you’re only using some of your RAM for running applications, the rest is used as a filesystem cache/buffer. Having swap space available gives your OS the option to evict stale application memory from RAM rather than the filesystem cache, when that would be the optimal choice to make.
This page explains it in detail: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
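You can see this balance directly on a Linux box: `free` splits RAM into application use versus buff/cache and shows current swap usage. A quick read-only sketch:

```shell
# Read-only look at how RAM is split between applications and the
# filesystem cache ("buff/cache"), plus current swap usage (Linux).
if command -v free >/dev/null 2>&1; then
    free -h
else
    echo "free not available here"
fi
# How eagerly the kernel swaps; the default is usually 60:
cat /proc/sys/vm/swappiness 2>/dev/null || true
```

With swap available, the kernel can keep more of that cache warm by pushing out cold application pages instead, which is exactly the tradeoff described above.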
Political views as they are, it’s gotten a lot of pushback
Yeah, the comment above mixed up grammar nazis with actual nazis I guess.
<package>.install scripts which don’t have to be explicitly mentioned in the PKGBUILD if it shares the same name as the package.
Can you show a reproducible example of this? I couldn’t get a `<package>.install` included in a test package I made without explicitly adding it as `install=<package>.install`.
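For reference, this is the shape of the test package I used; `testpkg` is a placeholder name, and the `install=` line is the part that turned out to be required:

```shell
# PKGBUILD (sketch; testpkg is a placeholder package)
pkgname=testpkg
pkgver=1
pkgrel=1
pkgdesc="placeholder package for testing install scriptlets"
arch=('any')
license=('MIT')
# Without this line, testpkg.install was not picked up in my test,
# even though it sits next to the PKGBUILD with the matching name:
install=testpkg.install

package() {
  mkdir -p "$pkgdir/usr/share/$pkgname"
}
```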
Most people claim they read the PKGBUILD (which I don’t believe tbh)
If you don’t trust people to read PKGBUILDs, I’m curious which form of software installation (outside of official repositories) you find safe.
What did he do? I’m out of the loop.
That’s just like your opinion man.
Yeah, for sure there’s a ton of clickbait, but this isn’t “a minor technical matter”. The news here isn’t the clash over whether the patch should be accepted in the RC branch, but the fact that Linus said he wants to remove bcachefs from the kernel tree.
I’m sure many people don’t even think about that. Having to reinstall all your packages from scratch is not something they do frequently.
And for the people who are looking to optimize the initial setup, there are many ways to do it without a declarative package manager. You can:
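For instance, a plain package list already covers the biggest chunk (a sketch assuming pacman; `pkglist.txt` is an arbitrary filename):

```shell
# On the old system: save the list of explicitly installed packages.
if command -v pacman >/dev/null 2>&1; then
    pacman -Qqe > pkglist.txt
else
    # Placeholder list so the sketch also runs off an Arch box:
    printf '%s\n' base linux networkmanager > pkglist.txt
fi

# On the fresh install, replay it (needs root, hence commented out):
#   pacman -S --needed - < pkglist.txt
wc -l < pkglist.txt   # how many packages would be reinstalled
```

Add your dotfiles in a git repo on top of that and the “initial setup” problem mostly disappears without any declarative tooling.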
So the SSD is hiding extra, inaccessible cells. How does `blkdiscard` help? Either the blocks are accessible, or they aren’t. How are you getting at the hidden cells with `blkdiscard`?
The idea is that `blkdiscard` will tell the SSD’s own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?
I feel that, unless you know the SSD supports secure trim, or you always use `-z`, `dd` is safer, since `blkdiscard` can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.
After reading all of this I would just do both… Each method fails in different ways so their sum might be better than either in isolation.
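If you do combine them, roughly this (a sketch; `wipe_disk` is my own name for it, it refuses anything that isn’t a block device, and it is of course destructive when pointed at a real disk):

```shell
# Sketch: combine both methods, since they fail in different ways.
# DESTRUCTIVE if given a real disk; the device path must be passed
# in deliberately.
wipe_disk() {
    dev="$1"
    if [ ! -b "$dev" ]; then
        echo "refusing: $dev is not a block device" >&2
        return 1
    fi
    # 1) Ask the controller itself to discard/zero everything it
    #    manages (-z requests explicit zeroing where supported):
    blkdiscard -z "$dev" || return 1
    # 2) Then overwrite the whole visible address space anyway:
    dd if=/dev/zero of="$dev" bs=1M conv=fsync status=progress
}

# Usage (deliberately not run here): wipe_disk /dev/sdX
```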
But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.
I don’t see how attempting to overwrite would help. The additional blocks are not addressable on the OS side. `dd` will exit because it reached the end of the visible device space, but blocks will remain untouched internally.
The Arch wiki says `blkdiscard -z` is equivalent to running `dd if=/dev/zero`.
Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked “in most cases”, but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.
in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data.
Your conclusion is incorrect because you made the assumption that the SSD has exactly the advertised storage or infinite storage. What if it’s over-provisioned by a small margin, though?
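To put illustrative numbers on it (assumed figures; real spare-area ratios vary by model):

```shell
# A nominally 512 GB drive with ~7% spare area has raw NAND that dd,
# writing through the visible block device, can never reach.
visible=$((512 * 1000 * 1000 * 1000))   # advertised capacity in bytes
spare=$((visible * 7 / 100))            # assumed 7% over-provisioning
echo "visible:   $visible bytes"
echo "untouched: $spare bytes"          # ~35.8 GB dd never sees
```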
He didn’t say anything about Nazism being an opinion you disagree with.
This is literally the only point the article makes and there’s no point even discussing it further if you’re too blind or dishonest to admit that.
You don’t have to trust Drew, though. Vaxry is pretty clear on his stance on the subject.
if I run a discord server around cultivating tomatoes, I should not exclude people based on their political beliefs, unless they use my discord server to spread those views.
which means even if they are literally adolf hitler, I shouldn’t care, as long as they don’t post about gassing people on my server
that is inclusivity
Source: https://blog.vaxry.net/articles/2023-inclusiveActivists
Note how this article is not where he first stated the above. This article is where he doubles down on the above statement in the face of criticism. In the rest of the article he presents nazism as an opinion people might have that you disagree with. He argues that his silent acceptance of nazis is the morally correct stance while inclusive communities are toxic actually.
This means that it’s not just Drew or the FDO who are arguing that Vaxry’s complete lack of political stance is creating safe spaces for fascists. It’s Vaxry himself that explicitly states this is happening and that it’s intentional on his part.
C is pretty much the standard for FFI; you can use C libraries from Rust, and Redox even has its own C standard library implementation.
Right, but I’m talking specifically about a kernel which supports building parts of it in C. Rust as a language supports this, but you also have to set up all your processes (building, testing, doc generation) to work with a mixed codebase. To be clear, I don’t imagine that this part is that hard. When I called this a “more ambitious” approach, I was mostly referring to the effort of maintaining forks of linux drivers and API compatibility.
Linux does not have a stable kernel API as far as I know, only userspace API & ABI compatibility is guaranteed.
Ugh, I forgot about that. I wonder how much effort it would be to keep up with the linux API changes. I guess it depends on how many linux drivers you would use, since you don’t need 100% API compatibility. You only need whatever is used by the drivers you care about.
Does it have to be Linux?
In order to be a viable general use OS, probably yes. It would be an enormous amount of effort to reach a decent range of hardware compatibility without reusing the work that has already been done. Maybe someone will try something more ambitious, like writing a rust kernel with C interoperability and a linux-like API so we can at least port linux drivers to it as a “temporary” solution.
Right, so this is exactly the sort of “benefit” I never expect to see. This is not something that has happened to me in ~25 years of computer use, and if it does happen there are better ways to deal with it. Btrfs and zfs have quotas for this, but even if they didn’t it would not be worth the tradeoff for me. Mispredicting the partition sizes I’ll end up needing after years of use is both more likely to happen and more tedious to fix.
Are you going to dual boot? Do you have some other special requirement? If not, there’s no reason to overthink partitioning in my opinion. I did this for my main NVME:
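Roughly this shape (sizes here are placeholders, assuming a UEFI machine):

```
nvme0n1
├─nvme0n1p1   1 GiB   EFI system partition   /boot
└─nvme0n1p2   rest    btrfs                  /
```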
I use a swap file, so I don’t use a swap partition. If you want more control over specific parts of the filesystem, e.g. a separate /home that you can snapshot or keep when reinstalling the system, then use btrfs subvolumes. This gives you a lot of the features a partition would give you without committing to a specific size.
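The subvolume version of a separate /home looks like this (a sketch; `@` and `@home` are just the common naming convention, and `<root-uuid>` is a placeholder):

```
# Create subvolumes on the mounted btrfs root:
#   btrfs subvolume create /mnt/@
#   btrfs subvolume create /mnt/@home
#
# fstab then mounts them independently, no fixed sizes involved:
#   UUID=<root-uuid>  /      btrfs  subvol=@      0 0
#   UUID=<root-uuid>  /home  btrfs  subvol=@home  0 0
#
# Snapshot just /home before a reinstall:
#   btrfs subvolume snapshot /mnt/@home /mnt/@home-backup
```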
This is the only partitioning scheme I have never regretted. When I’ve tried to do separate partitions I find myself always regretting the sizes I’ve allocated. On the other hand, I have not actually seen any benefit of the separation in practice.
There’s no reason to be rude and insulting. It doesn’t make the other person look lazy; it just makes you look bad, especially when you end up being wrong because you didn’t do any research either. The article is garbage. It’s obviously written by someone who wants to talk about why they don’t like bcachefs, which would be fine, but they make it look like that’s why Linus wanted to remove bcachefs, which is a blatant lie.
But if we click on the article’s own source in the quote we see the message (emphasis mine):
Stability has absolutely nothing to do with it. On the contrary, bcachefs is explicitly expected to be unstable. The entire thing is about the developer, Kent Overstreet, refusing to follow the linux development schedule and pushing features during a period where strictly bug fixes are allowed. This point is reiterated in the rest of the thread if anyone is having doubts about whether it is stated clearly enough in the above message alone.