  • As someone pretty new to Linux, what’s wrong with snaps? I’ve seen a lot of memes dunking on them but haven’t run into any issues with the couple I’ve tried (I even had a problem with a Flatpak version of a program that the snap version fixed, though I think that may have been related to an intentional feature of Flatpaks rather than a bug).


  • To my understanding, liberalism does value things like that, or at least personal rights and freedoms to some extent, which can cover them. The problem as I see it is that it also puts an overly strong emphasis on personal property rights (for example, one would not expect a liberal government to do something like forcibly nationalize a company, especially without simply buying the shares at market price, because that would be seen as impinging on the rights of the company’s owners).

    Now, I don’t object to all personal property, like a person owning the home they live in, but if some people own overwhelmingly more than others, that in itself limits the effective rights of everyone else. For example, a person with more money to spend on lawyers is less likely to face justice for crimes; a person with enough money to buy ads, political lobbyists, or even entire media platforms has their speech go much further than someone of average wealth; and even for property rights themselves, there’s only so much wealth-generating capacity to go around, so if one person owns a large share of it, it can’t be owned by anyone else, and those other people’s work ends up enriching that one owner.

    That’s why I find liberalism problematic: it’s generally well-intentioned, I think, but by failing to ensure a relatively even distribution of wealth, it lets the other values it tries to promote be subverted and slip away, until eventually a few people have enough power to seize authoritarian control.


  • I tend to think that any system can theoretically be transformed into any other system via a finite number of reforms, if you can exert enough power to force them through (which a revolution also requires), so I don’t see reform and revolution as mutually exclusive. As I see it, the thing about liberals isn’t that they merely want to reform the system; it’s that they don’t want the system to be changed at all, by any means, or at least not those aspects of it that lead to its being dominated by a small handful of people.


  • The utility of a nuclear stockpile is as a deterrent against a threat we know exists (hostile foreign powers). The utility of this is as a deterrent against, or response to, what, exactly? A hypothetical AI beyond what we currently have the tech to make, and one which, if built, probably wouldn’t behave the way it’s portrayed in fiction, so the button is unlikely to actually be pressed even when needed. Consider that the AIs we already have can be used to persuade people of things. If we somehow managed to make a Skynet-style super-AI bent on taking over the world, its most obvious move wouldn’t be to suddenly launch a war on humanity; it would be to manipulate people into giving it control of things, so that whoever was in charge of pressing the button would pretty much be the AI itself, or someone favorable to it, long before anyone realized pressing it was even necessary.


  • No, but the safeguard should be designed in response to the dangers the tech actually poses, and those tend to be more subtle than actively trying to kill everyone: perpetuating existing human biases in areas like medicine and hiring, without a clear way to tell that a biased decision has been made or a human in the loop to hold accountable, or providing dangerously inaccurate information. Nobody is likely to press a universal off button to deal with these kinds of “everyday” problems, and once the response is given the damage is done, so safety efforts should focus on regulating what the AI says and does in the first place rather than on responding to it afterwards.


  • A Skynet-style attack on civilization isn’t really a realistic example of the dangers posed by AI, or at least by the current tech going by that name, and I struggle to think of a danger where the best solution is to shut down the actual data center as quickly as possible. That would be like pointing to the problems Facebook has caused and insisting that the proper response is an instant global Facebook shutdown button. Further, a lot of AIs can run locally, which would make shutting down data centers an ineffective way to deal with them. And who presses this button? The AI company, which has an incentive to keep its product active whenever there’s any doubt? The government, which will likely take a long time to act, or which might use the button as leverage to force the AI to be biased toward the current crop of politicians? Is the button remotely accessible, thereby enabling a hacker to disable the AI and any infrastructure that has been foolishly made reliant on it? Or is it airgapped, and therefore not much more useful than simply cutting power to the site or disconnecting its data cables?