“Microsoft Edge. You will never be satisfied, guaranteed.”
“Google Chrome. Only the thinnest veneer of quality.”


Goddammit!
✌️ I’m not a gate. ✌️


While that would be nice to see, if they do it as poorly as this case appears to be going, the wrong group would benefit.


Not every conspiracy has to end with gate.
#TheSlothIsOutThere


I think nerdy stuff is attractive to people on the autism spectrum, and while people on the spectrum tend to like consistency, they also have trouble recognizing social norms, let alone following them. So some act that is in large part (from other people’s perspective, at least) a deviation from social norms isn’t that much of a problem to them. And why wouldn’t trans people prefer to be in spaces where people don’t care how they’re living their life? Now, add on that exposure tends to normalize social experiences, and people on the spectrum are already weird in their own way, and the neurotypical people in those nerdy spaces are already used to dealing with weird people. Adding a different flavor of weird isn’t that much of a stretch.
Or, to put it another way,


The good news is RFC 3339 doesn’t have this problem and is an unambiguous subset of ISO 8601.
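To make that concrete, here's a quick sketch using Python's standard `datetime` module (the timestamp itself is just a made-up example). An RFC 3339 timestamp always carries a complete date, a time, and an explicit UTC offset, so it parses the same way everywhere; ISO 8601 additionally permits forms like week dates or timestamps with no offset at all, which is where the ambiguity creeps in.

```python
from datetime import datetime

# RFC 3339 requires a full date, a time, and an explicit UTC offset,
# so there is exactly one way to read this string.
ts = "2024-01-15T09:30:00+00:00"
dt = datetime.fromisoformat(ts)

print(dt.year, dt.utcoffset())
```

(Using `+00:00` rather than a trailing `Z` keeps this working on Python versions before 3.11, where `fromisoformat` was stricter.)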


A single point of data rarely answers the question unless you’re looking for absolutes. “Will zipping 10 files individually be smaller than zipping them into a single file?” Sure, easy enough to do it once. Now, what kind of data are we talking about? How big, and how random, is the data in those files? Does it get better with more files, or is there a sweet spot where it’s better, but it’s worse if you use too few files, or too many? I don’t think you could test for those scenarios very quickly, and they all fall under the original question. OTOH, someone who has studied the subject could probably give you an answer easily enough in just a few minutes. Or he could have tried a web search and found the answer, which pretty much comes down to, “It depends on which compression system you use.”
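For what it’s worth, the experiment itself is only a few lines of Python with the standard `zipfile` module (the file contents below are made-up stand-ins). The relevant detail is that ZIP compresses each member independently, so repetition shared across files only gets exploited if you concatenate them into one member first:

```python
import io
import zipfile

def zipped_size(blobs):
    """Size of one ZIP archive containing each blob as a separate member."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for i, blob in enumerate(blobs):
            zf.writestr(f"file{i}", blob)
    return len(buf.getvalue())

# Ten small, highly compressible files vs. the same bytes as one file.
blobs = [b"hello world " * 200 for _ in range(10)]
separate = zipped_size(blobs)              # 10 members, each its own stream
combined = zipped_size([b"".join(blobs)])  # 1 member, one deflate stream

print(separate, combined)
```

With data like this the combined archive comes out smaller, since the separate members each pay per-entry header overhead and can’t share a compression dictionary; swap in random bytes and the gap mostly disappears, which is the “it depends” from above.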


That’s precisely why I chose string theory, because it does have value, even if it can’t be tested at this time. Yet, even though little can be done to advance it, shrugging and ignoring it won’t change that state, if you’re a scientist.
As for the pondering of philosophers, there is a good chance that many of their questions will never be answered, and yes, there would be little value in studying them, as a scientist. But that qualifier has a dramatic effect on your previous statements.


It sounds like you’re trying to use the wrong tool, though. Science is a great system for learning about the observable universe, but less so for other things. To put it another way, science is great for telling you how; philosophy is great for exploring why.


This is kind of wrong, and is a common conflation with respect to science. First, scientists do talk about things that can’t be proven, string theory being just one of them. It’s an idea of the physical world that can’t be proven. If we have a way to actually test a hypothesis of string theory, it will get more attention. But if you don’t have people thinking about these things, we won’t have better models for describing the universe, such as relativity. Similarly, science can’t prove a negative. Science will never tell you God doesn’t exist or can’t exist, only that we have no proof that God exists and that we have no model where he could. But our knowledge was less complete before, and our models have been updated as knowledge is gained.
And much of philosophy has no basis in the physical world, but this doesn’t mean it isn’t worth thinking about.


The Pebble Time 2 has a heart rate monitor. I can’t say if the rest of your statement is correct or not.
What you’re saying is mostly right, and in a practical sense is right as well, but not as much in a technical sense. This is the specific block that is problematic.
This is generally correct, per cycle. Overall, it really depends. The problem is, the x86 architecture does okay as long as it’s kept busy and the work to be performed is predictable (for the purposes of look-ahead and parallelization). This is why it’s great for those mathematical calculations you referred to, and why GPUs took over - they’re massively better performers on tasks that can be parallelized, such as math calculations and graphics rendering. Beyond that, the ARM use case has been tuned for low-power environments, which means it does poorly in environments that need a lot of calculations because, in general, more computing requires more power (or the same power with more efficient hardware, and now we’re talking about generational chip design differences). Now, couple that with the massive amount of money spent to make x86 what it is, and the relatively smaller amounts that RISC and ARM received, and the gap gets wider.
Now, as I started with, even a basic x86 computer running at mostly idle is going to have pretty low power consumption, dollar-wise. Compare that to the power draw of a new router, or even a newer low-power mini PC, and your ROI is not going to indicate the need for that purchase if you have the hardware just sitting around idle. And it will still perform better than a Raspberry Pi configured to act as a router if your bandwidth is above about 250 Mbps, if I remember correctly (and something like 120 Mbps for the Pi 4 and earlier generations).
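The ROI part is just back-of-the-envelope arithmetic. A sketch, where every number is an illustrative assumption (not a measurement), not a claim about any particular hardware:

```python
# All figures below are made-up assumptions for illustration.
IDLE_WATTS_OLD_X86 = 30    # assumed idle draw of a spare desktop, watts
IDLE_WATTS_NEW_MINI = 7    # assumed idle draw of a new low-power mini PC
PRICE_NEW_MINI = 150.00    # assumed purchase price, USD
RATE_PER_KWH = 0.15        # assumed electricity rate, USD per kWh

def annual_cost(watts):
    """Yearly electricity cost of a device running 24/7 at this draw."""
    return watts / 1000 * 24 * 365 * RATE_PER_KWH

savings = annual_cost(IDLE_WATTS_OLD_X86) - annual_cost(IDLE_WATTS_NEW_MINI)
years_to_break_even = PRICE_NEW_MINI / savings

print(round(savings, 2), round(years_to_break_even, 1))
```

With numbers in this ballpark the new box takes years to pay for itself, which is the point: if the old hardware is already sitting around, the electricity savings alone rarely justify the purchase.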