Don Piano
- 0 Posts
- 248 Comments
Don Piano@feddit.org to Fuck AI@lemmy.world • PayPal's Honey to integrate with ChatGPT and other AIs for shopping assistance • 2 points · 2 days ago
Is this different from Nate?
Hey now
He was also a racist
Don Piano@feddit.org to Frag Feddit@feddit.org • How do I find out, when friends are venting to me, whether they just want to let off steam or are looking for a constructive solution? • 6 points · 3 days ago
Ask whether emotional or practical support is wanted, say that both are on offer, and in the latter case ask which kind is wanted. Asking is not out of place.
Great song!
How’s the weather in Tyrol these days, Anton?
Don Piano@feddit.org to VeganDE@discuss.tchncs.de • Feeding cats a vegan diet? A vet explains the state of the research (Deutsch) • 4 points · 9 days ago
Because you like animals, for example.
I would look - and have done so myself - at a dealer for used office furniture. Mauser is a pretty good brand; I got decades out of one of their chairs.
The first one is the best one, great timing, framing, lighting!
I recommend finding a different statistics teacher, preferably one who isn’t a comic and who knows the difference between a standard deviation, a standard error, and a 95% interval. Those shouldn’t be too hard to find; it’s relatively basic stuff, but many people actually kinda struggle with the concepts (made harder by various factors, don’t get me started on the misuse of bar charts).
Oops, I should have multiplied those intervals by 1.96, so here again:
9 - 49%
16 - 38%
25 - 30%
100 - 16%
400 - 8%
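For what it’s worth, here’s a minimal sketch of that calculation in Python (assuming SciPy is available); dropping the 1.96 factor reproduces the uncorrected numbers from the other comment:

```python
from math import sqrt
from scipy.stats import norm

# Share of a standard normal population expected to fall inside the 95% CI
# of the mean, i.e. within +/- 1.96 * SE = 1.96 / sqrt(N) standard deviations.
for n in (9, 16, 25, 100, 400):
    half_width = 1.96 / sqrt(n)               # CI half-width in SD units
    coverage = 2 * norm.cdf(half_width) - 1   # P(|Z| < half_width)
    print(f"N = {n:3d}: {coverage:.0%}")
```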
That’s how a standard error with normal-ish data works. The more data points you have for the estimation of a conditional mean, the fewer of the data points will fall within it. For a normal distribution, SE = SD/√N. Heck, you can even just calculate which proportion of the distribution you can expect to be within the 95% CI as a function of sample size. (It’s a bit more complicated because of how probabilities factor into this, but for a large enough N it’s fine.)
For N = 9, you’d expect 26% of data points within the 95% CI of the mean; for N = 16, 19%; for N = 25, 16%; for N = 100, 8%; for N = 400, 4%; etc.
Out of curiosity: What issue did you take with the error margin not including most data points?
To be honest, I doubt Munroe wants to say “if the effect is smaller than you, personally, can spot in the scatterplot, disbelieve any and all conclusions drawn from the dataset”. He seems to be a bit more evenhanded than that, even though I wouldn’t be surprised if a sizable portion of his fans weren’t.
It’s kinda weird: scatterplot inspection is an extremely useful tool in principled data analysis, but spotting stuff by eye is neither sufficient nor necessary for something to be meaningful.
But also… an R^2 of .1 corresponds to a Cohen’s d of 0.67. If this were a comparison of groups, roughly three quarters of the control group would be below the average person in the experimental group. I suspect people (including me) are just bad at intuitions about this kinda thing and like to try to feel superior or something and let loose some half-baked ideas about statistics. Which is a shame, because some of those ideas can become pretty, once fully baked.
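As a rough check of those numbers, a sketch using the usual equal-group-size, equal-variance conversion (assuming SciPy):

```python
from math import sqrt
from scipy.stats import norm

# Convert R^2 from a two-group comparison into Cohen's d, then into Cohen's U3:
# the share of the control group falling below the experimental-group mean.
r_squared = 0.1
r = sqrt(r_squared)
d = 2 * r / sqrt(1 - r_squared)       # point-biserial r -> d, equal group sizes
u3 = norm.cdf(d)                      # assumes normal distributions, equal SDs
print(f"d = {d:.2f}, U3 = {u3:.0%}")  # roughly d = 0.67, U3 = 75%
```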
Sure, you could do some wild overfitting. But why? What substantive theoretical model would such a data model correspond to?
A more straightforward conclusion to draw would be that age is far from the only predictor of flexibility etc., but on the list nevertheless, and if you wanna rule out alternative explanations (or support them), you might have to go and do more observations that allow such arguments to be constructed.
To expand a little: you get a 95% CI by taking the expected value ± 1.96·SE. For a normal distribution, you get the SE by taking the sample SD and dividing it by the square root of the sample size. So for a standard normal distribution, the SE for a sample size of 9 would be 1/3, and for a sample size of 100 it would be 1/10, etc. This is much tighter than the population distribution, but that’s because you’re estimating just the population mean, not anything else.
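In code, that might look like the following sketch with simulated standard-normal data (the sample itself is made up; assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.standard_normal(100)   # hypothetical sample, N = 100

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))   # SE = SD / sqrt(N), ~1/10 here
ci = (mean - 1.96 * se, mean + 1.96 * se)        # normal-approximation 95% CI
print(f"mean = {mean:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```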
Capturing structured variance in the data should then increase the precision of your estimate of the expected value, because you’re removing variance from the error term and adding it to the other parts of your model (cf. the term analysis of variance).
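A toy illustration of that precision gain (entirely made-up data and a simple linear fit; assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: an outcome that depends on age plus noise.
n = 200
age = rng.uniform(20, 70, n)
y = 50 - 0.3 * age + rng.normal(0, 5, n)

# Spread of the error term around the grand mean vs. around a fitted age trend:
se_raw = y.std(ddof=1) / np.sqrt(n)
residuals = y - np.poly1d(np.polyfit(age, y, 1))(age)   # remove the age trend
se_model = residuals.std(ddof=2) / np.sqrt(n)
print(f"SE ignoring age: {se_raw:.3f}, SE after modelling age: {se_model:.3f}")
```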
It’s a 95% CI, presumably for the expected value of the conditional (on age) population mean. It looks correct, given the sample size and variance, what issue do you see with it?
Well, your comment is a better variant of mine; I should have checked. :o) Thanks!
Keep learning, and it’ll stay easier than if you didn’t. See if you can vary the structure of what you’re learning, too, so you don’t get too ossified about that either. Like, have a decade where you focus more on sciences, one more for arts, one more for languages, one more for understanding people who are very different from you… Maybe a decade is too big a chunk, but you get the idea.
I don’t think you should expect the average proportion of votes to correspond to the expected number of chunks won. This is a known property of first-past-the-post electoral systems, which mayoral elections approximate.
There are slightly over 2000 cities and towns in Germany. If there were a party that gets 1% of the votes, you would not expect it to win 20ish mayoral seats; without further information about clustering of voters, you’d rather expect it to be somewhat homogeneous in its losses (because the mechanisms that cause the low popularity presumably (!) apply everywhere; if that 1% of voters all came from the same place because it’s a hyperregional party, for example, that would possibly change).
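A toy simulation of that intuition, with entirely made-up national vote shares and towns drawn as noisy variations around them (assuming NumPy; the Dirichlet concentration stands in for how homogeneous the towns are):

```python
import numpy as np

rng = np.random.default_rng(2)

# Five parties; the last one polls 1% nationally (hypothetical numbers).
national_support = np.array([0.35, 0.30, 0.20, 0.14, 0.01])
n_towns = 2000
concentration = 50   # higher = more homogeneous towns, i.e. less clustering

shares = rng.dirichlet(concentration * national_support, size=n_towns)
winners = shares.argmax(axis=1)                        # first past the post
seats = np.bincount(winners, minlength=len(national_support))
print(dict(zip(["A", "B", "C", "D", "1%-party"], seats)))  # the 1% party wins ~0 towns
```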
Now, sadly, there’s a bit of clustering going on and that makes fascist mayors more likely, but I guess other than that, one possible conclusion here is: Fascist voters are everywhere, so whatever mechanism is behind their etiology, it applies all over the place.
The media landscape and dominant politics feeding into what feeds fascists is a “likely” candidate here. Scare quotes because, like, cmon. :| Friede, Fritze…