• 2 Posts
  • 56 Comments
Joined 3 months ago
Cake day: July 2, 2025






  • They do handle simultaneous workloads, but each workload is essentially performing the same function. A person, on the other hand, is a huge variety of different functions all working in tandem; a set of vastly different operations that are all inextricably linked together to form who we are. No single form of generative algorithm is anything more than a tiny fraction of what we think of as a conscious being. They might perform the one operation they’re meant for on a much greater scale than we do, but we are nowhere near linking all those pieces together in a meaningful, cohesive stream.

    Edit: think of the algorithms like adding more cores to a CPU. Sure, you can process workloads simultaneously, but each workload is interchangeable and can be arbitrarily assigned to any core or thread. Not so with people. Every single operation is assigned to a specialized part of the brain that only does that one specific type of operation. And you can’t just swap out the RAM or GPU; every piece is wired to interact only with the other pieces it was grown together with.
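
    To make the “interchangeable workloads” half of that analogy concrete, here’s a minimal sketch (the task itself is arbitrary): a thread pool where any worker can pick up any chunk, in any order; the opposite of a brain region that only ever does one job.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def crunch(chunk):
        # Any worker can run this on any chunk; nothing ties a given chunk to a given core.
        return sum(x * x for x in chunk)

    chunks = [range(i * 1_000, (i + 1) * 1_000) for i in range(8)]

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(crunch, chunks))

    print(sum(results))
    ```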


  • I think it’s a bit overzealous to say LLMs are the wrong approach. It is possible the math behind them would be useless to a true AI, but as far as I can tell, the only definitive statement we can make right now is that they can’t be the whole approach. Still, you’re absolutely right that there is a huge set of operations we haven’t figured out yet if we want a genuine AI.

    My understanding of consciousness is that it isn’t one single sequence of operations, but a set of simultaneous ones. There’s a ton of stuff all happening at once for each of our senses. Take sight, for example. Just to see things, we need to measure light intensity, color definition, and spatial relationships, and then mix that all together in a meaningful way. Then we have to balance each of our senses, decide which one to focus on in the moment, or even focus on several at once. And that hasn’t even touched on thoughts, emotions, social relationships, unconscious bodily functions, or the systems in place that let us switch things back and forth between conscious and unconscious, like breathing, or blinking, or walking, and so on. There are hundreds, maybe thousands of operations happening in our brains simultaneously at any given moment.

    So, without a doubt, LLMs aren’t the most energy-efficient way to do pattern recognition. But I find it hard to believe that a strong system for pattern recognition would be fully unusable in a greater system. If/when we figure the rest out, I’m sure an LLM could be used as a piece of a much greater puzzle… if we wanted to burn all that energy.
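
    A toy illustration of the “many simultaneous streams, one integrator” idea, with a pattern recognizer as just one stream among several. The stream names and the salience scoring are completely made up; this is not a claim about how any real cognitive system works.

    ```python
    import asyncio
    import random

    async def stream(name, queue):
        # One independent "sense" producing events at its own pace.
        while True:
            await asyncio.sleep(random.uniform(0.1, 0.5))
            await queue.put((name, random.random()))  # (source, salience)

    async def integrator(queue, steps=10):
        # A single consumer deciding what to attend to.
        for _ in range(steps):
            name, salience = await queue.get()
            if salience > 0.8:  # crude "focus" threshold
                print(f"attending to {name} (salience {salience:.2f})")

    async def main():
        queue = asyncio.Queue()
        senses = [asyncio.create_task(stream(n, queue))
                  for n in ("sight", "sound", "touch", "pattern_recognizer")]
        await integrator(queue)
        for task in senses:
            task.cancel()
        await asyncio.gather(*senses, return_exceptions=True)

    asyncio.run(main())
    ```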



  • On the other hand, Jellyfin’s identify feature works better than Plex’s did for me, and it lets you rename stuff very easily, whereas Plex needed you to find the exact piece of media in a database.

    My mom asked me to rip a set of weirdo bootleg tai chi DVDs years ago, back when I used Plex, but I couldn’t figure out how to get them to show up in the library because, again, weirdo bootleg media and I have no idea where she got them. But I switched to Jellyfin last year and on a whim decided to mess with them, and getting them to show up in my Jellyfin library was basically automatic.

    Edit: another fun example of fucking with Plex’s identify feature just came to mind. For some reason it kept deciding that random movies were actually some movie named “A Fish Called Wanda.” I’d never heard of it before, the movies it would misidentify were entirely random as far as I could tell, and no amount of fuckery would get it to identify the movie correctly. It would decide that, say, The Matrix was actually AFCW, I’d remove the files for The Matrix, and it would decide something else was AFCW. Eventually I got fed up and downloaded an actual copy of AFCW, but it still refused to play the correct files if I navigated to AFCW in my library. Never did figure that one out.
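
    If automatic identification ever refuses to cooperate (like that AFCW weirdness), one fallback that I believe works, assuming the library has the NFO metadata reader enabled, is to drop a Kodi-style movie.nfo next to the files. A minimal sketch with made-up names and paths:

    ```python
    from pathlib import Path

    # Hypothetical folder for an unidentifiable bootleg disc rip.
    movie_dir = Path("/media/movies/Tai Chi Practice Vol 1 (1998)")
    movie_dir.mkdir(parents=True, exist_ok=True)  # normally this folder already holds the video files

    nfo = """<?xml version="1.0" encoding="utf-8"?>
    <movie>
      <title>Tai Chi Practice Vol 1</title>
      <year>1998</year>
      <plot>Bootleg instructional DVD, ripped straight from disc.</plot>
    </movie>
    """

    # Jellyfin picks up movie.nfo from the movie's folder on the next library scan.
    (movie_dir / "movie.nfo").write_text(nfo, encoding="utf-8")
    ```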



  • Oh, getting kicked even when you’re not cheating has definitely happened; it’s even happened to me. But that cheater was suspicious even before that shot; that one is just the shot that sealed the deal for me. Using the Direct Hit is also part of the evidence! Projectile cheaters like using it because its rockets move faster, so the cheat engine is more accurate against people who can airstrafe. Also, it was Citadel (a new map, so the good prefire airshot spots are far from clear), and they would have had to be randomly firing rockets up at the sky for no reason. I get owned plenty often; this was not that.

    I was never at the top of the comp scene, but I’ve spent my fair share of time playing pugs against the best of the best, and not even soooapymeister hits shots like that person was hitting.








  • Same, it just started a few minutes ago across all my devices, no matter where I connect from.

    After letting my phone screw up the connection a few more times, I finally managed to get the gist of the error message that would blink onto the screen: something about a certificate failing to auto-renew.

    Seems to be fixed this morning. Not seeing any mention of last night’s issues anywhere else, though. Weird!
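
    For anyone who wants to confirm whether it really is an expired cert next time, a quick sketch (the hostname is a placeholder for whatever your apps connect to):

    ```python
    import socket
    import ssl

    HOST = "your.instance.example"  # placeholder hostname
    PORT = 443

    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                cert = tls.getpeercert()
                print("cert OK, expires:", cert["notAfter"])
    except ssl.SSLCertVerificationError as err:
        # A failed auto-renew usually shows up here, e.g. "certificate has expired".
        print("cert problem:", err.verify_message)
    ```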




  • The big difference is that hacky shit in Linux almost always happens because of oversights, whereas Windows actively fights you on things you want to do. This means that a solution that worked for some forum poster 10 years ago has a pretty good chance of still working today (if it’s even still necessary), whereas Microsoft would see that fix as a bug and try to “patch” it. You would never have to fuck around like this just to get your default browser to be, y’know, the default.
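
    For contrast, here’s roughly all it takes on most Linux desktops, wrapped in Python just to keep the examples in one language; “firefox.desktop” is only an example entry, use whatever .desktop file your browser ships.

    ```python
    import subprocess

    # Set the default browser via xdg-utils.
    subprocess.run(
        ["xdg-settings", "set", "default-web-browser", "firefox.desktop"],
        check=True,
    )

    # Confirm it took.
    current = subprocess.run(
        ["xdg-settings", "get", "default-web-browser"],
        capture_output=True, text=True, check=True,
    )
    print(current.stdout.strip())
    ```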

    Not to mention that trying to troubleshoot Windows always means browsing through a half dozen forum posts of people having your exact problem, where the only replies are some IT script that takes 3 paragraphs to tell you to reinstall whatever program, with no follow-up when that inevitably doesn’t work.