• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: October 14th, 2024


  • My take: if Rossman came out swinging as an anti-corporate revolutionary, his ideas wouldn’t have wide appeal right now, since many people still think the problem is just a few “bad” mega-corporations. So instead, he’s arguing for less-shitty tech corporations as a first step (symbolized by Clippy, a mascot of a less-intrusive software age), rather than “destroy all tech corpos now.” No, Microsoft wasn’t good then, but they were less awful.

    If his video were starkly anti-capitalist, it would not have reached 2.5 million people, and I’d say getting that many people to start thinking about rejecting invasive software is a great step in the right direction, as opposed to ideological purism that would only resonate with those who already agree. The need for these baby steps is frustrating for those who already see the big picture, but a few chats with my coworkers quickly reveal how shockingly little some people have actually thought about the sins of big tech.



  • DegenerateSupreme@lemmy.zip to Fuck AI@lemmy.world — “On Exceptions” (+5/−2) · 3 months ago

    I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact of powering centralized, commercial AI models. Look at situations like the one in Texas. A person’s use of models like ChatGPT, however small, adds to the demand for an architecture that requires incomprehensible amounts of water while much of the world doesn’t have enough. In classic fashion, the U.S. government is years behind on admitting what’s wrong, letting these companies ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.

    The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.

    By my measure, this AI bubble will collapse like a dying star within the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally destructive practices, and eventually we’ll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).

    As for what you can do instead, people have been running local DeepSeek-R1 models since earlier this year, so you could follow a guide to set one up.
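    For anyone curious, here’s a rough sketch of one popular route. This assumes the Ollama CLI as the runner (llama.cpp and LM Studio are alternatives), and the `deepseek-r1:8b` tag below is one of the distilled variants — pick whatever size your hardware can actually hold:

    ```shell
    # One common way to run a distilled DeepSeek-R1 model locally.
    # Assumes the Ollama CLI (https://ollama.com); other runners exist.
    # Read install scripts before piping them to sh.

    # 1. Install Ollama (Linux/macOS convenience script):
    #      curl -fsSL https://ollama.com/install.sh | sh
    # 2. Pull and chat with a distilled variant sized for consumer hardware:
    #      ollama run deepseek-r1:8b

    # Print the model tag assumed above as a quick reference:
    echo "deepseek-r1:8b"
    ```

    On a machine without a decent GPU, the smaller distilled tags (e.g. `1.5b` or `7b`) are a more realistic starting point.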


  • DegenerateSupreme@lemmy.zip to Fuck Cars@lemmy.world — “No words” (+64/−1) · 5 months ago

    The shift to these ridiculously large trucks is partly a consequence of the poorly implemented Obama-era fuel economy regulations. The standards were scaled to a vehicle’s footprint (wheelbase times track width), which disincentivized manufacturers from making mid- or small-sized trucks: the bigger they made them, the less the fuel economy targets constrained them. Larger vehicles also ease constraints on engineers, who no longer have to struggle to fit everything into a small body. Once large trucks became the default offering, they morphed into the annoying cultural “status” symbol we know today.

    Anyway, I have a Mazda MX-5 Miata and I love my tiny car.


  • To say “that feeling” of indignation (at the letter’s inclusion in a gallery) is the same as other things that make him roll his eyes is reductionist. We regard things as stupid for different reasons; they’re not all the “same feeling.” As others have said, the artist’s intentionality in presenting something is part of its message, so the indignation he felt about a piece being put in a gallery is part of that piece’s effect on him, born from the artist’s choices. That feeling is different from hearing a moron say something dumb and thinking it’s stupid.

    Intentionality is the key. Case in point: “language evolves” is a silly thing to say after a mistake, but many subcultures start misspelling things on purpose, and that intentionality is how language actually evolves.