A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.

  • Naval Ravikant
  • 30 Posts
  • 594 Comments
Joined 9 months ago
Cake day: January 30th, 2025


  • Opinionhaver@feddit.uk (OP) to Bicycles@lemmy.ca · New (to me) bike day · 3 months ago

    I possibly could, but I haven’t given up on it just yet, so I don’t want to take it apart before I’ve at least attempted to fix the frame.

    I bought a 32T chainring to replace the 38T one and see how much that helps. If it’s still too stiff, then I’ll just convert it into a 1x11 with an 11-50 cassette, so that with the 38T chainring it has the same gear ratio as my old bike. I might then try using the 32T chainring for winter riding, when there’s more need for torque than speed.
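    The gearing comparison above can be sanity-checked with a short script. Only the 32T/38T chainrings and the 11t/50t cassette endpoints come from the comment; everything else here is illustrative.

    ```python
    def gear_ratio(chainring: int, sprocket: int) -> float:
        """Gear ratio = chainring teeth / rear sprocket teeth.
        Higher means harder to pedal but faster; lower means more torque."""
        return chainring / sprocket

    # Compare the easiest and hardest gears each chainring gives
    # on an 11-50 cassette (endpoints as stated in the comment).
    for chainring in (32, 38):
        lowest = gear_ratio(chainring, 50)   # easiest climbing gear
        highest = gear_ratio(chainring, 11)  # hardest top gear
        print(f"{chainring}T: lowest {lowest:.2f}, highest {highest:.2f}")
    ```

    The 32T ring lowers every ratio by the same factor (32/38 ≈ 0.84), which is why it trades top speed for torque across the whole cassette.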







  • One of the main issues in the current AI discussion is user expectations. Most people aren’t familiar with the terminology. They hear “AI” and immediately think of some superintelligent system running a space station in a sci-fi movie. Then they hear that ChatGPT gives out false information and conclude it’s not intelligent - and therefore not even real AI.

    What they fail to consider is that AI isn’t any one thing. It’s an extremely broad term. It simply refers to any system designed to perform a cognitive task that would normally require a human. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. Narrow AI can have superhuman cognitive abilities, but only within the specific task it was built for, like playing chess.

    A large language model like ChatGPT is also a narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. It often gets things right - not because it knows anything, but because its training data contains a lot of correct information. That accuracy is an emergent byproduct of how it works, not its intended function.

    What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognitive ability across a wide range of domains, like a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but AGI and LLMs are not the same thing, even though both fall under the umbrella of AI.






  • Opinionhaver@feddit.uk to Showerthoughts@lemmy.world · If I had a hammer … · 3 months ago (edited)

    Ironically, I had to use AI to figure out what this is supposed to mean.

    Here’s the intended meaning:

    The author is critiquing the misapplication of AI—specifically, the way people adopt a flashy new tool (AI, in this case) and start using it for everything, even when it’s not the right tool for the job.

    Hammers vs. screwdrivers: A hammer is great for nails, but terrible for screws. If people start hammering screws just because hammers are faster and cheaper, they’re clearly missing the point of why screws exist and what screwdrivers are for.

    Applied to AI: People are now using large language models (like ChatGPT) or generative AI for tasks they were never meant to do—data analysis, logical reasoning, legal interpretation, even mission-critical decision-making—just because it’s easy, fast, and feels impressive.

    So the post is a cautionary parable: just because a tool is powerful or trendy (like generative AI) doesn’t mean it’s suited to every task. And blindly replacing well-understood, purpose-built tools (like rule-based systems, structured code, or human experts) with something flashy but poorly matched is a mistake.

    It’s not anti-AI—it’s anti-overuse or misuse of AI. And the tone suggests the writer thinks that’s already happening.





  • It means Artificial General Intelligence, and the term has been around for almost three decades.

    The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled “Nanotechnology and International Security”:

    By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.