• Null User Object@lemmy.world · 9 hours ago

    So this absolute dumb fuck, who has utterly fallen for the sycophancy, is very likely putting military secrets into his prompts. Military secrets that can then be read by employees at OpenAI, or by hackers who have gotten into their servers, and sold to whatever nation state pays up. What an absolute shit-for-brains.

    ETA: Even if they’re not literal secrets, I guarantee that enemies are very willing to pay just to find out what (even mundane) problems he’s asking AI for help with. Is a top US General asking AI how to unclog a toilet? North Korea wants to know.

  • NigelFrobisher@aussie.zone · 13 hours ago

    I’m thinking of telling management that AI came up with my plans so I can just do whatever I want and have them love it.

  • TommySoda@lemmy.world · 1 day ago

    I’m less concerned about it making poor decisions and more concerned about plausible deniability. It’s going to make poor decisions, that’s just a feature of AI, but if either of them makes a bad call and people die, he can just say it wasn’t him. It basically gives them a golden ticket to do whatever they want and blame AI to avoid accountability. It’s literally one of the excuses Israel has used when murdering civilians in Gaza.

    I’m not saying this particular person is the kind that would do something like this, but there are plenty that will and already do.

  • Treczoks@lemmy.world · 1 day ago

    Decisions like when AI suggested that depressed people kill themselves, told recovering alcoholics to have a drink as a treat, and recommended heroin to others?