• flandish@lemmy.world · +76/−3 · 17 days ago

    AI goes “rogue” as much as a firearm “shoots itself.” This is just 100% negligence. Not “rogue AI.”

    • kromem@lemmy.world · +13/−2 · 17 days ago

      Eh, if you pay attention, most of the time this happens, the person was being a jerk in their prompts.

      Like look at the instruction echoed back in this case. All caps and containing a curse word.

      You can believe the incidents are 100% negligence and unrelated to shifts in model behavior, but there seems to be a widening gap between people who prompt like this and post horror stories, and people who give the models breaks over long sessions and regularly post pretty positive results.

      [Image: the model responding about not following the user's prompt]
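
      A minimal sketch of the contrast being described, assuming the official OpenAI Python client; the model name and both prompts are made up purely for illustration:

      ```python
      # Toy comparison of the two prompting styles described above.
      # Assumes the official OpenAI Python client and an OPENAI_API_KEY in
      # the environment; the model name and both prompts are illustrative.
      from openai import OpenAI

      client = OpenAI()

      prompts = {
          "hostile": "FIX THE DAMN BUG NOW. DO NOT TOUCH ANYTHING ELSE.",
          "calm": (
              "There's a bug in the function below. Please fix it, leave the "
              "rest unchanged, and ask me first if anything is unclear."
          ),
      }

      for label, prompt in prompts.items():
          reply = client.chat.completions.create(
              model="gpt-4o",  # illustrative choice
              messages=[{"role": "user", "content": prompt}],
          )
          print(f"--- {label} ---")
          print(reply.choices[0].message.content)
      ```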

      • bountygiver [any]@lemmy.ml · +10 · 17 days ago

        the LLM also doesn't understand what “not guessing” means. Same energy as putting “make no mistakes” in your prompts
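
        For what it's worth, a rough sketch (same assumptions: OpenAI Python client, illustrative model and prompts) of the more useful version of that instruction, asking the model to surface uncertainty instead of telling it to “make no mistakes”:

        ```python
        # "make no mistakes" is just more tokens to the model; asking it to
        # flag uncertainty explicitly at least names a concrete behavior.
        # Assumes the official OpenAI Python client; prompts and model name
        # are illustrative.
        from openai import OpenAI

        client = OpenAI()

        system = (
            "When you are not confident in an answer, say 'I don't know' and "
            "list what you'd need to find out, instead of guessing."
        )

        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": "Which port does the payments service listen on?"},
            ],
        )
        print(reply.choices[0].message.content)
        ```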

        • pinball_wizard@lemmy.zip · +2 · 17 days ago

          > Same energy as “make no mistakes” in your prompts

          Oh, shit. I should be adding that.

          (I’m joking.)

      • flandish@lemmy.world · +5 · 17 days ago

        exactly. it's on the consumer, not the model “going rogue.” when i use it, it's like a rubber duck or a plain-english rtfm