• RunawayFixer@lemmy.world · 3 points · 7 days ago

    “intentionally using low-quality AI output in their work without fixing it”

    This reads like victim blaming, or scapegoating. The AI company makes shoddy software that outputs faulty results; users produce faulty results when using that software; and now the AI company blames the users for the faulty results. That some (but likely not all) users know the results are faulty doesn’t change the fact that the software itself is faulty.

  • it_depends_man@lemmy.world · 3 points · 7 days ago

    A new report

    BY THE AI COMPANY “WRITER”

    and research firm Workplace Intelligence found a massive portion of workers across the US, UK, and Europe are intentionally trying to sabotage their bosses’ AI initiatives.

    Please don’t spread obviously doctored “reports”.

  • greyscale@lemmy.grey.ooo · 2 points · 7 days ago

    Good.

    It is morally and ethically the right thing to do.

    Also, did you know it is ethically and morally correct to firebomb datacenters? They’re being used for structural violence, and are basically piñatas.

    • Bluegrass_Addict@lemmy.ca · 1 point · 7 days ago

      …workers admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.

      tbh it just reads like people are just using AI, not actually sabotaging it. lol it’s such trash

      • cabbage@piefed.social · 1 point · 7 days ago

        “workers admitted to sabotaging their company’s AI by […] intentionally using low-quality AI output in their work without fixing it”

        Lol. Sounds an awful lot like the company is sabotaging itself in this case.

  • ctry21@sh.itjust.works · 1 point · 7 days ago

    Can’t sabotage what’s already broken. The rare times I’ve been asked to use it for a piece of work, the output was so shit and full of errors that it would have been easier to do it by hand.

  • brsrklf@jlai.lu · 1 point · 7 days ago

    sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools,

    This is counter-productive and can get you in big trouble IMO. I don’t even get what these people are protesting.

    or intentionally using low-quality AI output in their work without fixing it.

    This is better and I think I would totally do this if management forced me to use AI. If they want to pretend using this thing is a better use of my time, I’ll give them what they want.

    Fortunately I’m working for an administration that has had rather tame expectations for gen AI use so far. They’re basically just like, “experiment if you want, be careful, and use what works for you”. So I just keep doing what I always did.

    • T156@lemmy.world · 1 point · 4 days ago

      This is counter-productive and can get you in big trouble IMO. I don’t even get what these are protesting

      It reads like a policy/implementation fault. The workers have been told to use AI, but haven’t been given clear guidance, or are presented with a bad model/interface, so they just hop on Google Bard or something familiar that works better.

      It’s still using AI, so basically the same thing.

    • theunknownmuncher@lemmy.world · 1 point · 7 days ago

      I don’t even get what these are protesting.

      It doesn’t make sense because the protest is an invention.

      or intentionally using low-quality AI output in their work without fixing it.

      Translated: “our software tool works poorly and produces bad output. If workers don’t manually fix that output, then they are InTeNtIoNaLlY sAbOtAgInG our business. Responsibility should fall on the workers to fix our product’s flaws.”