• Ignotum@lemmy.world · 2 days ago

    Oh, are you walking back the “it would be unethical” claim, and the claim that an AI model cannot give nuanced responses like a human can?

    Sounds like you are now saying that a model can be made that is far better than any human expert, but since it can never be perfect, and because people are far less forgiving when machines make mistakes… therefore what, exactly?

    If we could make something that would reduce the absolute amount of yearly mushroom poisonings, then I would view that as an ethically good thing. Not doing so would be like refusing to make a medicine because it can have side effects; if the benefits outweigh the risks, then I view it as a good thing.

    • petrol_sniff_king@lemmy.blahaj.zone · 14 hours ago

      If we could make something that would reduce the absolute amount of yearly mushroom poisonings,

      You are begging the question. That this is possible is not known.

        • petrol_sniff_king@lemmy.blahaj.zone · 9 hours ago

          You’re in here arguing with a dissertation you haven’t read because there might possibly be a chance we could maybe build an AI that could do this?

          If we can’t, then you have nothing to add to this conversation.

    • manualoverride@lemmy.world · 1 day ago

      Can you see the irony of us having a nuanced debate that keeps leading to misunderstanding, because we are using a medium where detail and emphasis are difficult to convey? 😀

      My assumption about the mushroom identification program was that it would become widely available, which is what would make it unethical.

      In the hands of a trained Mycologist using it purely as a check on their established results, it is possibly useful but easy to misuse.

      A Mycologist using the program to perform the identification first, and only then checking its output, is also dangerous, as human factors would lead to confirmation bias.

      AI systems inevitably lead to overconfident conclusions from people without the time or knowledge to recognise the potential risks.