I think AI is neat.

  • empireOfLove2@lemmy.dbzer0.com · 11 months ago

    The reason it’s dangerous is that there are a significant number of jobs and people out there doing exactly that, which can be replaced…

    • Szymon@lemmy.ca · 11 months ago

      People making content should immediately pivot to become the approvers, not the generators.

      • Redacted@lemmy.world · 11 months ago

        Whilst everything you linked is great research that demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.

        This argument always boils down to one’s definition of the word “understanding”. For me that word implies a degree of consciousness, for others, apparently not.

        To quote GPT-4:

        LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.

        • Even_Adder@lemmy.dbzer0.com · 11 months ago

          When people say that the model “understands”, it means just that, not that it is human, and not that it does so exactly as humans do. Judging its capabilities by how closely it mimics humans is pointless, just like judging a boat by how well it can do the breaststroke. The value lies in its performance and output, not in imitating human cognition.

          • Redacted@lemmy.world · 11 months ago

            Understanding is a human concept, so attributing it to an algorithm is strange.

            It can be done by taking a very shallow definition of the word but then we’re just entering a debate about semantics.

              • Redacted@lemmy.world · 11 months ago

                Yes, sorry, I probably shouldn’t have used the word “human”. It’s a concept that we apply to living things that experience the world.

                Animals certainly understand things, but it’s a sliding scale where we use human understanding as the benchmark.

                My point stands, though: to attribute it to an algorithm is strange.

        • KeenFlame@feddit.nu · 11 months ago

          You are moving goalposts.

          “understanding” can be given only when you reach like old age as a human and if you meditated in a cave

          That’s my definition for it

          • Redacted@lemmy.world · 11 months ago

            No one is moving goalposts; there is just a deeper meaning behind the word “understanding” than perhaps you recognise.

            The concept of understanding is poorly defined, which is where the confusion arises, but it is definitely not a direct synonym for pattern matching.