• b34k@lemmy.world · 5 months ago

    The HomePod, on the other hand, is not slated to get the new suite of features, with the company holding off for a new “AI-powered table-top robot.”

    Well that sucks. Siri on HomePod is dumb as hell and really badly needs an update.

    As much as I couldn’t stand Alexa constantly trying to sell me things, I must say she was way better at actually doing what I wanted.

    • brbposting@sh.itjust.works · 5 months ago

      I do see complaints about Siri being dumb. If Apple’s super clever about this, they could hone the experience without subjecting us to the bulk of the usual hallucination/confabulation irks.

      • Petter1@lemm.ee · 5 months ago

        It all boils down to training data and context data. I bet Apple has enough good “anonymous” user data to help train Siri on only relevant data. 🤷🏻‍♀️ I guess we’ll see.

        • bamboo@lemm.ee · 5 months ago

          You don’t need to wonder; Apple has said as much: their AI is built on LLMs, just like everybody else’s. While hallucinations are still a major unsolved problem, that doesn’t mean their frequency and severity can’t be reduced. A ChatGPT-like chatbot is going to hallucinate because you’re asking it to give extremely open-ended responses to literally any query. The more data you feed it in the prompt, and the more you constrain its output, the less likely it is to hallucinate. For that reason, it’ll likely be extremely rare for the grammar-check or rephrasing tools in Apple AI to be affected by hallucinations.

          Siri is more comparable to ChatGPT with regard to open-ended questions, but Apple will likely integrate LLMs primarily for transforming inputs and outputs rather than for the whole process. For example, the LLM could be prompted to call a function based on the user’s query. That function then finds a reliable result, either using existing APIs for real-time information like weather, or using another LLM paired with a search engine. The output from this truth-finding process is then fed back into an LLM to generate the final response. The role of the LLM is heavily constrained at every step of the way, which is known to minimize hallucinations.

          Arguing that this is an unsolvable problem is defeatist and doesn’t help actually mitigate the real issue.
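          The constrained pipeline described above can be sketched roughly as follows. Every name here is a hypothetical stand-in, and the two LLM steps are stubbed out rather than calling a real model:

```python
# Rough sketch of the constrained pipeline: the LLM only (1) picks a tool
# from a fixed list and (3) rephrases facts it is handed; the facts
# themselves come from a deterministic lookup in step (2).

def pick_tool(query: str) -> str:
    """Stand-in for an LLM prompted to emit one tool name from a fixed list."""
    return "weather" if "weather" in query.lower() else "search"

def weather_lookup(query: str) -> str:
    """Deterministic truth source; in production, a real weather API."""
    return "12 °C, light rain"  # canned value for this sketch

def search_lookup(query: str) -> str:
    """Deterministic truth source; in production, a search backend."""
    return "top search result snippet"

TOOLS = {"weather": weather_lookup, "search": search_lookup}

def answer(query: str) -> str:
    tool = pick_tool(query)        # step 1: constrained routing choice
    facts = TOOLS[tool](query)     # step 2: deterministic truth-finding
    # step 3: stand-in for an LLM constrained to restate `facts`, not invent them
    return f"Answer to {query!r}: {facts}"
```

          The key design point is that the model never states facts on its own; it only picks a tool from a fixed list and rephrases whatever the tool returns.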

    • Nogami@lemmy.world · 5 months ago (edited)

      Speak for yourself.

      I’m super excited to have actual real-language conversations with my devices. It’s been science fiction for so long; now it’s going to be science fact.

      Don’t bother replying to me. You’re toxic and I’m blocking you after this reply.

      • Fades@lemmy.world · 5 months ago

        To add to your point, AI still provides a lot of utility potential outside of art and all that cheap stuff. As a developer, I’ve used it in my work to help speed up troubleshooting or repetitive tasks.

        I’m excited to see what you describe implemented in video games, instead of characters being locked to the same n lines the devs wrote.

        • Nogami@lemmy.world · 5 months ago

          Exactly. Can you imagine something like WoW if all of the NPCs and some quests were AI-based?

          Every time you play, it would be like playing in a real world with real inhabitants, and it could dynamically adjust the difficulty depending on how well you play, so you always feel challenged but successful.
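          The difficulty adjustment hinted at here could be as simple as nudging an enemy-strength multiplier toward a target player win rate. A minimal sketch, with entirely made-up numbers:

```python
# Toy dynamic-difficulty loop: raise an enemy-strength multiplier when the
# player wins too often, lower it when they struggle, and clamp the result.

def adjust_difficulty(multiplier: float, recent_win_rate: float,
                      target: float = 0.6, step: float = 0.1) -> float:
    """Nudge `multiplier` so the player's win rate drifts toward `target`."""
    if recent_win_rate > target:
        multiplier += step   # player cruising: make enemies stronger
    elif recent_win_rate < target:
        multiplier -= step   # player struggling: ease off
    # clamp to a sane range so difficulty never runs away
    return max(0.5, min(2.0, multiplier))
```

          A real game would use a richer performance signal than raw win rate, but the feedback loop is the same idea.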

    • LostWanderer@lemmynsfw.com · 5 months ago

      Only the techbro shills, the executive class, and a few people who don’t know or care how the sausage is made want AI bullshit. I want this AI bubble to not only burst, but also burn every techbro and make them shut up. Real AI is going to be a multidisciplinary, multigenerational project that will take a lot of time and research to reify. This gassed-up LLM bullshit is not it!

      • fer0n@lemmy.world · 5 months ago

        Think what you will of it; compared to crypto, there’s actual value that people are getting from LLMs, even with their current shortcomings.

        • LostWanderer@lemmynsfw.com · 5 months ago (edited)

          If techbros weren’t trying to conflate LLMs with AI, I’d mostly have no issue with them. I still have concerns about the lack of security-first design and the dubious means being used to gather data. LLMs are a useful spelling- and grammar-checking tool, so there are practical uses for them.

          I wouldn’t yet use them for any other purpose until truly ethical practices are implemented and the weird generative quirks are smoothed out.

          • fer0n@lemmy.world · 5 months ago

            There are definitely tons of issues surrounding LLMs: how they’re created, what they spit out, and the impact they have. But there are also use cases that go beyond grammar checking, imo; everyone can figure that out for themselves.

            I’ve mostly given up on complaining about “AI” as a description; that’s just what we’re calling it now, even though it never made any sense.

            • LostWanderer@lemmynsfw.com · 5 months ago

              As LLMs haven’t been able to pass a Turing test, I can’t quite let the ‘AI’ descriptor go unchallenged. It doesn’t sit right with me; I’m not comfortable with the intentional dishonesty of conflating LLMs with AI. However, I do understand your exhaustion with trying to correct that mistake.

              As for usage beyond spellchecking and grammar checking: it’s up to the individual whether they want to make more use of it despite the ethical, privacy, and security concerns surrounding LLMs. I made my choice because of this, even if it occasionally makes me uneasy. Making a decision with all the information is far better than falling for the hype currently surrounding LLMs.