• exposable_preview@slrpnk.net · 4 days ago

    This is a very rare use case, but one where I definitely found them useful. Similar to another answer mentioning reverse-dictionary lookup, I use LLMs for reverse song/movie lookup: I describe what I know about the song or movie (or whatever else; it could be many things), and it gives me a list of names that I can then check manually or recognize outright.

    This is useful for me because I tend not to remember names, artists, actor names, and so on.

  • aceshigh@lemmy.world · 4 days ago

    It’s helping me understand how I think so that I can create frameworks for learning, problem solving, decision making etc. I’m neurodivergent.

  • Psythik@lemm.ee · 4 days ago

    As a DJ with ADHD, it’s great for helping me decide what to play next when I forget where I was going with the set, and mix myself into a corner. That said, it’s not very good at suggesting songs with a compatible BPM and key, but it works well enough for finding tunes with a similar vibe to what I’m already playing. So I just go down the list until I find a tune that can be mixed in.

    As for the usual boring stuff, I’m learning how to code by having it write programs for me, and then analyzing the code and trying to figure out how it works. I’m learning a lot more than I would from studying a textbook.

    I also used to use it for therapy, but not so much anymore since I figured out that it will just tell you what you want to hear if you challenge it enough. Not really useful for personal growth.

    One thing it’s useful for is learning how stuff works, using metaphors comparing it to subjects I already understand.

  • jsomae@lemmy.ml · 4 days ago

    Very effective at translating between different (human) languages. It’s best if you can find a native speaker to double-check the output; failing that, reverse-translate with a couple of different models to verify the meaning is preserved. Even this sometimes fails, though: two words with similar but subtly different definitions might trip you up. For instance, I’m told “the west” refers to different regions in English and Japanese, but translating and reverse-translating didn’t reveal this error.
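
    A minimal sketch of that round-trip check, assuming a local Ollama instance; the endpoint is Ollama’s default, and the model names are placeholders for whatever you actually have pulled:

        import requests

        OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

        def translate(text, model, source, target):
            # Ask one model to translate; temperature 0 keeps the output reproducible.
            prompt = (f"Translate the following {source} text into {target}. "
                      f"Reply with only the translation:\n\n{text}")
            resp = requests.post(OLLAMA_URL, json={
                "model": model,
                "prompt": prompt,
                "stream": False,
                "options": {"temperature": 0},
            })
            resp.raise_for_status()
            return resp.json()["response"].strip()

        original = "I grew up in the west."
        forward = translate(original, "llama3", "English", "Japanese")
        # Use a different model for the return trip so one model's quirks don't cancel out.
        backward = translate(forward, "mistral", "Japanese", "English")

        print(forward)
        print(backward)  # compare against the original by hand; subtle shifts can still slip through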

  • AndrasKrigare@beehaw.org · 5 days ago

    I’ve used them both a good bit for D&D/TTRPG campaigns. The image generation has been great for making NPC portraits and custom magic-item images. LLMs have been pretty handy for practicing my DMing and improv: I ask one to act like a player and then react to whatever it decides to do, and sometimes the reverse, asking it to pitch interesting ideas for characters, dungeons, or quest lines. I rarely took those in their entirety, but there were often bits and pieces I’d use.

  • lenz@lemmy.ml · 5 days ago

    Good for gaining an outside perspective or insight on an argument, discussion, or other exchange between people. I fed it my friend’s text conversation with their ex (with permission), and it was able to point out emotional manipulation in the texts when asked about it neutrally:

    Please analyze this conversation between A and B and tell me what you think of their motivations and character in this conversation. Is there gaslighting? Emotional manipulation? Signs of an abusive communication style? Etc. Or is this an example of a healthy communication?

    It is essential not to ask a leading question that frames either A or B as the good guy or the bad guy. For best results, ask neutral questions.

    It would have been quite useful for my friend to have this when they were in that relationship. It may be able to spot abusive behaviors from your partner before you and your rose-colored glasses can.

    Obvious disclaimers about believing anything it says are obvious. But having an outside perspective analyze your own behavior is useful.

  • MrBobs@lemmy.one · 5 days ago

    With mixed results, I’ve used it to summarise the plots of books when I’m about to go back into a series I haven’t read for a while.

      • Cousin Mose@lemmy.hogru.ch · 5 days ago

        I can’t be too specific without giving away my location, but I’ve recreated a sauce from a vegan restaurant I used to go to that sold out to a meat-based chain (and no longer makes the sauce).

        The second recipe was the seasoning used by a restaurant from my home state. In this case the AI was rather stupid: its first stab completely sucked, and when I told it so, it said something along the lines of “well, employees say it has these [totally different] ingredients.”

  • grue@lemmy.world · 5 days ago

    One day I’m going to get around to hooking a local smart speaker up to Home Assistant with Ollama running locally on my server. Ideally, I’ll train the text-to-speech on Majel Barrett’s voice and be able to talk to my house like the computer in Star Trek.
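
    Not the Home Assistant wiring itself, but a minimal sketch of the kind of request such a setup would end up sending to a local Ollama server; the endpoint is Ollama’s default, and the model name and system prompt are placeholders:

        import requests

        OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

        def ask_house(question):
            # Send one chat turn to the local model and return its reply as plain text.
            resp = requests.post(OLLAMA_URL, json={
                "model": "llama3",
                "messages": [
                    {"role": "system", "content": "You are the ship's computer. Answer briefly."},
                    {"role": "user", "content": question},
                ],
                "stream": False,
            })
            resp.raise_for_status()
            return resp.json()["message"]["content"]

        print(ask_house("Computer, what's the temperature in the living room?"))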

  • jbaber@lemmy.sdf.org · 6 days ago

    Great for giving incantations for ffmpeg, ImageMagick, and other power tools.

    “Use ffmpeg to get a thumbnail of the fifth second of a video.”

    It’s great for anything where the syntax is complicated, lots of half-baked tutorials exist for the AI to have read, and you can immediately confirm whether it worked. It does hallucinate flags, but it fixes them if you say “There is no --compress flag”, etc.
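
    For reference, one working incantation for the thumbnail example above, run here through Python’s subprocess so it’s easy to verify; it assumes ffmpeg is installed and a file named input.mp4 exists:

        import subprocess

        # -ss 5 seeks to the 5-second mark; -frames:v 1 writes exactly one frame as the thumbnail.
        subprocess.run(
            ["ffmpeg", "-ss", "5", "-i", "input.mp4", "-frames:v", "1", "thumbnail.jpg"],
            check=True,
        )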

  • hperrin@lemmy.ca · 6 days ago

    Legitimately, no. I tried to use it to write code, and the code it wrote was dog shit. I tried to use it to write an article, and the article it wrote was dog shit. I tried to use it to generate a logo, and the logo it generated was both dog shit and a raster graphic, so I wouldn’t even have been able to use it.

    It’s good at answering some simple things, but sometimes even gets that wrong. It’s like an extremely confident but undeniably stupid friend.

    Oh, actually it did do something right. I asked it to help flesh out an idea and turn it into an outline, and it was pretty good at that. So I guess for going from idea to outline and maybe outline to first draft, it’s ok.

    • Bob Robertson IX @discuss.tchncs.de · 4 days ago

      The output is only as good as the model being used. If you want to write code, use a model designed for code. Over the weekend I wrote an Android app so I could connect my phone to my Ollama instance from outside my network. I’ve never done any coding beyond scripts, and the AI walked me through setting up the IDE and a git repository before we even got started on the code. Three hours after I had the idea, the app was installed and working on my phone.

      • hperrin@lemmy.ca · 4 days ago

        I didn’t say the code didn’t work. I said it was dog shit. Dog shit code can still work, but it will have problems. What it produced looks like an intern wrote it. Nothing against interns, they’re just not gonna be able to write production quality code.

        It’s also really unsettling to ask it about my own libraries and have it answer questions about them. It was trained on my code, and I just feel disgusted about that. Like, whatever, they’re not breaking the rules of the license, but it’s still disconcerting to know that they could plagiarize a bunch of my code if someone asked the right prompt.

        (And for anyone thinking it, yes, I see the joke about how it was my bad code that it trained on. Funny enough, some of the code I know was in its training data is code I wrote when I was 19, and yeah, it is bad code.)

  • ocean@lemmy.selfhostcat.com · 6 days ago

    ChatGPT kind of sucks but is really fast. DeepSeek takes a second but gives really good or hilarious answers; it’s actually good at humor in both English and Chinese. Love that it’s actually FOSS, too.

  • KubrickFR@lemmy.world · 6 days ago

    I use it for book/movie/music/game recommendations (at least while it isn’t used for ads…). You can ask for an artist similar to X, or a short movie in genre X. The more demanding you are, the better, e.g. “a funny sci-fi book in the YA genre with a zero-to-hero plot”.

  • Jabril [none/use name]@hexbear.net · 6 days ago

    The image-generator-to-3D-model-to-animation pipeline isn’t too bad. If you’re not a great visual artist, 3D modeler, or animator, you can get pretty decent results on your own that would normally take a team of several people dozens of hours after years of training.