A specific example from me would be implementing LLM AI into my code (generically), and without more details than that I’ll get people demanding that I not do that and offering suggestions for what I should do instead.

Suggestions are cool, but when I ask, in a generic sense, why I shouldn’t put an LLM in my code, my question gets ignored or I get lies and insults hurled my way.

It’s cool if you want to answer that question, but I’m mostly curious about other people’s similar stories of getting resistance to follow-up questions: did you decide those people weren’t worth it, or do you feel like you missed something you shouldn’t have in those situations?

  • That_Devil_Girl@lemmy.ml · 1 day ago

    Yup, I work as a shipwright welder and have had to refuse to complete assigned tasks. When I’m tasked with welding two large steel plates together, end to end, they both need to be double beveled. If they’re not, then all I can do is make thin surface welds which are easily broken.

    That’s dangerous as these steel plates are an inch and a half thick and weigh a lot. Their weight alone will break surface welds. So I refuse to do the job. They ask why I refuse and I tell them about the lack of double bevel.

    I’m even willing to break out the oxy/acetylene torch and cut the bevels myself, but they refuse. They’re in a hurry, they don’t have time to do things correctly or safely, and they don’t care about making it someone else’s problem. That’s the sort of shit that’s likely to cause serious injury or death.

  • Sir_Kevin@lemmy.dbzer0.com · 1 day ago

    Yes, many times. Often it’s because they don’t understand what it is I’m trying to do. More often than not, they make wild assumptions about what I’m suggesting and then lose their fucking minds instead of asking for clarification. Ultimately it becomes an argument about what they think I’m talking about and I never get an actual answer.

    • PixelPilgrim@lemmings.worldOP · 1 day ago

      I’m trying to think if that has happened to me, but I try to keep it simple, like “I don’t understand what you mean by X; are you saying Y?” That probably gets me a different set of interactions. One time I even tried humbling myself and just said “I’m still learning all of this and trying to figure out my mistakes…” and I still got berated (by strangers). I try to power through the insults and just ask what they mean, and it still comes off like I’m being offensive.

  • j4k3@lemmy.world · 1 day ago

    When tech changes quickly, some people always resist, exponentially, in the opposite direction. The bigger and more sudden the disruption, the bigger the pushback.

    If you read some of Karl Marx’s stuff, it was the fear of the machines. Humans always make up a mythos of divine origin; even present-day atheists do it. Almost all of the stories about AI are much the same stories of god machines that Marx was fearful of. There are many reasons why. Lemmy has several squeaky-wheel users on this front; unfortunately, it is not a very good platform for sharing stuff about AI.

    There are many reasons why AI is not a super effective solution and is overused in many applications. Exploring uses and applications is the smart thing to be doing in the present. I play with it daily, but I will gatekeep over the use of any cloud-based service. The information that can be gleaned from any interaction with an AI prompt is exponentially greater than any datamining stalkerware that existed prior. The real depth of this privacy-invasive potential is only possible with a large number of individual interactions. So I expect all applications to interact with my self-hosted OpenAI-compatible server.
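    As a rough sketch of what that means in practice (the host, port, and model name below are placeholders I made up, not anything from a real deployment): an app that speaks the OpenAI-compatible `/v1/chat/completions` wire format can point at a self-hosted box just by swapping the base URL.

```python
import json

def build_chat_request(base_url, model, messages, temperature=0.7):
    """Build a request for any OpenAI-compatible /v1/chat/completions
    endpoint -- cloud or self-hosted, the wire format is the same."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {"model": model, "messages": messages, "temperature": temperature}
    return url, json.dumps(payload)

# Placeholder values: a local Oobabooga-style server and a made-up model name.
url, body = build_chat_request(
    "http://localhost:5000",
    "local-coder",
    [{"role": "user", "content": "Summarize this diff."}],
)
print(url)  # http://localhost:5000/v1/chat/completions
```

    The point is that nothing in the request ties you to a cloud provider; the prompts never have to leave your machine.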

    The real frontier is in agentic workflows and developing effective, niche-focused momentum. Adding AI to general-use stuff is massively overdone.

    Also, people tend to make assumptions about code as if all devs are equally skilled or capable. In some sense I am a dev, but not really. I’m more of a script kiddie that dabbles in assembly at times. I use AI more like Stack Exchange, to good effect.

    • PixelPilgrim@lemmings.worldOP · 1 day ago

      Yeah, I see what Marx said:

      [doing away] with all repose, all fixity and all security as far as the worker’s life-situation is concerned; how it constantly threatens, by taking away the instruments of labour, to snatch from his hands the means of subsistence, and, by suppressing his specialised function, to make him superfluous

      But that’s just Marx saying industrialization is threatening the working class. I’m not seeing much myth, just dry explanations of the workers’ relation to machinery.

      I do use Perplexity and ChatGPT to code a lot. I’d really rather not go to Stack Overflow and try to understand three posts and piece together an implementation. I’m fine with that being automated.

      • j4k3@lemmy.world · 1 day ago

        I use the term myth loosely, in abstraction. Generalization of the tools of industry is still a mythos in an abstract sense: someone with a new lathe they bought to bore the journals of an engine block has absolutely no connection or intentions related to class, workers, or society. That abstraction, the assignment of meaning to a category or entity or class, is simply the evolution of a divine mythos in the more complex humans of today.

        Stories about Skynet or The Matrix are about a similar struggle of the human class against machine gods. These have no relationship to the actual AI alignment problem; they are instead a battle with more literal machine gods. The point is that the new thing is always the bogeyman. Evolution must be deeply conservative most of the time, and people display a similar trajectory of conservative aversion to change. In this light, the reasons for such resistance are largely irrelevant: it is a big change, and it will certainly get a lot of pushback from conservative elements that collectively ensure change is not harmful. Those elements get cut off in the long term as the change propagates.

        You need a 16 GB or better GPU from the 30-series or newer, then run Oobabooga’s text gen with the API enabled and an 8×7B, 34B, or 70B coder in a GGUF-quantized model. Those are larger than most machines can run, but Oobabooga can pull it off by splitting the model between CPU and GPU. You’ll just need the RAM to initially load the thing, or DeepSpeed to load it from NVMe.

        Use a model with a long context and add a bunch of your chats into the prompt. Then ask for your user profile, and start asking it questions about you that seem unrelated to any of your previous conversations in the context. You might be surprised by the results. Inference works in both directions: you’re giving a lot of information that is specifically related to the ongoing interchanges and language choices. If you instead add a bunch of your social media posts, what the model makes up about you in a user profile is totally different. There is information of some sort that the model is capable of deciphering. It is not absolute, or some kind of conspiracy or trained behavior (I think), but the accuracy seemed uncanny to me. It spat out surprising information across multiple unrelated sessions when I tried it a year ago.
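        A minimal sketch of how that experiment can be set up (the prompt wording, role structure, and example chats here are my own guesses, not a fixed recipe): pack prior conversations into one long-context prompt, then ask the model to infer a profile from style and topic choice alone.

```python
def profile_prompt(chat_logs, question=None):
    """Pack prior conversations into one long-context prompt, then ask
    the model to infer a user profile from the writing alone."""
    question = question or ("Based only on the conversations above, "
                            "write a profile of this user: likely "
                            "profession, interests, and location.")
    context = "\n\n".join(
        f"[conversation {i}]\n{log}" for i, log in enumerate(chat_logs, 1)
    )
    return [
        {"role": "system",
         "content": "You infer facts about users from their writing."},
        {"role": "user", "content": f"{context}\n\n{question}"},
    ]

# Hypothetical chat history; the model never sees explicit facts about you.
msgs = profile_prompt([
    "how do i double-bevel 1.5 inch steel plate before welding?",
    "best GGUF quant to fit a 34b coder on a 16 GB GPU?",
])
```

        Anything the model produces from a prompt like this is pure inference from phrasing and topic choices, which is exactly why the privacy exposure is bigger than it looks.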

        • PixelPilgrim@lemmings.worldOP · 1 day ago

          I actually didn’t pursue an LLM AI project because the suggested model needed like 32 GB of RAM (I don’t have that, and I don’t want to buy a machine just for that project).

          I jokingly call LLM AI “dubious linear algebra.” I do try to see the argument against it: I sided with the Writers Guild in the strike, and I can sympathize with writers whose work an AI was trained on taking their jobs, so they lose income and work they want. But I’m a socialist, so I believe the economy should provide them housing and food without their having to work; they shouldn’t need to rely on writing gigs to survive.

  • rbn@sopuli.xyz · 1 day ago (edited)

    Regarding your specific example, there are pretty good reasons not to use AI if there’s an adequate alternative, so I can absolutely understand people arguing against that.

    AI is resource-intensive and thus bad for the environment. Results usually aren’t deterministic, so the behavior is no longer reproducible. If there is a defined algorithm that solves the problem correctly, AI will be less accurate. And if you use cloud services, you may run into privacy issues.
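    A toy illustration of the reproducibility point (plain Python, nothing LLM-specific, and the option strings are made up): sampling-based generation only repeats itself when the random seed is pinned, and most deployed LLM setups run with nonzero temperature and no pinned seed.

```python
import random

def sample_reply(options, seed=None):
    """Pick a reply by sampling. Reproducible only when the seed is
    pinned; with seed=None, repeated calls may disagree."""
    return random.Random(seed).choice(options)

options = ["approve", "reject", "escalate"]

# Pinned seed: the same input always yields the same output.
assert sample_reply(options, seed=7) == sample_reply(options, seed=7)

# Unpinned (seed=None): two calls can return different answers, so the
# system's behavior is no longer reproducible from its inputs alone.
```

    A deterministic algorithm, by contrast, is an exact function of its inputs, which is what makes it testable and debuggable.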

    Not saying there aren’t any use cases for LLMs or other forms of AI. But just applying it everywhere ’cause it’s fancy is not a good idea.

    In general, I appreciate it when people question my work or come up with proposals for improvement, as long as they’re polite and at least somewhat qualified. However, that does not mean I change my mind immediately and follow their advice.

    • PixelPilgrim@lemmings.worldOP · 1 day ago

      Yeah, purely as a matter of reason: if you have a better way of doing something with no drawbacks, you should do that.

      Thinking about deterministic results: I can imagine flawed code that deterministically gives a wrong result for 1 of its thousands of potential outputs, and you can decide that the one wrong answer is either (a) not a big enough flaw to fix (the code is good enough) or (b) not worth fixing because it’s rare (too much effort). How that applies to an LLM is that you can look at what the LLM outputs and judge whether its execution is good enough or not.
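      To make that concrete with a toy example (the function and its planted bug are entirely hypothetical): a deterministic function can be wrong on exactly one input out of thousands, and whether that rare, reproducible failure is worth fixing is the same good-enough judgment call.

```python
def int_sqrt(n):
    """Integer square root by binary search, with one deliberately
    planted bug: a single input is deterministically wrong."""
    if n == 4096:       # the lone bad case: always returns 63, never 64
        return 63
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Deterministic but flawed: the same wrong answer on every single run.
print(int_sqrt(100))   # 10 (correct)
print(int_sqrt(4096))  # 63 (wrong, reproducibly)
```

      The difference with an LLM is that you can’t enumerate which inputs are the bad ones; you can only sample outputs and estimate.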

      Using a lot of resources at the cost of the environment is more of a values thing. Cyanobacteria didn’t care about poisoning the environment with oxygen. Ironically, I don’t think the electric grid should be restructured for AI, since I don’t think AI is doing anything important enough to warrant changing the grid.

      I would care if someone was rude, or unqualified on an issue. I’d want to know why something I did was wrong, either technically or morally, or whether there’s a better way of doing it and why it’s better.

      • crusa187@lemmy.ml · 10 hours ago

        I would care if someone was rude or unqualified on an issue

        Would you? Your tone reads as fairly rude in this post, and your qualifications seem quite lacking if you don’t even comprehend the dire environmental impact and obvious drawbacks of most contemporary big-compute AI. For that matter, most LLM outputs are not deterministic, especially with certain configurations (e.g., high temperature), so I don’t even follow your contrived example here. Consider that cyanobacteria are unaware of their environmental impact; humans are not so ignorant, unless they choose to be.