• 0 Posts
  • 101 Comments
Joined 2 years ago
Cake day: September 25th, 2023


  • extremely not tech-savvy

    You managed to make an account and post on Lemmy, so you’re probably underestimating your technical knowledge. That being said, IMHO it’s best to first list the software you use, then find alternatives that work on Linux. Once that’s done, then yes, sure, try whatever distribution you want.


  • If you know the right tool for the task, very few things take much time. IMHO what’s more problematic is that with enshittification you’re swimming upstream. Sure, as long as the maintainer finds the right trick you can postpone bad “surprises” indefinitely, but ultimately, why do so when proper alternatives more aligned with your worldview exist?




  • utopiah@lemmy.world to memes@lemmy.world · Nice one · 15 days ago

    Corpospeak […] Like a sociopath.

    And this is why LLMs are so well suited for the task! People get genuinely excited by the prospect of using AI to read and reply to email… because they don’t mean actual thoughtful email, written with intent, maybe even with emotion or reasoning. No… no, they mean corpospeak that is entirely pointless, empty of meaning, and definitely not written by a human for a human, but rather by a cog for another lifeless cog in the corporation.

    This is why people are investing tons of money and emitting tons of CO2.

    What a fucking farce of a species we are.



  • Skimmed through the article and found it surprisingly difficult to pinpoint what “AI” solution they actually covered, despite going as far as opening the supplementary data of the research they mentioned. Maybe I’m missing something obvious, so please do share.

    AFAICT they are talking about using computer vision techniques to highlight potential problems, presented alongside the non-annotated image.

    This… is great! But I’d argue this is NOT what “AI” is hyped about at the moment (a generic sketch of the idea follows below). What I mean is that computer vision and statistics have been used with great success, in medicine and elsewhere, and I don’t see why they wouldn’t keep being applied. Rather, I would argue the current AI hype is about LLMs and generative AI, and AFAICT (though again, I had a hard time parsing this paper for anything specific) none of that is being used here.
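
    To make the distinction concrete, here is a generic sketch of that kind of technique, NOT the paper’s actual method (which the article never specifies): classic, non-learned computer vision flags bright regions on a scan and shows them next to the untouched original. The filename and threshold are placeholders.

    ```python
    # Generic sketch only, NOT the paper's method: flag high-intensity
    # regions with classic computer vision and present them alongside
    # the non-annotated image. Filename and threshold are placeholders.
    import cv2

    scan = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    _, mask = cv2.threshold(scan, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    highlighted = cv2.cvtColor(scan, cv2.COLOR_GRAY2BGR)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(highlighted, (x, y), (x + w, y + h), (0, 0, 255), 2)

    # Show the highlighted version next to the untouched original.
    original = cv2.cvtColor(scan, cv2.COLOR_GRAY2BGR)
    cv2.imwrite("side_by_side.png", cv2.hconcat([original, highlighted]))
    ```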

    FWIW I did specify in my post that my criticism was about “modern” AI, not AI as a field in general.


  • There are pretty great applications in medicine.

    Like what? I discussed this just two days ago with a friend who works in public healthcare and is bullish about AI, and the best he could come up with was DeepMind’s AlphaFold, which is yes interesting, even important, and yet in a way “good old-fashioned AI” as has been the case for the last half century or so: a team of dedicated researchers, actual humans, focusing on a hard problem and throwing state-of-the-art algorithms and some compute resources at it… but AFAICT there is no significant medical research that made a significant change through “modern” AI like LLMs.


  • Can’t believe I’m doing this… but here I go, actually defending cryptocurrency/blockchain:

    … so yes, there are some functionalities to AI. In fact I don’t think anybody is saying 100% of it is BS and a scam, rather… just that 99.99% of the marketing claims of the last decade ARE overhyped if not plain false. One could say the same for crypto/blockchain, namely that SQLite or a random DB is enough for most people BUT there are SOME cases where it might actually be somehow useful, ideally not hijacked by “entrepreneurs” (namely VC tools) who only care about making money, not about what the technology could actually bring.

    Now anyway, both AI & crypto use an inconceivable amount of resources (energy, water, GPUs and dedicated hardware, real estate, R&D top talent, human labor for dataset annotation, including some very VERY gruesome tasks, etc.), so even if in 0.01% of cases they are actually useful, one still must ask: is it worth it? Is it OK to literally burn tons of CO2eq… to generate an image that one could have made quite easily another way? To summarize a text?

    IMHO both AI & crypto are not entirely useless in theory, yet in practice they have been:

    • hijacked by VCs and grifters of all kinds,
    • abused by pretty terrible people, including scammers and spammers,
    • absolutely underestimated in terms of resource consumption and thus ecological and societal impact

    So… sure, go generate some “stuff” if you want to but please be mindful of what it genuinely costs.




  • I’m playing games at home. I’m running models at home (I linked to them in other similar answers) for benchmarking.

    My point is that models are just like anything else I bring into my home: I try to only buy products that are manufactured properly. Someone else in this thread asked me about child labor in electronics, and IMHO that was actually a good analogy. You mention buying a microwave here, and that’s another good example.

    Yes, if we do want to establish feedback in the supply chain, we must know how everything we rely on is made. It’s that simple.

    There are already quite a few initiatives for that, e.g. Fair Trade Certification for coffee, ISO 14001, Fair Materials in electronics, etc.

    The point being that there are already feedback mechanisms in other fields, and in ML there are already model cards with a co2_eq_emissions field, so why couldn’t feedback also work in this field?
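
    For instance, here is a minimal sketch of reading that field with the huggingface_hub library. Assumptions flagged loudly: the repo id below is hypothetical, and the field is optional, so many cards simply don’t declare it.

    ```python
    # Minimal sketch: read the co2_eq_emissions field from a model card.
    # The repo id is hypothetical; the field is optional metadata.
    from huggingface_hub import ModelCard

    card = ModelCard.load("some-org/some-model")  # hypothetical repo id
    emissions = card.data.to_dict().get("co2_eq_emissions")

    if emissions is not None:
        # Typically a mapping, e.g. {"emissions": ..., "source": ..., ...}
        print("Declared training emissions:", emissions)
    else:
        print("This model card does not declare its training emissions.")
    ```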





  • utopiah@lemmy.world to Lemmy Shitpost@lemmy.world · AI Training Slop · edited · 1 month ago

    Yes indeed, yet my point is that we keep on training models TODAY, so if we keep on not caring, we just push the same problem into the future, cf https://lemmy.world/post/30563785/17400518

    Basically yes, use already-trained models today if you want, but if we don’t set a trend, then despite the undeniable ecological impact there will be no corrective measure.

    It’s not enough to just say “Oh well, it used a ton of energy. We MUST use it now.”

    Anyway, my overall point was that training takes a ton of energy. I’m not asking you or OP or anyone else NOT to use such models. I’m solely pointing out that doing so without understanding the process that led to such models, including but not limited to the energy used for training, is naive at best.

    Edit: it’s also important to point out alternatives that are not generative models, namely the plenty of specialized tools that are already MORE efficient AND accurate today, as the toy example below illustrates. So even if a model took a ton of energy to train, in such cases it’s still not rational to use it; the training energy is a sunk cost.
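
    A toy illustration of such a specialized tool, with a made-up task and input string: a one-line regular expression extracts dates deterministically, with no GPU and no model involved.

    ```python
    import re

    # Made-up input for the example: pull ISO dates out of a log line.
    LINE = "job=42 started=2023-09-25 finished=2023-09-26 status=ok"
    ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

    print(ISO_DATE.findall(LINE))  # ['2023-09-25', '2023-09-26']
    ```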


  • utopiah@lemmy.world to Lemmy Shitpost@lemmy.world · AI Training Slop · 1 month ago

    Indeed, the argument is mostly about future usage and future models. The overall point being that assuming training costs are negligible is either naive or shows that one does not care much about the environment.

    From a business perspective, if I’m Microsoft or OpenAI and I see a trend toward models that minimize training costs, or even see users avoiding costly-to-train models, I will adapt to it. On the other hand, if I see that nobody cares, or even that building more data centers drives the value up, I will build bigger models regardless of usage or energy cost.

    The point is that training is expensive, and pointing only to inference is like the Titanic going full speed ahead toward the iceberg while remarking how small the tip is. It is not small.
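
    As a back-of-envelope sketch of why the inference-only framing misleads, amortize the one-off training energy over the queries actually served. ALL numbers below are illustrative placeholders, not measurements.

    ```python
    # Back-of-envelope: amortize one-off training energy over queries.
    # ALL numbers are illustrative placeholders, not measurements.
    TRAINING_KWH = 1_000_000       # assumed one-off training energy
    KWH_PER_QUERY = 0.003          # assumed marginal energy per inference
    QUERIES_SERVED = 100_000_000   # assumed lifetime query volume

    amortized = TRAINING_KWH / QUERIES_SERVED + KWH_PER_QUERY
    print(f"Inference-only view:      {KWH_PER_QUERY:.6f} kWh/query")
    print(f"Including training share: {amortized:.6f} kWh/query")
    # With these placeholders the training share (0.01 kWh/query) dwarfs
    # the marginal cost; only an enormous query volume shrinks it away.
    ```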


  • utopiah@lemmy.world to Lemmy Shitpost@lemmy.world · AI Training Slop · 1 month ago

    Right, that’s exactly my point though: OP, by having just downloaded it, might not realize the training costs. They might be low, but on average they are quite high, at least relative to fine-tuning or inference. So my question was precisely to highlight that running a model locally while not knowing its training cost is naive, ecologically speaking. They did clarify that they do not care, so that’s coherent for them.

    I’m insisting on this point because others might think “Oh… I can run a model locally, then it’s not <<evil>>”, so I’m trying to clarify (and please let me know if I’m wrong) that local inference is good for privacy, but the upfront training costs are not insignificant and might lead some people to prefer NOT relying on very costly-to-train models and to prefer others, or even a totally different solution.