Many people claim AI can help us solve climate change, so I decided to ask Google Gemini.

It regurgitated the same points climate advocates have been making for over 40 years:

  1. Transition to Renewable Energy
  2. Reduce Greenhouse Gas Emissions
  3. Sustainable Agriculture and Land Use
  4. Climate-Resilient Cities and Infrastructure: Design cities to be more walkable, bikeable, and transit-oriented
  5. International Cooperation and Policy

So there we have it, folks.

If you’ve been waiting for an LLM to give you the list of things we need to do to solve climate change, then you now have the answer as regurgitated by an AI.

Now let’s get on with it.

#AI #ArtificialIntelligence #ChatGPT #ClimateChange #ClimateCrisis @fuck_cars

  • Onno (VK6FLAB)@lemmy.radio
    2 months ago

    The problem is that ChatGPT is not capable of original ideas. When you see AI, you (and the bulk of the population) think Artificial Intelligence, but what you should be thinking is Assumed Intelligence.

    If you open up a mobile phone keyboard and repeatedly tap the next suggested word, you’re doing exactly what a large language model like ChatGPT does, just much slower and with a tiny dataset.

    And just like an autopredict keyboard can spout nonsense, so can ChatGPT. It’s euphemistically called hallucinations, but really it’s just grammatically correct gibberish.
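The keyboard analogy above can be sketched in a few lines: build a table of which word follows which, then "tap the suggestion" repeatedly. This is a hypothetical toy (a bigram predictor, not a real LLM), but it shows how purely statistical next-word prediction produces grammatical-sounding loops of gibberish.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the keyboard's (or LLM's) training data.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autopredict(word, steps=6):
    """Repeatedly 'tap' the most likely next word, as on a phone keyboard."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autopredict("the"))  # fluent-looking, meaning-free repetition
```

Each word is locally plausible given the one before it, yet the output as a whole says nothing true: the same failure mode, at miniature scale, as an LLM hallucination.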

    • AJ Sadauskas@social.vivaldi.netOP
      2 months ago

      @vk6flab In this case, the answers it’s serving up are the statistically most probable sequence of words based on what climate scientists, energy experts, architects, urban planners, meteorologists, geologists, physicists, engineers, chemists, and other researchers have been saying for decades.

      I’m personally pessimistic about whether the same words regurgitated by Gemini or ChatGPT will make a difference.

      Hopefully they will.

      Off topic: 73’s, love the callsign 😁

    • @vk6flab @ajsadauskas Back in my day, “artificial intelligence” meant things like equipping a computer with some “quality function” that helped it win a game by rating possible future positions. Or it meant applying all sorts of filtering to raw data (e.g. images) to help with pattern recognition (so, that’s a cow in that picture!). Now we’re modelling natural languages by collecting huge amounts of (text) data, which helps a computer spit out plausible stuff in natural-language form. The term “artificial intelligence” is not wrong, but non-technical people assume a very wrong meaning. 🤷
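A "quality function" of the old-school sort can be sketched as a toy position rater; this hypothetical example (not from the thread) scores a tic-tac-toe board for X by counting the lines each side can still win.

```python
# All eight winning lines on a 3x3 board, indexed 0..8 left-to-right, top-to-bottom.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def quality(board):
    """Rate a position for X: lines X can still win minus lines O can still win."""
    score = 0
    for a, b, c in LINES:
        cells = {board[a], board[b], board[c]}
        if "O" not in cells:
            score += 1   # line still open for X
        if "X" not in cells:
            score -= 1   # line still open for O
    return score

print(quality("X...O...."))  # X in a corner, O in the centre → -1
```

A game-playing program of that era would apply such a function to every position reachable a few moves ahead and pick the move leading to the best rating, which is quite a different mechanism from predicting the next word.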

      The major misunderstanding is to mentally draw a trajectory towards what we consider “human intelligence”. That’s not at all where this is heading, though; artificial intelligence is a completely different game.

      But that said, thanks for this awesome comparison to explain what an LLM *actually* does, I guess that’s a perfect way to explain it to anyone without the theoretical background! 👍