What are the pros and cons of doing this? What impact will it have on the personality/mind of the person down the line, after, say, 10 years?

  • Frater Mus@lemmy.sdf.org · 9 months ago (edited)

    Like any other automated tool, I’d want them to master the manual skills first.

    With math and calculators first we show we can do it longhand then get the calc. Show you can search and assess sources first then incorporate AI.

  • bilboswaggings@sopuli.xyz · 9 months ago

    Why would you want that?

    AI does not know things; its answers depend on the wording of the question. I guess it could be used in a limited way (teaching how to use it responsibly and showing how it makes mistakes even in very simple situations).

    Much like with a calculator, both are more effective if you know what is happening, so you can catch the mistakes and fix them.

      • bilboswaggings@sopuli.xyz · 9 months ago

        If it doesn’t understand what it’s saying, can you really say it knows it? It has access to a lot of training data, so it can get many things correct, but it’s effectively just generating the most likely answer from the training data.
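        A toy sketch of that “most likely continuation” idea (just an illustration with a made-up mini-corpus and simple bigram counts, not how a real LLM is implemented; real models use neural networks over huge token vocabularies, but the principle of emitting a statistically likely continuation is the same):

```python
from collections import Counter

# Tiny made-up "training corpus". A real model sees trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (bigram counts).
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" - it follows "the" most often
```

        No matter how large the corpus gets, the model is still picking a likely continuation, not consulting an understanding of the world.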

        • NoiseColor@startrek.website · 9 months ago

          Well obviously it doesn’t “know” know, it’s not alive.

          We are all generating the most likely answer from our training data. But going back to the original question: what do you fear ChatGPT would say that would be detrimental to a 16-year-old?

  • 𝘋𝘪𝘳𝘬@lemmy.ml · 9 months ago

    To use as a tool? Yes.
    To use as a friend? No.

    A person using a tool for a longer time will become better in using said tool.

  • lukmly013 💾@lemmy.sdf.org · 9 months ago

    It’s just an AI chatbot; I don’t see how it would be dangerous.

    And I am also pretty sure a 16 year old knows to expect inaccurate results from it, unless they’ve been living restricted from the outside world until now.

    The only negative thing I’ve seen from it so far is kids using it to write essays, but it’s not like there wasn’t a countless number of them available on the internet before. That was just easier to detect, as you could search for the text and see if it turned up online.

    Anyway, for just playing around it gets boring after 15 minutes.
    Why don’t you try?

    • LWD@lemm.ee · 9 months ago (edited)

      Something that appears more human is more likely to get them to share their private data. And that data is then sold, obviously without consent, and used however the buyers see fit.

      “Instead of being scared to share information with it, you will volunteer your data…”

      – Vladimir Prelovac, CEO of Kagi AI and Search

      Remember Replika, the AI chatbot that sexually harassed minors and SA victims, and (allegedly) repeated the contents of other people’s messages verbatim?

      It might not be as mind-rotting as TikTok but it’s not good.

  • AlwaysNowNeverNotMe@kbin.social · 9 months ago

    The context of the word “let” is interesting here.

    I would recommend a collaborative approach; it’s not as if they can’t use it just because you tell them no. They don’t need a credit card, a driver’s license, or even a computer.

  • amio@kbin.social · 9 months ago

    It’s not even a good idea to let quite a lot of adults use ChatGPT. People don’t know how it works, don’t treat the answers with anything close to appropriate skepticism, and often ask about things they don’t have the knowledge or skills to verify. And anything it tells you, you will likely need to verify.

    It’s quite unlikely to affect their personality, but it might make them believe a bunch of weird shit that some unknowable, undebuggable computer program hallucinated up. If you’ve done an uncommonly great job with their critical thinking skills, great. If not, better get started. That is not specific to “AI” though.

    • NoiseColor@startrek.website · 9 months ago

      People don’t know how TV works either, and we’re hardly going to tell people not to use it.

      As long as people are aware that some responses might be made up, it should be fine for anyone to use it.

  • rufus@discuss.tchncs.de · 9 months ago

    Kids should use their own creativity, practice reading, and create something. Play outside, get dirty. Do sports, maybe learn a musical instrument. And do their homework themselves.

    I’d say many things are alright in the proper dose. I mean, ChatGPT is part of the world they’re growing up in…

    And 16 isn’t a kid anymore; they can handle some responsibility. I don’t see a one-size-fits-all solution for every 16-year-old. I think you should decide individually whether to allow it.

  • FaceDeer@fedia.io · 9 months ago

    Well, let’s take a look at how the 16-year-olds who got to use ChatGPT ten years ago have turned out…

    In seriousness, as others have been pointing out, the big online AI assistants are all super neutered these days. I think it’s probably fine, and indeed, given how these tools are likely to become more widespread in the future, I think it’s a good idea for kids to get used to using them. At 16 they’re too old to be sat down and given a lecture about “it’s not really aware, it doesn’t feel emotions or have memories, and if you go to it with any sort of medical questions, definitely double-check those with another source” – from what I’ve seen, lectures at that age are likely to backfire. Instead, suggest that they research those things themselves. Just put those questions out there and hopefully it’ll motivate them to be curious.

  • lemontree@lemm.ee · 9 months ago

    I would go back a few years and ask: should I let a 16-year-old use search engines?

    Probably not too different.

    • PrinceWith999Enemies@lemmy.world · 9 months ago

      That’s exactly my perspective.

      I came of age with the birth of the web. I was using systems like Usenet, Gopher, and WAIS. I was very much into the whole cypherpunk, “information wants to be free” philosophy, which held that the more information people had, and the more they could talk to each other, the better the world would be.

      Boy, was I wrong.

      But you can’t put the genie back into the bottle. So now, in addition to having NPR online, we have kids eating tide pods and getting recruited into fascist ideologies. And of course it’s not just kids. It’s tough to see how the anti-vax movement or QAnon could have grown without the internet (which obviously has search engines as a major driver of traffic).

      I think you’re better off teaching critical thinking, and even demonstrating the failings of ChatGPT by showing them how bad it is at answering questions. There’s plenty of resources you can find that should give you a starting point. Ironically, you can find them using a search engine.

        • rufus@discuss.tchncs.de · 9 months ago (edited)

        I think that’s a good take on things.

        Ultimately it still holds true: information does want to be free. You just can’t mix it with misinformation, put everything on the same level, and serve it to a general audience that is completely oblivious to that fact and uneducated.

        Things were different back then. The internet was a small elite: people who could afford computers and an internet connection and make some use of them. You needed a certain amount of intelligence, because you had to put in effort to get online and learn the tools; none of that was easy or handed to you. So anyone who ended up on the internet was generally at least somewhat intelligent, which is beneficial when it comes to receiving unfiltered information. On top of that, there were comparatively more academics and students, because that was where the internet originated.

        And it wasn’t that common to push your agenda or advertise your skewed political views there the way people do nowadays. Given the nature of the internet and the number of people on it, it wasn’t worth the effort; you’d be better off focusing somewhere you could influence more people. So the dynamics were just different, due to history and circumstances.

        Things have changed. Nowadays everyone is online all the time; it’s the place to influence people and make money. And that’s the other part of the problem: connecting actual people and providing information to them (or letting them provide it to each other) isn’t what most of the internet is about anymore. The motivations are gathering data about people and selling it, and getting people addicted to your platform so they spend more time there and you make more money. Everyone is competing for attention, and bad, emotional stories are what work best – giving people the “simple truths” they seek instead of an intellectual, nuanced view. Factuality just gets in the way of all that.

        I sometimes like to compare this to the Age of Reason / Enlightenment. Back then it was monarchs, bad dynamics, and missing education; now it’s big tech companies, bad dynamics, and insufficient education. People need to become emancipated and educated and leave the current “immature state of ignorance” (to quote Kant).

        Information and education are key. And the internet, algorithms, and AI are just tools: they can be used for progress, or to enslave us. At least the internet has the potential (and was built) to connect people and provide a level playing field to everyone. But it can be used for a variety of different things, and choosing the right ones isn’t something that can be solved by technology alone.

  • AnarchistArtificer@slrpnk.net · 9 months ago

    A friend of mine is a French teacher, and I was discussing with her an idea for how to incorporate ChatGPT into the curriculum. Specifically, her idea was to explore its limitations as a tool by having a lesson in the computer suite where students actively try to answer GCSE (exams for 15/16-year-olds) French questions using ChatGPT, and then peer-mark them, with the goal of “catching out” their peers.

    The logic was that when she was learning French in school, Google Translate was still fairly new, and whilst many of her teachers desperately tried to ignore it, one teacher took the time to look at how one should (and shouldn’t) use this new tool. She said it was useful to actually be able to evaluate the limitations of online translators, rather than just being told they’re always wrong and should never be used.

    We tried out a few examples to see whether her idea with ChatGPT had merit, and we found that it was pretty easy to generate errors that would be hard to spot if you’re a student looking for a quick solution. Stuff like “I can’t answer that because I’m a large language model”, or whatever, but in French.

    • JungleJim@sh.itjust.works · 9 months ago

      That’s a great teacher. Refusing to teach a technology only leads to poor use. Even if one thinks it’s a poor technology, teach THAT instead of just black-boxing the topic. The bottle is open, the genie is out. Better to teach how to make legally airtight wishes than to ban wishmaking.

  • Saigonauticon@voltage.vn · 9 months ago

    I think it would be a bad idea to do otherwise. Children need to learn about useful tools, and the shortcomings of those tools.

    16 year old me would have had a great time getting an AI to teach me things that my teachers in school did not have expertise in. Sure, it would be wrong some of the time, but so were my teachers at that age. It would have given me such a head start on university!

  • wathek@discuss.online · 9 months ago (edited)

    ChatGPT is overly safe, and it’s a great tool to learn with – more so than a teacher, because you can freely ask it very specific questions in your own words and it will give an understandable answer. I think it’s actually a perfect tool for someone that age. Once the topics get too advanced, though, the results become less reliable.

    It doesn’t make things up as much as it used to. You’ll get the best results from the paid GPT-4 subscription, which I would recommend.

    The only real risk I see is overreliance on it. I notice this in myself too; it’s almost like I’ve forgotten that googling things is an option, so when I’m stuck, rather than trying another approach, I just keep throwing prompts at GPT-4 until I give up and find the solution elsewhere, often within minutes. The way things are going, classic web search is becoming obsolete (unreliable results because of AI-written content and fake news) while AI at least actively tries to be unbiased.

    tl;dr: Yes, it’s extremely useful; just make sure they don’t forget how to do things without ChatGPT too.

  • InputZero@lemmy.ml · 9 months ago

    I don’t think there will be any change in personality or cognition just from using ChatGPT. The only concern I can think of is overreliance, especially if your child intends to go to post-secondary school: universities are very strict regarding plagiarism and treat AI generation as such. If they can use it responsibly, there’s no downside; if they’re going to use it to do their homework for them, it’ll be a problem.