• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: September 27th, 2023





  • From what I remember, and what a quick internet search confirmed, B didn’t actually deny her anything. He actually went out of his way to do as much good for her as he could. He claims he replied “Language.” because he knew other people at NASA, with more say over her job, would find her, which would get her into trouble (and they did find her even before his first tweet).







  • When we’re talking about teaching kids the alphabet, we need to train both individual and applied letters.

    This is only slightly related, but I once met a young (USAmerican) adult who thought the stripy horse animal’s name was pronounced zed-bra in British English, and it was really hard to convince her otherwise. In her mind, zebra was strongly connected to Z-bra, so of course if someone were to pronounce the letter “zed” it would turn into “zed-bra” and not just into “zeh-bra”.




  • My bad, I wasn’t precise enough with what I wanted to say. Of course you can confirm (with astronomically high likelihood) that a screenshot of AI Overview is genuine if you get the same result with the same prompt.

    What you can’t really do is prove the negative. If someone gets an output, replicating their prompt won’t necessarily give you the same output, for a multitude of reasons: it might take everything else Google knows about you into account, Google might have tweaked something in the last few minutes, the stochasticity of the model might lead to a different output, and so on.

    Also, funny you bring up image generation, where this actually works too in some cases. For example, people have run the same prompt with multiple different seeds, and if there’s a cluster of very similar output images, you can surmise that an image looking very close to that cluster was in the training set.
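
    A minimal sketch of that idea, not any particular paper’s method: generate(prompt, seed) here is a made-up stand-in (it returns random noise so the script actually runs), and the similarity measure is a crude downscaled pixel difference rather than whatever metric a real study would use.

    ```python
    from itertools import combinations

    import numpy as np
    from PIL import Image


    def generate(prompt: str, seed: int) -> Image.Image:
        """Stand-in for the real text-to-image model being probed (returns noise)."""
        rng = np.random.default_rng(seed)
        return Image.fromarray(rng.integers(0, 256, (256, 256, 3), dtype=np.uint8), "RGB")


    def distance(a: Image.Image, b: Image.Image, size: int = 64) -> float:
        """Crude similarity: mean absolute pixel difference on downscaled greyscale copies."""
        a_arr = np.asarray(a.convert("L").resize((size, size)), dtype=np.float32)
        b_arr = np.asarray(b.convert("L").resize((size, size)), dtype=np.float32)
        return float(np.mean(np.abs(a_arr - b_arr)))


    prompt = "a painting of a dog in a hat"  # whatever prompt you suspect is memorized
    images = [generate(prompt, seed) for seed in range(16)]

    # If many seeds land on near-identical outputs, that's a hint the model memorized
    # an image close to that cluster from its training set.
    close_pairs = [
        (i, j)
        for (i, a), (j, b) in combinations(enumerate(images), 2)
        if distance(a, b) < 10.0  # threshold is arbitrary; tune per model
    ]
    print(f"{len(close_pairs)} near-duplicate pairs out of {len(images) * (len(images) - 1) // 2}")
    ```

    With the noise stand-in you should see zero near-duplicate pairs; clusters only show up once a real model and a memorized prompt are plugged in.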





  • Mirodir@discuss.tchncs.de to Lemmy Shitpost@lemmy.world · Automation · 6 months ago

    So is the example with the dogs/wolves, and so is the example in the OP.

    As to how hard they are to resolve: the dogs/wolves one might be quite difficult, but for the example in the OP, it wouldn’t be hard to feed in all training images with randomly chosen backgrounds, removing the model’s ability to draw any conclusions from the background (a rough sketch of that kind of augmentation is below).

    However this would probably unearth the next issue. The one where the human graders, who were probably used to create the original training dataset, have their own biases based on race, gender, appearance, etc. This doesn’t even necessarily mean that they were racist/sexist/etc, just that they struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.
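
    Not what was actually done with that dataset, just a minimal sketch of the background-randomization idea, assuming you already have foreground cut-outs as RGBA images with transparent backgrounds; the folder names and the composite_on_random_background helper are made up for illustration.

    ```python
    import random
    from pathlib import Path

    from PIL import Image


    def composite_on_random_background(foreground: Image.Image, backgrounds: list[Image.Image]) -> Image.Image:
        """Paste an RGBA foreground onto a randomly chosen background of the same size."""
        bg = random.choice(backgrounds).convert("RGB").resize(foreground.size)
        out = bg.copy()
        out.paste(foreground, (0, 0), mask=foreground.split()[-1])  # alpha channel as paste mask
        return out


    # Hypothetical folders; in a real pipeline this would run inside the training data
    # loader, so every epoch sees each subject on a different background.
    backgrounds = [Image.open(p) for p in Path("backgrounds/").glob("*.jpg")]
    out_dir = Path("augmented/")
    out_dir.mkdir(exist_ok=True)

    for path in Path("subjects_rgba/").glob("*.png"):
        subject = Image.open(path).convert("RGBA")
        composite_on_random_background(subject, backgrounds).save(out_dir / path.name)
    ```

    Once the background is decorrelated from the label like this, the model can’t lean on scenery or lighting shortcuts and has to learn from the subject itself, which is exactly when the grader-bias problem above becomes visible.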


  • I can only speak for myself. For me it felt really great being able to explore the world having absolutely zero idea of what is what, how much game is left, etc. It is reminiscent of a time when I was a kid and playing a game was exactly like that.

    I even got quite sad when my friend “accidentally” told me

    spoiler

    That a certain action I did locked me into a specific ending unless I did something I probably wouldn’t be able to figure out. Rationally I understand that this is as inconsequential as it gets, but I didn’t even know for sure if there were multiple endings until that point.