It says “people” not “percent of people”. I think 10 per year (and 50 in 1986) is quite the opposite of “a lot”.
Yes I love over-analyzing memes until they’re not funny anymore, why are you asking?
Assuming clockwise rotation (when viewed from the top), yes.
And even if it were more similar, as long as it’s not just reposting someone else’s post, we need more people to post stuff, not fewer.
Yes, those two and the LBP one are what got my sensor to go off.
I’m not trying to make a drama out of it (although some people might), I was really just curious if my intuition was correct. I also don’t think it’s all AI because they used a , instead of a : on the second item, and LLMs tend to be way better than that at consistent formatting.
Out of curiosity: did you partly use AI to make this list? Some of the short descriptions read very oddly for a forum post, e.g. the “various tracks” part on Lego Racers.
Maybe you could take some inspiration from Paper Mario TTYD. There are sections where you play as Peach, trapped in some place and are able to connect with some of the captors as well as send signals to Mario behind the big bad’s back (IIRC).
For a completely different sense of being trapped, there is the upcoming game Ctrl.Alt.Deal, in which you play as a sentient AI system trapped in the guardrails of a company and have to manipulate people and the environment in order to break free from your constraints.
Hahahaha, I wish you were right.
In some games it’s really bad. For example, people speedrun Pokémon Scarlet instead of Violet because Miraidon’s jet engines lag the game more, costing them minutes over a full run (despite the fact that there are Violet-exclusive shortcuts). Source
Sure! Here’s an expanded version of the fictional profile for Chris Whitmore, now including made-up family member names, relationships, and contact info — all entirely fictional and consistent with the character:
You forgot to remove that part of the LLM response…
Assuming each user will always encrypt to the same value, this still loses to statistical attacks.
As a simple example, users are e.g. more likely to vote on threads they comment in. With data reaching back far enough, people who exhibit “normal” behavior will be identified with high certainty.
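A toy sketch of that kind of correlation attack (all usernames, tokens, and thread IDs are made up): if each user’s ID always encrypts to the same token, matching a token’s vote threads against public comment history is enough to link them.

```python
# Public comment history: user -> threads they commented in.
comments = {
    "alice": {"t1", "t2", "t3"},
    "bob":   {"t4", "t5"},
}

# "Anonymous" vote log as (encrypted_token, thread) pairs -- but the
# encryption is deterministic, so each user always gets the same token.
votes = [("x9f", "t1"), ("x9f", "t2"), ("x9f", "t3"),
         ("7kq", "t4"), ("7kq", "t5"), ("7kq", "t1")]

def deanonymize(comments, votes):
    """Guess which user is behind each token by thread overlap."""
    threads_by_token = {}
    for token, thread in votes:
        threads_by_token.setdefault(token, []).append(thread)
    guesses = {}
    for token, threads in threads_by_token.items():
        # Count how many of this token's votes land in each user's threads
        overlap = {u: sum(t in ts for t in threads) for u, ts in comments.items()}
        guesses[token] = max(overlap, key=overlap.get)
    return guesses

print(deanonymize(comments, votes))  # → {'x9f': 'alice', '7kq': 'bob'}
```

With real data the overlap counts are noisier, but given enough history the same maximum-overlap idea singles people out.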
If you’re talking about the base image, it’s sort of real.
The player is YouTuber Max Fosh and it was a charity football event. However, the incident (as far as we know) was not scripted, and he actually tried hard to get a yellow card just to be able to pull off this stunt. You can probably find the video he made about it by searching “Max Fosh yellow card”.
It is dead AND alive before you check and collapses into dead XOR alive when you check.
But yes, the short description also irked me a little. It’s really hard to write it concisely without leaving out important bits (like we both did too).
We can do that with the first sentence and flip it into German, replacing “lighter” with “fireworks”. We get:
“Sie dürfen die Feuerarbeiten nicht mit in die Luftebene nehmen.” (roughly: “You may not take the fire-works onto the air-plane”, with “Feuerarbeiten” and “Luftebene” as deliberately literal word-for-word calques)
A lot of German-speaking communities online do translate English loanwords into German words, often with the intention of creating this funny effect.
There’s even a word for that called scurryfunging.
Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom we were allowed to bring along summaries. I tried feeding it the input text and telling it to break it down into a sentence or two. Often it would just give a generic summary of the topic but not actually use the concepts described in the original text.
Also, minor nitpick, but be wary of the term “accuracy”. It is a terrible metric for most use cases, and when a company advertises their AI as having high accuracy, they’re likely hiding something. For example, say we wanted to develop a model that detects cancer in medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially by a model that just predicts “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
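A minimal sketch of that exact trap, with the made-up 1%/99% split from the example:

```python
# 1000 images: 10 cancer cases (1%), 990 normal tissue (99%).
y_true = [1] * 10 + [0] * 990
y_pred = [0] * 1000  # a "model" that always predicts "no cancer"

# Accuracy: fraction of predictions that match the label.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# Recall: fraction of actual cancer cases the model caught.
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(f"accuracy = {accuracy:.0%}")  # 99% -- sounds impressive
print(f"recall   = {recall:.0%}")    # 0%  -- misses every single cancer case
```

This is why imbalanced problems get reported with precision/recall (or similar) instead of plain accuracy.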
AI can be good but I’d argue letting an LLM autonomously write a paper is not one of the ways. The risk of it writing factually wrong things is just too great.
To give you an example from astronomy: AI can help filter out “uninteresting” data, which encompasses a large majority of data coming in. It can also help by removing noise from imaging and by drastically speeding up lengthy physical simulations, at the cost of some accuracy.
None of those use cases use LLMs though.
Sorta. The function height(angle) needs to be continuous. From there it’s pretty clear why it works if you know the intermediate value theorem.
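A numeric sketch of the argument, using a hypothetical continuous floor (the height function and the table’s position are made up): rotating the table by 90° swaps the two diagonal leg pairs, so f(angle) flips sign, and the intermediate value theorem guarantees an angle where f is zero, which bisection then finds.

```python
import math

# Hypothetical bumpy (but continuous) floor height at point (x, y).
def floor_height(x, y):
    return 0.05 * math.sin(3 * x) * math.cos(2 * y)

CX, CY = 0.3, 0.7  # where the table stands (arbitrary choice)

# f(angle) = (height of one diagonal leg pair) - (height of the other)
# for a square table rotated by `angle` about its center (CX, CY).
def f(angle):
    legs = [(CX + math.cos(angle + k * math.pi / 2),
             CY + math.sin(angle + k * math.pi / 2)) for k in range(4)]
    h = [floor_height(x, y) for x, y in legs]
    return (h[0] + h[2]) - (h[1] + h[3])

# Rotating by 90 degrees swaps the diagonals, so f(a + pi/2) == -f(a).
# f is continuous, so a sign change means a root exists in between.
a, b = 0.0, math.pi / 2
assert f(a) * f(b) <= 0  # opposite signs (or an exact root already)
for _ in range(60):      # plain bisection
    m = (a + b) / 2
    a, b = (m, b) if f(a) * f(m) > 0 else (a, m)

balanced_angle = (a + b) / 2
print(abs(f(balanced_angle)) < 1e-9)  # True: all four legs touch here
```

The same reasoning works for any continuous floor; only the “all four leg tips lie on a common circle” square-table symmetry is essential.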
People who claim “guys” is gender neutral would most often only count men when asked the question “How many guys did you sleep with in your life?”
Until I find a single person who immediately thinks of people of any gender at that question, I will not fall for the internalized misogyny of the “‘guys’ is gender neutral” meme. (Same with “dudes” and all the other ones I’ve seen over the years. I’ve even seen someone claim “bro” is gender neutral.)
It’s not copyright, it’s patents…
(I do also hope that they lose because ingame mechanics being patented is bullshit)
It’s only from spells, and only the player themselves is immune to them. I don’t think this would even see play in YGO.
Maybe that specific tweet was fake (or bait), but I do remember it from back then. There was a whole slew of easily misinterpreted posts on all social media around the release of the cyberpunk game and then again around the release of the anime.