

As I recall, the mathematical definition of accuracy does have some overlap with precision, so increasing precision improves accuracy as well. It’s just a little confusing.
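In the classification-metrics sense, for instance, the two formulas are built from the same confusion-matrix counts, which is where the overlap comes from. A minimal sketch with made-up counts:

```python
# Accuracy and precision computed from the same confusion matrix.
# The counts below are made up purely for illustration.
tp, fp, fn, tn = 40, 10, 5, 45

accuracy = (tp + tn) / (tp + fp + fn + tn)  # fraction of all predictions that are correct
precision = tp / (tp + fp)                  # fraction of positive predictions that are correct

print(f"accuracy = {accuracy:.2f}, precision = {precision:.2f}")
# Both formulas share tp in the numerator and fp in the denominator, so cutting
# false positives raises both numbers; that's the "overlap" mentioned above.
```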
polite leftists make more leftists
more leftists make revolution
Well, I’m not claiming that an AI apocalypse is inevitable, just that it’s possible enough that we should start worrying about it now. As for the reason to believe it would happen – isn’t that covered by (2)? If you believe that (2) will occur with near-100% certainty, then that would be the impetus.
2.5356%.
clever. (But is that accurate?)
I don’t entirely agree with that image – the first one says low accuracy, low precision – but it’s the best accuracy possible given the low precision.
Ah – I was being sarcastic when I said “if we bully him enough, the genocide will stop.” Perhaps I should have added /s
Depends on the meaning of “accurate” (e.g. an archer, a research paper, a copy…).
Impeccable; flawless; dead-on; faithful – if I had to choose one, I’d pick “impeccable”
Don’t say “very accurate,” say “exact”
“exact” is a synonym for “very precise,” not “very accurate.”
“It was not copyrighted until 2016”? Seeing as it’s from the 1800s, that would not be possible.
If we bully him enough, the genocide will stop.
Well, the probability you assign to the AI apocalypse should ultimately be the product of those three numbers. I’m curious which of those is the one you think is so unlikely.
Please assign probabilities to the following (for the next 3 decades):
1. we build an AI significantly smarter than any human
2. it wants to kill us all
3. it succeeds
bonus: given 1 and 2, probability that we don’t even notice it wants to kill us, e.g. because we don’t know how to understand what it’s thinking.
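To make the product-of-probabilities arithmetic concrete (the numbers below are placeholders for the three events above, not anyone’s actual estimates):

```python
# Placeholder probabilities for the three events above; not real estimates.
p_smarter_ai = 0.5   # (1) AI significantly smarter than any human
p_wants_kill = 0.3   # (2) it wants to kill us all
p_succeeds = 0.2     # (3) it succeeds

p_doom = p_smarter_ai * p_wants_kill * p_succeeds
print(f"P(doom) = {p_doom:.3f}")  # 0.030 with these placeholder numbers
```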
Since the AI is smarter than me, I only need to propose one plausible method by which it could exterminate all humans; it can come up with a method at least as good as mine, and most likely something much better. The typical answer here would be that it bio-engineers a lethal virus which is initially harmless (to avoid detection) but responds to some trigger, like the introduction of a certain chemical or maybe a strong radio signal. If it’s very smart, and has a very good understanding of bioengineering, it should be able to produce a virus like this by paying a laboratory to e.g. perform some CRISPR operations on an existing bacterial strain (or even just mix some chemicals together, if Sagan turns out to be right about bioengineering) and mail a sample somewhere. It can wait until everyone is infected before triggering the strain.
metric system
Is this one of those intentionally-obviously-wrong comments designed to encourage people to comment on the meme?
The reason it’s always just around the corner is that there is very strong evidence we’re approaching the singularity. Why do you sound sarcastic saying this? What probability would you assign to an AI apocalypse in the next three decades?
Geoff Hinton absolutely kicked things off. Everybody else had given up on neural nets for image recognition, but his breakthrough renewed interest throughout the world. We wouldn’t have deepdreaming slugdogs without him.
It should not be surprising that most people in the field of AI are not predicting armageddon, since it would be harmful to their careers to do so. Hinton is also not predicting the apocalypse – he’s saying a 10-20% chance, which, being well under 50%, is actually a prediction that it won’t happen.
5490175897536472785479178950797495787834 [sic]
I’m guessing you don’t do 10 because you just don’t wear tank-tops in general. But why on earth a bra, especially if you’re ditching the panties? Don’t you find it uncomfortable to decompress wearing one? Do you just have unusually uncomfortable underpants?
Literally anything except 5, 10, and 15. Extremely curious to hear from the 5/10/15 crowd.
I dislike AI because it produces slop, not because it can’t be original.
I honestly like this… I feel like it still fits into the good old-fashioned humour of e.g. aiweirdness.com
In that case, you should know that Geoff Hinton (the guy whose lab kicked off the whole AI revolution last decade) quit Google in order to warn about the existential risk of AI. He believes there’s at least a 10% chance that it will kill us all within 30 years. Ilya Sutskever, his former student and co-founder of OpenAI, believes similarly, which is why he quit OpenAI and founded Safe Superintelligence (yes, that basic HTML document really is their homepage) to help solve the alignment problem.
You can also find popular rationalist AI pundits like gwern, acx, yudkowsky, etc. voicing similar concerns, with P(doom) estimates ranging from low to laughably high.
I don’t like using country flags for languages. For one thing, not every language has a country of its own – there are 7,000+ languages in use today, but fewer than 200 countries. Many languages don’t even have any obvious insignia to represent them at all.
If you’re making a piece of software and you want it ported to many languages, just use text to represent the language.
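A minimal sketch of that idea, using ISO language codes and each language’s own name (its autonym) instead of flags; the mapping here is just an illustrative fragment:

```python
# Label languages by code + autonym, never by country flag.
LANGUAGES = {
    "en": "English",
    "de": "Deutsch",
    "ja": "日本語",
    "yue": "粵語",  # Cantonese: a widely spoken language with no flag of its own
}

def language_label(code: str) -> str:
    """Return the language's own name for a UI picker, falling back to the code."""
    return LANGUAGES.get(code, code)

print(language_label("yue"))  # 粵語
```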