None of this stuff is “AI”. A translation program is not “AI”. Spam detection is not “AI”. Image detection is not “AI”. Cars are not “AI”.
None of this is “AI”.
Sure it is. If it’s a program that is meant to make decisions the way an intelligent actor would, then it’s AI, by definition. It may not be AGI, but it’s AI in the same sense that enemies in a video game run on AI.
They’re functionalities that were not built with traditional programming paradigms, but rather by defining a model and training it to fit the desired behaviour, making it able to adapt to new situations; the same basic techniques that were used to make LLMs. You can argue that it’s not “artificial intelligence” because it’s not sentient or whatever, but by that definition AI doesn’t exist at all, and then people are complaining that something that doesn’t exist is useless.
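To make “training a model to fit the desired behaviour” concrete, here’s a deliberately toy sketch in Python (pure numpy, purely illustrative, nothing like a production system): the rule y = 3x + 1 is never written into the program; it is recovered from noisy examples by gradient descent.

```python
import numpy as np

# Toy illustration of the "fit a model to data" paradigm:
# the rule y = 3x + 1 is never encoded directly; it is
# recovered from noisy examples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1 + rng.normal(0, 0.1, size=100)

w, b = 0.0, 0.0              # parameters, learned rather than coded
lr = 0.1                     # learning rate
for _ in range(500):
    err = w * x + b - y
    w -= lr * 2 * np.mean(err * x)   # gradient of mean squared error
    b -= lr * 2 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")  # close to 3 and 1
```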
Or you can just throw out statements with no arguments, based on some personal secret definition, but that’s not a very constructive contribution to anything.
It’s possible Translate has gotten better with AI. The old versions, however, were not necessarily using AI principles.
I remember learning about image recognition tools that were simply based around randomized goal-based heuristics. It’s tricky programming, but I certainly wouldn’t call it AI. Now, it’s a challenge to define what is and isn’t AI, and likely a lot of the labeling is just used to attract VC funding. Much like porn, it becomes a “know it when I see it” moment.
Image recognition depends on the amount of resources you can devote to your system. There are traditional methods of feature extraction like edge detection, histograms of oriented gradients (HOG) and Viola-Jones, but the best performers are all convolutional neural networks.
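For a sense of what the “traditional” methods look like, here’s a minimal hand-written Sobel edge detector (pure numpy; a real pipeline would use an optimized library, and HOG builds further steps on top of exactly these gradients). There is no learning involved at all, just fixed convolution kernels:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Classical edge detection: slide fixed Sobel kernels over the
    image and return the gradient magnitude. No learning involved."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)   # horizontal gradient
            gy = np.sum(patch * ky)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# A tiny synthetic image with a vertical step edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).round(1))   # high response around column 4
```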
While the term can be up for debate, you cannot separate these cases from things like LLMs and image generators; they are the same field. Generative models try to capture the distribution of the data, p(x) or p(x, y), whereas discriminative models try to capture the distribution of labels given the data, p(y | x). Unlike traditional programming, you do not directly encode a sequence of steps that manipulates data into the result you want; instead you try to recover the distributions from the data you have, and then you use the resulting model in new situations.
And the generative and discriminative/diagnostic paradigms are not mutually exclusive either; one is often used to improve the other, e.g. generative pre-training followed by discriminative fine-tuning.
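To make the distinction concrete, here is a toy 1-D sketch (pure numpy, illustrative only): the generative side fits p(x | y) and p(y) and obtains p(y | x) via Bayes’ rule, while the discriminative side (logistic regression) fits p(y | x) directly. On this data the two end up agreeing closely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two classes with 1-D Gaussian data.
x0 = rng.normal(-1.0, 1.0, 200)   # class 0
x1 = rng.normal(+1.5, 1.0, 200)   # class 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Generative: fit p(x | y) and p(y), then classify via Bayes' rule.
mu = [x[y == k].mean() for k in (0, 1)]
sd = [x[y == k].std() for k in (0, 1)]
prior = [np.mean(y == k) for k in (0, 1)]

def gaussian_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def generative_posterior(v):
    joint = [prior[k] * gaussian_pdf(v, mu[k], sd[k]) for k in (0, 1)]
    return joint[1] / (joint[0] + joint[1])   # p(y = 1 | x)

# Discriminative: fit p(y | x) directly (logistic regression).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))       # predicted p(y = 1 | x)
    w -= 0.1 * np.mean((p - y) * x)          # gradient of the log loss
    b -= 0.1 * np.mean(p - y)

v = 0.3
print("generative p(y=1|x):    ", round(generative_posterior(v), 3))
print("discriminative p(y=1|x):", round(1 / (1 + np.exp(-(w * v + b))), 3))
```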
I understand that people are angry about the aggressive marketing and find that LLMs and image generators do not remotely live up to the hype (I myself don’t use them), but extending that feeling to the entire field, to the point where people say they “loathe machine learning” (which as a sentence makes as much sense as saying you loathe the Euclidean algorithm), is unjustified, just like limiting the term AI to a handful of use cases from an entire family of solutions.