AI is definitely here to stay, but the hype about AGI being just around the corner isn't believable. And a lot of the billions being invested in AI will never return a profit.
AI is already a commodity. People will pay at most $10/month for general AI, whether it's Gemini, Apple Intelligence, Llama, ChatGPT, Copilot, or DeepSeek. People will just have one cheap plan that covers anything an ordinary person needs. Most people might even stick to free plans supported by advertising.
These companies aren't going to be able to extract $20-$100/month from the general population, which is what they need to recoup their investments.
Specialized implementations for law firms, the medical field, etc. will be able to charge more per seat, but their user base will be small. And even they will face stiff competition.
I do believe AI can solve quite a few of the problems of an aging society by making the smaller pool of workers significantly more productive. But it won't be able to fully replace humans any time soon.
It's kinda like email or the web. You can make money using these technologies, but by themselves they're not big money makers.
Does it really boost productivity? In my experience, if a long email can be written by an AI, you should just send the AI prompt directly to the recipient and save everyone involved some time. AI is like reverse file compression: no new information is added, just noise.
If you're using the thing to write your work emails, you're probably so bad at your job that you won't last anyway. Being able to write a clear, effective message isn't a skill, it's a basic function, like walking. Asking a machine to do it for you mostly just hurts yourself.
That said, it can be very useful for coding, for analyzing large contracts and agreements, and for summarizing huge datasets. It can also help design slides when you have to do weekly PowerPoints, and with other small-scale tasks that make your day go faster.
I find it hilarious how many people try to make the thing do ALL their work for them and end up looking like idiots when it blows up in their faces.
See, LLMs will never be smarter than you personally. They're tools for amplifying your own cognition and abilities, but few people use them that way; most people assume the thing is already alive and can make meaning for them. It's not, it's a mirror. You wouldn't prop a hand mirror on your work chair and leave it to finish out your day.
I'm not a coder by any means, but when updating the super fucking outdated Excel files my old company used, I'd usually have an LLM write a VBA script. It wasn't always perfect, but 99% of the time it was way faster than doing it myself. Then again, the things that company insisted be done in Excel could easily have been done better with other software. But the reality is that my field is conservative as fuck, and if it worked for the boss in 1994, it has to work for me.
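For flavor, here's a minimal sketch of the kind of throwaway macro I mean. The sheet name, column, and date-cleanup task are all made up for illustration; the real scripts varied with whatever mess the workbook was in.

```vba
' Minimal sketch of an LLM-drafted cleanup macro (hypothetical example).
' Assumes a sheet named "Data" with dates stored as text in column A.
Sub CleanUpDates()
    Dim ws As Worksheet
    Dim lastRow As Long
    Dim i As Long

    Set ws = ThisWorkbook.Worksheets("Data")
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row

    For i = 2 To lastRow ' skip the header row
        If IsDate(ws.Cells(i, 1).Value) Then
            ' Convert date-like text into a real date and format it consistently
            ws.Cells(i, 1).Value = CDate(ws.Cells(i, 1).Value)
            ws.Cells(i, 1).NumberFormat = "yyyy-mm-dd"
        End If
    Next i
End Sub
```

The point isn't that this is sophisticated; it's that typing the prompt took thirty seconds, while looking up the VBA object model myself would have taken an afternoon.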
If that email needs to go to a client or stakeholder, then our culture won’t accept just the prompt.
Where it really shines is translation, transcription and coding.
Programmers can easily double their productivity and increase the quality of their code, tests and documentation while reducing bugs.
Translation is basically perfect; human translators aren't needed. At most they can review, but the output is nearly errorless, so they won't really change the outcome.
Transcribing meetings also works very well. There are no typos or grammar errors, only occasional issues with acronyms and technical terms, and those are easy to spot and correct.
As a programmer, I've seen very few situations where LLMs suggest reasonable code. Some are good at it in very limited situations, but for the most part they're just as bad at writing code as they are at everything else.
Programmers can double their productivity and increase quality of code?!? If AI can do that for you, you’re not a programmer, you’re writing some HTML.
We tried AI a lot and I’ve never seen a single useful result. Every single time, even for pretty trivial things, we had to fix several bugs and the time we needed went up instead of down.
Every. Single. Time.
The best AI can do for programmers is context-sensitive autocompletion.
Another area where it might be useful is static code analysis.
Not really. I'm a programmer who doesn't deal with math at all, just overly complicated CRUDs, and even for me the AI is completely wrong and/or a waste of time 9 times out of 10. And I can usually spot when my colleagues are using LLMs, because they submit overly descriptive yet completely fucking pointless refactors in their PRs.
AI is a commodity, but the big players are losing money on every query sent, even at the $200/month subscription level.
Tech valuations are based on scaling: margins grow with every user added, because it costs roughly the same to serve 10 users as 100. ChatGPT, Gemini, Copilot, and Claude are the opposite; they cost more the more they're used. That's the bubble.
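To make that concrete, here's a toy back-of-the-envelope sketch; every number in it is invented for illustration, not a real figure from any provider.

```vba
' Toy unit economics, all numbers assumed: classic software has a roughly
' flat serving cost, while LLM inference cost grows with every query.
Sub ScalingSketch()
    Dim users As Variant
    Dim revenue As Double, classicMargin As Double, llmMargin As Double

    For Each users In Array(10000, 100000, 1000000)
        revenue = CDbl(users) * 10                        ' $10/month per subscriber (assumed)
        classicMargin = revenue - 50000                   ' flat hosting cost (assumed)
        llmMargin = revenue - CDbl(users) * 500 * 0.025   ' 500 queries/user at $0.025 each (assumed)
        Debug.Print users & " users: classic " & classicMargin & ", LLM " & llmMargin
    Next users
End Sub
```

Under these made-up numbers, the classic service's margin grows from $50k to almost $10M as users scale, while the per-query service loses $2.50 per user per month no matter how big it gets. That asymmetry is the whole argument.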
Of course, I totally agree with that