Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.
You’re trying to graph something that you can’t quantify.
You’re also assuming that next-word prediction and intelligence are a tradeoff. They could just as well be the same thing.
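To make "next-word predictor" concrete, here's a minimal sketch of what autoregressive decoding looks like, assuming GPT-2 via the Hugging Face transformers library purely for illustration (nothing specific to GPT-4, Claude, or Llama 3). The whole generation loop really is "predict the most likely next token, append it, repeat":

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public model used only to illustrate the decoding loop.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits            # shape: (1, seq_len, vocab_size)
        next_id = torch.argmax(logits[0, -1])       # greedy: pick the most probable next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Whether that loop counts as "intelligence" is exactly the open question; the mechanism itself doesn't settle it either way.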
I agree. People who think LLMs are intelligent are about as smart as phone keyboard autocomplete.