Technology fan, Linux user, gamer, 3D animation hobbyist

Also at:

linuxfan@tube.tchncs.de

linuxfan@cheeseburger.social

  • 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: July 24th, 2023

  • And we make it worse by saying “Just pick one. It doesn’t matter what instance you’re on because they’re federated.”

    Some people are going to be very upset to find their local feed is full of content they don’t agree with. Or when they go out into the fediverse and people automatically assume they’re an A-hole because of the instance they’re from. I mean, it’s generally not that bad, but there are a few instances that are that bad.

    And for people like me who gravitate toward smaller instances, that instance will probably die. It’s happened to me twice already, four times if you count Mastodon and PeerTube.

  • Probably better to ask on !localllama@sh.itjust.works. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.

    The only catch is that you asked for a smart model, which usually means a larger one, and the RAG portion consumes additional memory on top of that, which may be more than a typical laptop can handle. Smaller models also have a higher tendency to hallucinate, i.e. produce confidently incorrect answers.

    Short answer: yes, you can do it. It’s just a matter of how much RAM you have available and how long you’re willing to wait for an answer. A rough sketch of the setup is below.
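    As a rough illustration of what that Ollama + RAG setup can look like, here is a minimal sketch in Python using the ollama client library. The model names (nomic-embed-text, llama3.2) and the tiny in-memory "dataset" are assumptions; a real setup would load your own documents and likely use a proper vector store instead of a brute-force similarity search.

    ```python
    # Minimal RAG sketch with the ollama Python package (pip install ollama).
    # Assumes a local Ollama server with the named models pulled; both model
    # names and the tiny in-memory "dataset" are placeholders.
    import ollama

    docs = [
        "Our return policy allows refunds within 30 days of purchase.",
        "Support hours are 9am-5pm, Monday through Friday.",
    ]

    def embed(text: str) -> list[float]:
        # Any embedding model pulled into Ollama works here; nomic-embed-text is an assumption.
        return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

    def cosine(a: list[float], b: list[float]) -> float:
        # Cosine similarity between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm

    # Embed the whole "dataset" once up front.
    doc_vectors = [embed(d) for d in docs]

    def ask(question: str) -> str:
        # Retrieve the single most relevant document and stuff it into the prompt.
        q_vec = embed(question)
        best = max(range(len(docs)), key=lambda i: cosine(q_vec, doc_vectors[i]))
        prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"
        reply = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": prompt}])
        return reply["message"]["content"]

    print(ask("How long do I have to return something?"))
    ```

    Note that both the embeddings for the dataset and the loaded chat model sit in RAM at the same time, which is where the memory pressure on a laptop comes from.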