Yeah, you are right. I’ve always remembered it this way because it makes more sense to me.
The idea of a federated, decentralized Wikipedia alternative is intriguing, but implementing it successfully faces major hurdles. Federating moderation policies and privileges across different instances seems incredibly complex. I believe it would also require some kind of web of trust system. Quality control is also a huge challenge without centralized oversight and clear guidelines enforced universally.
While it could potentially replace commercial wiki farms like Wikia/Fandom for niche topics, realistically replacing Wikipedia’s dominance as a general reference work seems highly ambitious and unlikely, at least in the short term. But as they say - shoot for the stars, and you may just land on the moon.
That said, ambitious goals can spur innovation. Even if Ibis falls short of usurping Wikipedia, it could blaze new trails and pioneer federated wiki concepts that feed back into Wikipedia and other platforms. The federated model allowing more perspectives and focused communities is worth exploring, despite the technical obstacles around distributed moderation and content integration. The proof-of-concept shows the core pieces are in place as a starting point.
Human bias is a pervasive element in many online communities, and finding a platform entirely free from it can be akin to searching for the holy grail. Maybe look into self-hosting an instance and penalizing moderators who don’t follow their own rules.
Regrettably, complaining tends to be a common pastime for many individuals. I acknowledge your frustration with users who may appear entitled or unappreciative of the considerable effort you’ve dedicated to developing Lemmy. Still, shifting towards a mindset that perceives complaints as opportunities for improvement can be transformative. Establishing transparent rules or guidelines for how you prioritize issues and feature requests can help manage expectations and foster a more collaborative relationship with the users in your community. While not all complaints are actionable, actively listening to feedback and explaining your prioritization criteria goes a long way toward building trust and goodwill. Open communication and a willingness to consider diverse perspectives lead to a stronger, more user-centric product in the long run.
The philosophy of Complaint-Driven Development provides a simple, transparent way to prioritize issues based on user feedback:

1. Get your software in front of as many real users as possible.
2. Listen to everything they complain about.
3. Identify and fix the top 3 things they keep complaining about.
4. Repeat.
Following these straightforward rules allows you to address the most pressing concerns voiced by your broad user community, rather than prioritizing the vocal demands of a few individuals. It keeps development efforts focused on solving real, widespread issues in a transparent, user-driven manner.
Here’s a suggestion that could help you implement this approach: Consider periodically making a post like “What are your complaints about Lemmy? Developers may want your feedback.” This post encourages users to leave one top-level comment per complaint, allowing others to reply with ideas or existing GitHub issues that could address those complaints. This will help you identify common complaints and potential solutions from your community.
Once you have a collection of complaints and suggestions, review them carefully and choose the top 3 most frequently reported issues to focus on for the next development cycle. Clearly communicate to the community which issues you and the team will be prioritizing based on this user feedback, and explain why you’ve chosen those particular issues. This transparency will help users understand your thought process and feel heard.
As you work on addressing those prioritized issues, keep the community updated on your progress. When the issues are resolved, make a new release and announce it to the community, acknowledging their feedback that helped shape the improvements.
Then, repeat the process: Make a new post gathering complaints and suggestions, review them, prioritize the top 3 issues, communicate your priorities, work on addressing them, release the improvements, and start the cycle again.
By continuously involving the community in this feedback loop, you foster a sense of ownership and leverage the collective wisdom of your user base in a transparent, user-driven manner.
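To make the “review and pick the top 3” step concrete, here is a minimal sketch in Rust (Lemmy’s own language) of tallying complaint topics gathered from such a post. The topic strings and the `top_three` helper are hypothetical, purely for illustration:

```rust
use std::collections::HashMap;

/// Tally complaint topics and return the three most frequently reported.
fn top_three(complaints: &[&str]) -> Vec<(String, usize)> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &topic in complaints {
        *counts.entry(topic).or_insert(0) += 1;
    }
    let mut ranked: Vec<_> = counts.into_iter().collect();
    // Most-reported first; ties break alphabetically so the ranking is stable.
    ranked.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(b.0)));
    ranked
        .into_iter()
        .take(3)
        .map(|(topic, n)| (topic.to_string(), n))
        .collect()
}

fn main() {
    // Hypothetical topics extracted from top-level comments on the feedback post.
    let complaints = [
        "federation lag", "mod tools", "federation lag",
        "search", "mod tools", "federation lag", "onboarding",
    ];
    for (topic, n) in top_three(&complaints) {
        println!("{topic}: reported {n} times");
    }
}
```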
Where? I haven’t heard any of that.
Trust levels themselves are just Karma plus login/read tracking, aka extra steps.
Trust Levels are acquired by reading posts and spending time on the platform, instead of receiving votes for posting. Therefore, it wouldn’t lead to low-quality content unless you choose to implement it that way.
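For illustration, here’s a hypothetical sketch in Rust of that distinction. The `Activity` struct and every threshold are made up, only loosely inspired by how Discourse gates levels on reading activity rather than votes received:

```rust
// Trust grows from reading and time spent on the platform,
// not from votes received. All thresholds here are invented.
struct Activity {
    posts_read: u32,
    days_visited: u32,
    minutes_reading: u32,
}

fn trust_level(a: &Activity) -> u8 {
    if a.posts_read >= 100 && a.days_visited >= 15 && a.minutes_reading >= 600 {
        2 // established reader: eligible for more privileges
    } else if a.posts_read >= 30 && a.days_visited >= 5 {
        1 // basic: new-account restrictions lifted
    } else {
        0 // new: still sandboxed
    }
}

fn main() {
    let lurker = Activity { posts_read: 120, days_visited: 20, minutes_reading: 900 };
    // A user who never posted (zero karma) can still reach level 2.
    assert_eq!(trust_level(&lurker), 2);
}
```

A user with zero posts, and therefore zero karma, can still reach a high trust level purely by reading, which is exactly what a vote-based system can’t express.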
The Karma system is used more as a bragging right than to give any sort of moderation privilege to users.
But in essence it’s similar: you get useless points with one and moderation privileges with the other.
If you are actually advocating that the Fediverse use Discourse’s service, you have to be out of your mind.
You are making things up just so you can call me crazy. I’m not advocating anything of the sort.
On a basic level, some sandboxing, i.e. image and link posting restrictions along with rate limits for new accounts and new instances, is probably a good idea.
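A rough sketch of what that sandboxing could look like; the `Account` struct, the `may_post_links` check, and the one-day threshold are all hypothetical:

```rust
use std::time::{Duration, SystemTime};

struct Account {
    created_at: SystemTime,
    trust_level: u8,
}

// Brand-new accounts can post text, but image and link posting
// unlock with account age or trust level. Purely illustrative.
fn may_post_links(acc: &Account) -> bool {
    let age = SystemTime::now()
        .duration_since(acc.created_at)
        .unwrap_or(Duration::ZERO);
    acc.trust_level >= 1 || age >= Duration::from_secs(60 * 60 * 24)
}

fn main() {
    let fresh = Account { created_at: SystemTime::now(), trust_level: 0 };
    assert!(!may_post_links(&fresh)); // new account: links still restricted
}
```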
If there were any limits for new accounts, I’d prefer if the first level was pretty easy to achieve; otherwise, this is pretty much the same as Reddit, where you need to farm karma in order to participate in the subreddits you like.
However, I do not think “super users” are a particularly good idea. I see it as preferable that instances and communities handle their own moderation with the help of user reports - and some simple degree of automation.
I don’t see anything wrong with users having privileges; what I find concerning is moderators who abuse their power. There should be an appeal process in place to address human bias and penalize moderators who misuse their authority. Removing their privileges could help mitigate issues related to potential troll moderators. Having trust levels can facilitate this process; otherwise, the burden of appeals would always fall on the admin. In my opinion, the admin should not have to moderate if they are unwilling; their role should primarily involve adjusting user trust levels to shape the platform according to their vision.
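As a sketch of how trust levels could keep appeals from always landing on the admin, something like the following, with made-up types and a made-up threshold, could route an appeal to high-trust users first and only fall back to the admin:

```rust
type UserId = u64;

enum Reviewer {
    TrustedUser(UserId),
    Admin,
}

/// Pick the first reviewer at or above the (invented) trust threshold;
/// only escalate to the admin when no trusted user is available.
fn route_appeal(reviewers: &[(UserId, u8)]) -> Reviewer {
    reviewers
        .iter()
        .find(|(_, trust)| *trust >= 3)
        .map(|(id, _)| Reviewer::TrustedUser(*id))
        .unwrap_or(Reviewer::Admin)
}

fn main() {
    let mods = [(42u64, 1u8), (7, 3)];
    // User 7 has trust level 3, so the appeal goes to them, not the admin.
    match route_appeal(&mods) {
        Reviewer::TrustedUser(id) => println!("appeal routed to user {id}"),
        Reviewer::Admin => println!("appeal escalated to admin"),
    }
}
```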
An engaged user can already contribute to their community by joining the moderation team, and the mod view has made it significantly easier to have an overview of many smaller communities.
Even with the ability to enlarge moderation teams, Reddit relies on automod bots too frequently, and we are beginning to see that on Lemmy too. I never see that on Discourse.
Karma promotes shitposting, memes, and such; I’ve yet to see that kind of content on Discourse.
Yeah, and the FOSS alternative Codidact isn’t any better. What’s the point of asking for solutions to bugs when even an LLM can solve those already? I want proper solutions to actual problems, so that I can find everything in there, not just bug troubleshooting.
I don’t know how that works. Why would I have to do anything to participate in the discussions? The curation can be done by whoever wants to do it.