Sunday, January 18, 2026

Wikipedia - the largest pool of common knowledge in human history!


How is the human ability to scale up knowledge made possible by a moral system? One of the best examples I can think of is Wikipedia.

According to founder Jimmy Wales, in his book The Seven Pillars of Trust, “Wikipedia is a nonprofit encyclopedia written and edited by volunteers,” and “Wikipedia is free content that anyone can use, edit, and distribute.” But also, “Wikipedia has no firm rules.” The fact is that in Wikipedia anyone can edit anything, and everyone’s edits are themselves subject to editing.

Reading or hearing these words, one can easily conjure up a free-for-all: a morass of truth mixed with falsehood, with no way to distinguish them. But that’s not actually what you get.

I think we can take Wales’s rule about no firm rules with a grain of salt, because in the same book he goes on to say: “We’ve never permitted deliberate falsification, and when we spot people doing it, we block accounts, or if necessary, temporarily or permanently block their Internet Protocol (IP) address.”

Wales notes that Wikipedia has a “three revert rule,” meaning that “if you can’t settle your disagreement, and you’ve already reverted the edit three times, Stop. Ask others to take a look and offer an opinion….” And, “Editors who go beyond three reverts may be blocked,” he says.

No one is in charge of Wikipedia! The rules are not written on a stone tablet, but they exist in practice and by general consent, and people who consistently break them end up blocked from editing. Jimmy Wales’s role was to conceive of an internet encyclopedia, recruit the first volunteers, facilitate their agreement on how to go about it, and then, aside from publicizing the project and asking for donations, basically get out of the way and let it happen.

Wales emphasizes that it is not the rules that make Wikipedia work; it is trust on a personal level. The editors are volunteers, and they call themselves “Wikipedians.” So there are rules for editing Wikipedia, but they are more like guidelines, enforced when people repeatedly ignore them or take advantage of the relaxed attitude. Ultimately, people who abuse the system get blocked from Wikipedia.

Wales appears to be correct that Wikipedia runs on trust; the guidelines and rules stay in the background, mostly invisible to Wikipedia’s users. The basic attitude that pervades Wikipedia is to trust that editors are not editing for malicious reasons, and when that trust is proven misplaced, the offenders are blocked and effectively ejected from the Wikipedian community.

Jimmy Wales seems to have a pretty relaxed attitude about “firm rules” compared to what he might think about someone caught stealing a car or shooting someone. But the fact is, any damage a malicious person does to a Wikipedia article can be deleted, the accurate account restored, and the perpetrator permanently banned from editing the site, all far more easily than moral rules can be enforced in real life. Remember, the worst that can happen is to have your IP address banned from editing Wikipedia.

Still, in spite of this relaxed attitude about editing Wikipedia, there are red lines, and there are consequences for crossing them, as in all moral systems. In a moral system there are rules that, if consistently broken, will get the perpetrator shunned and banished from society. For Wikipedians the red lines include the one against deliberate falsification and the three revert rule, and their observance and enforcement are what make it possible for Wikipedians to trust editors they don’t know. Initially, people are given the benefit of the doubt; enforcement follows violations of the behavioral rules, and that ultimately facilitates trust.

To participate in Wikipedia is, in effect, to commit to the collective project, which eventually means taking part in enforcement against violations, and even recruiting other participants when an individual’s efforts alone cannot stop them. This is enough to sustain trust among the Wikipedians and to build an online encyclopedia with seven million articles and counting and 130 billion page views per year, the most comprehensive and popular encyclopedia in history. Wikipedia is a perfect example of the massive spread of human knowledge, and the best evidence that this spread is largely made possible by rules that limit violations and exclude persistent violators, rules that are not imposed from the top down but enforced collectively by volunteer participants. A moral system makes knowledge possible.


                               The danger from AI


Now, what knowledge technology have we recently come to recognize as having the potential for moral abuse? Something about AI that should concern everybody is that artificial intelligence systems themselves have no moral responsibility. They are not persons in any sense. They cannot exercise moral responsibility, nor be held accountable. Without accountability there is no respect for truth, and knowledge becomes a dead end. If we allow AI systems to dominate our knowledge systems, they will destroy knowledge as a human commons.

In “The academic community failed Wikipedia for 25 years — now it might fail us,” Dariusz Jemielniak writes: “...AI systems trained heavily on Wikipedia are now threatening the future of this free, volunteer-driven resource... Large language models offer instant, Wikipedia-derived answers without any attribution.” When AI chatbots provide seemingly authoritative responses drawn from Wikipedia’s very pages, Jemielniak asks, why would anyone navigate to the source, let alone contribute to it?

“This parasitic relationship,” writes Jemielniak, “endangers the last bastion of freely accessible, human-curated knowledge and undermines the premise of collaboration on which many of Wikipedia’s knowledge-sharing practices rely.”

Jemielniak points out: “Wikipedia represents something unprecedented: the only major platform on which truth emerges through transparent debate, rather than algorithmic opacity or corporate interests. Every edit is logged, every discussion archived.”

“In an era of AI hallucinations, black-box algorithms and widespread disinformation,” states Jemielniak, “Wikipedia’s radical transparency has become even more essential.”

It is becoming apparent to me that AI is capable of sucking up everything in the common pool of knowledge for the sake of corporate profits. Everyone can do his or her part by going directly to Wikipedia with their queries, rather than depending on AI systems that lack the ability to make moral distinctions and do not acknowledge the sources of their knowledge as Wikipedia does. I pledge to do this myself from now on.
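For anyone who looks things up programmatically, here is a minimal sketch of what “going directly to Wikipedia” can look like, using Wikipedia’s public REST page-summary endpoint. The helper name and the example topic are my own illustrative choices, not part of any official tool; the point is simply that the answer arrives with its attribution attached.

import json
import urllib.parse
import urllib.request

# Wikipedia's public REST endpoint for a short article summary.
WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/"

def wikipedia_summary(topic):
    """Fetch the lead summary of the Wikipedia article on `topic`, with a link back."""
    title = urllib.parse.quote(topic.strip().replace(" ", "_"))
    request = urllib.request.Request(
        WIKI_SUMMARY + title,
        headers={"User-Agent": "wikipedia-direct-lookup/0.1 (demo)"},  # polite identification
    )
    with urllib.request.urlopen(request) as response:
        page = json.load(response)
    # Unlike a chatbot answer, the reply keeps its attribution: the extract
    # comes with the article title and a URL pointing back to the source.
    return "{}\n\n{}\n\nSource: {}".format(
        page["title"],
        page["extract"],
        page["content_urls"]["desktop"]["page"],
    )

if __name__ == "__main__":
    print(wikipedia_summary("Moral responsibility"))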
