These cheerful, well-meaning, educated, presumably average to higher than average IQ people are our enemies: https://forum--gegen--fakes-de.translate.goog/de/aktuelles/wie-begegnen-wir-desinformation-der-buergerrat-uebergibt-seine-empfehlungen-an-die-bundesinnenministerin-kopie-1?_x_tr_sl=de&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp.

They are "Head Girls": https://www.eugyppius.com/p/once-more-on-renowned-fool-emily. Condescending know-it-alls who protect ordinary people for their own good, even when those people believe they neither want nor need such protection; interfering numbskulls; vandals of freedom and civilisation. Alternatively, they are pathetic individuals who don't trust their own judgement and who seek the protection of such Head Girls.

Adapting Muriel Spark's (https://hipsterbookclub.livejournal.com/808367.html) "pisseur de copie", these people are pisseurs sur la vérité, because they have no idea that wide-ranging discussion is the best - and broadly the only - way to learn about reality and how to understand and evaluate it. They think everyone can rely on centralised government, expert structures and/or AI for this. Have they never heard of groupthink? (Irving L. Janis, Psychology Today, 1971: https://www.wcupa.edu/coral/documents/groupthink.pdf.)

There is a common belief that massive harm, reasonably described as great evil, arises primarily or solely in a top-down pattern, where a small group of people - an elite - or perhaps a single person, directs actions which result in lasting and pervasive harm.

But here we see the opposite: ordinary people enjoying and thriving in an environment of free speech which prior generations fought and died for, arguing and pleading for their government to prevent anyone from writing, speaking or otherwise communicating whatever it is they, or in practice the government, defines as "disinformation".

They are so sure that society can't cope with whatever it is they disagree with that they propose preventing the dissemination of any communication which algorithms and/or human content classifiers deem to be even possibly "misinformation": "oblige the platforms to programme their algorithms in such a way that possible disinformation is not disseminated and not recommended to users". (Via: https://dailysceptic.org/2024/09/19/german-citizens-council-calls-for-criminalisation-of-disinformation/.) See the full report, https://forum-gegen-fakes.de/fileadmin/files/FGF/Buergergutachten_Forum_gegen_Fakes.pdf, page 13: "Die Algorithmen sollen dafür sorgen, dass Inhalte, die Kennzeichen von Desinformation aufweisen, nicht verbreitet werden". Google translation: "The algorithms are intended to ensure that content that has the characteristics of disinformation is not spread."

Page 37: "Desinformation wird definiert als „gezielte Falschinformation, die verbreitet wird, um Menschen zu manipulieren."" Google translation: "Disinformation is defined as 'targeted false information that is spread to manipulate people.'"

This involves judgements about correct and incorrect beliefs, the possible effects of text, image, video etc. on such beliefs and the intentions of the person who posted such material.

Maybe these people have been gathered and supported by a rich and either accidentally or deliberately evil elite. But no-one is forcing them to participate, or to support this proposal.

The ability of AI to generate images, video and sound which look pretty much indistinguishable from a known person saying or doing things they never said or did is a genuine, novel problem. However, the solution is more speech to debate it, and especially for people to get into the habit of always providing sources - links - for whatever it is they communicate. Far too often on social media, even professional people post an image, a video or a piece of text as if it were genuine, without providing the links which would enable the reader to verify its authenticity.

Twitter, or whatever it is called now, has a fair go at providing Community Notes, which enable interested readers to see critical commentary, *with* proper references, on the material in question. This is not censorship or even moderation of content. It is a constructive way of dealing with doubts which may arise about the authenticity of material, and/or the validity or applicability of arguments. There's no need for governments to try to regulate the adoption of such arrangements. The social media companies which care about authenticity - or even those which don't, but respond to their users' desire to easily check authenticity - will implement such arrangements.

Unfortunately, many people don't care a hoot about rigour and truth. They will post whatever their tribe expects them to post, and will support government measures to stop anyone else posting material to the contrary.