LLMs, Truth, and Democracy: An Overview of Risks

Mark Coeckelbergh
Author Information
  1. Mark Coeckelbergh: Department of Philosophy, University of Vienna, Vienna, Austria. mark.coeckelbergh@univie.ac.at

Abstract

While there are many public concerns about the impact of AI on truth and knowledge, especially given the widespread use of LLMs, there has been little systematic philosophical analysis of these problems and their political implications. This paper aims to assist that effort by providing an overview of some truth-related risks in which LLMs may play a role, including risks concerning hallucination and misinformation, epistemic agency and epistemic bubbles, bullshit and relativism, and epistemic anachronism and epistemic incest. It also offers arguments for why these problems are not only epistemic issues but also raise problems for democracy, since they undermine its epistemic basis, especially if we assume theories of democracy that go beyond minimalist views. I end with a short reflection on what can be done about these political-epistemic risks, pointing to education as one of the sites for change.

MeSH Terms

Humans
Democracy
Politics
Knowledge
Risk
Communication
