Cooperation is the great challenge of democratic practice [29]. To be able to cooperate, people must be able to transcend their own interests to such an extent that they can meet other points of view with respect and include them in a cooperative practice. Cooperation does not require a convergence of opinions and interests, but rather mutual recognition of one another as free and equal, as well as the willingness to exchange reasons and to coordinate across different interests to the extent that cooperation is possible. Rawls
[30] noted that “a plurality of reasonable, yet incompatible, comprehensive doctrines is the normal result of the exercise of human reason within the framework of the free institutions of a constitutional democratic regime”. Briefly, Rawls, along with many others, contends that the institutions of modern democracies, rooted in toleration and the recognition of what economists term bounded rationality, as well as what Rawls labeled the burdens of judgment, will inevitably give rise to a multitude of diverse beliefs and moral perspectives. This means that despite our best efforts to employ reason and reach consensus in significant conversations, disagreement frequently arises. We understand Rawls as indicating that an individual’s faculties of judgment must bear the weight of his or her personal history, which encompasses the inevitable repercussions of subjective experiences. These factors impose constraints on our rationality, leading to the emergence of disagreements. Transcending one’s own interests and opinions does not mean abandoning them but taking them into account in joint practice with other interests and opinions. The criteria of appropriate reconciliation are themselves part of political practice; they cannot be established externally by ethics, philosophy, social science, or expertise. These disciplines merely contribute to conceptual and normative clarification. They can help to assess expected consequences and side-effects, draw attention to value conflicts, direct attention to phenomena that are neglected in public discourse, and give a voice to minorities as well as to silent majorities. Yet, they cannot replace the political process itself, for that would be the end of democracy. At its essence, democracy rests on a civic foundation, wherein the pivotal element is the capacity of autonomous and equal citizens to engage in discourse; without this, democracy erodes.
As we have tried to show, the talent of reason is a skill that requires practice. As the practice becomes more sophisticated, it may evolve into an ethos, serving as the foundation for a community, a collective lifeform. This foundation rests on the independent use of reason and on common discourse. One such community that promotes and demands this skill is the democratic society.
Nevertheless, the consequences of an excessive reliance on LLMs remain a subject of concern. Exploring the ways in which LLMs can be harnessed to uphold our identity as free and equal individuals is of paramount importance. Consequently, we should scrutinize the impact of ChatGPT on our comprehension of democracy and our autonomous selves. From our perspective, the various use cases of extensive LLM utilization can be classified into three categories of threats to democracy: (1) losing the skill of reasoning by relying on automated argumentation machines, (2) the question of authentic identity, and (3) the post-facticity argument. These three arguments are by no means all-encompassing but are fundamental to living a life as autonomous and free citizens. We will now review and discuss each of these three arguments in more detail.
3.1. Losing Skills via Substitution
In this section, we will address the losing skills via substitution argument: the concern that our ability to reason may decline if we use LLMs for argumentation purposes, and what this would mean for the democratic lifeform. Considering the normative educational imperative to engage our reasoning abilities, which is crucial for us as autonomous individuals, it becomes imperative to deliberate more thoughtfully on how machines can be judiciously employed to assist us in our journey as self-determining human beings. The notion that technological advancements contribute to a decline in skill acquisition is far from novel. It dates back to the invention of the printing press and the introduction of calculators, which freed people from the need to engage in laborious calculations. This trend extends to the use of recipe books in culinary practices and the general dependence on search engines and the internet. The latter has diminished the imperative of retaining factual information, thereby potentially compromising our memory capabilities—a phenomenon some have referred to as the “Google effect”
[31]. There seems to be a worry among many, as Nyholm [32] observes, that AI “might make us less intelligent—or perhaps less prone to use or develop our cognitive abilities” [32].
Nicholas Carr [33], for example, illustrates the loss of basic skills due to automation in his book The Glass Cage with the work of architects, who no longer learn technical drawing independently but above all learn to use computer-aided design programs. In the process, he says, they lose some of their ability to use spatial imagination, which is exercised precisely in the drawing of geometric figures. In fact, digital media exposure can significantly influence the language networks in the brains of young children, shaping the development of language processes such as semantics and grammar [34]. A similar phenomenon, familiar to most, can also be observed with AI-supported spelling and grammar aids, particularly when endowed with predictive text functionality—wherein the program anticipates and suggests the subsequent word during the writing process. We react to this support by formulating texts more thoughtlessly, assuming that the program will ensure their appropriateness, and thus walk more and more on digital crutches without any intention of ever putting them down again. Notably, the predictive text feature establishes a robust anchor or benchmark dictating the trajectory of the text, exerting a discernible influence on thought processes by furnishing the “most likely” next word and potentially diverting from the originally intended expression.
Prior to the advent of ChatGPT, research and writing procedures constituted a valuable means of cultivating cognitive skills. As we have shown, these basic abilities of forming a judgment, understanding what a judgment is, and determining how we arrive at it require practice. It is precisely this ability to reason that we should not hand over completely and without consideration; rather, we should focus on the application of critical thinking and self-reflection as an expression of authorship. Writing and arguing are among the appropriate means of practicing this skill and applying it in a wide variety of areas. Consequently, this process played a pivotal role in fostering this skill set and, thus, in making well-informed decisions. Engaging in a topic via self-generated text pushes us into an “active, persistent, and careful consideration of a belief or supposed form of knowledge in the light of the grounds which support it and the further conclusions to which it tends”
[35]. However, the emergence of LLMs may significantly streamline writing endeavors, raising concerns about the potential impact on the development of reasoning skills. Should students overly depend on text passages supplied by ChatGPT or analogous tools, there exists a risk of foregoing essential engagement in the critical thinking and problem-solving processes vital for cognitive enhancement [36]. This escalating concern pertains to the potential erosion of people’s capacity to generate original ideas and construct cogent arguments substantiating their perspectives [37].
This pragmatic argument is reminiscent of the substitution argument, a concern that can be traced back to the introduction of writing, an apprehension that can already be found in Plato:
“I cannot help feeling, Phaedros, that writing is unfortunately like painting; for the creations of the painter have the attitude of life, and yet if you ask them a question they preserve a solemn silence. And the same may be said of speeches. You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves.”
Plato’s point is that the written word cannot respond sensitively to the reader. It can be read at will. Ideas can spread quickly, even among those who misunderstand them. The danger may be averted by the interactivity of chatbots; after all, chatbots can react and can be asked to give explanations once more. Inaccessible passages can be paraphrased; interpretations can be requested. Reasoning can remain in dialogue. The discourse does not break off here. This may be taken as an advantage, given the possibility that people may thereby have access to some sort of AI expert.
As the EU points out, these interactive LLM chatbots can be used in many ways for democratic education due to their engaging design. Chatbots may respond to citizens’ questions on candidates’ electoral programs or update citizens on how policies in which they have an interest are evolving. On the one hand, they can foster civic dialog; on the other hand, they can summarize citizens’ views and stances for politicians [39]. Interestingly, these suggestions point to the fact that LLMs are to be deployed as deliberative tools rather than as mere decision-making tools [40]. Deliberative tools aim at supporting the understanding of political arguments and at gaining insight into political work, thereby encouraging citizens to use their power of judgment.
It is pertinent to underscore that skills can undeniably be enhanced through the utilization of LLMs. In particular, once a skill set has been acquired, it might be deemed imprudent to engage in laborious procedural steps to apply it personally when it could be effectively delegated to a machine, just as it seems imprudent, or at least impractical, to carry out complex arithmetic calculations by hand. The pivotal consideration revolves around the timing of task delegation—whether it occurs before or after the acquisition or mastery of a skill by the human agent. Skill loss can, of course, only occur when a skill was present in the first place. According to Arthur et al. [41], cognitive skills are especially susceptible to skill loss. Premature and constant delegation may pose potential risks to our societies rooted in humanistic democracy, as we explicated. The crux of the matter lies in finding a balance between skill acquisition and the delegation of tasks to machines to attain efficiency. Alternatively, we can frame the question as follows: how can we cultivate a coveted talent, and at what point does it transform into a consistently effective skill, that is, when does it attain a state of sufficient proficiency? The second question, especially, is fundamentally an empirical one, necessitating comprehensive research in the years to come. It is, however, important for us to take a proactive stance in navigating this transformative landscape, remaining vigilant of both the risks and benefits that accompany this paradigm shift and ensuring that the foundations of a free and democratic society are left intact.
3.2. Authenticity: Whose Capabilities and Skills Are Used?
Another aspect that is normatively constitutive for democratic societies, and thus of great importance, is what we would like to call the authenticity argument. It can be illustrated by Heidegger’s metaphor of language as the “house of being” [42]. Language as the “house of being” in which man dwells very aptly describes the significance that language assumes for man in its function of creating meaning and understanding connections. In the uncovering of reason, man creates individuality by recognizing himself. We will refer to this relationship and positioning as authenticity: we ascribe authenticity to a person or his or her speech and actions if their “statements seem genuine, credible and truthful to us, i.e., true, i.e., not invented, fictitious, just made up or fake” [43] (p. 9—own translation). This refers to the core of authorship and how individuals ascribe attitudes and actions to themselves by always already anticipating that these originate within the individual. Even more, through those deeds and speeches, their lives become unique. This renders discourse intelligible to onlookers as well: acknowledging the autonomy of others in their speech and actions and recognizing that their words and behavior emanate from their own selves. We experience ourselves and the other as being autonomous, as being responsible for our utterances and actions
[17]. For example, Nietzsche points out that in laborious phases he even forbade himself to read books because he did not want anyone’s thoughts and voice near him: “It would never occur to me to allow anyone to speak or even to think in my presence. For that is what reading would mean” [44]. Nietzsche was afraid that a foreign thought would creep over the mental wall he had erected to protect his genuineness—his being sui generis.
The infusion of LLMs into democratic processes may thus raise profound concerns regarding the expression of one’s own thoughts and standing up for them as a supporting ideal of democracy. Authenticity, characterized by statements perceived as genuine, credible, and truthful, is fundamentally challenged by the inherent limitations of LLMs. In democracy, where transparency, trust, and well-informed decision-making are foundational, the potential biases, susceptibility to manipulation, and lack of accountability in machine-generated content pose significant threats. The absence of a human touch and the inability to trace information back to its sources erode the core tenets of authenticity, thereby challenging the integrity of democracy. Striking a balance between the efficiency of automation and the preservation of authenticity is crucial to safeguard the essence of democracy in the face of technological advancements.
As we outlined, the intriguing aspect of LLMs such as ChatGPT lies in their remarkable appearance, as if their output resembled human argumentation. The generated text is intricately intertwined with the specific context, making it deceptively easy for us to misconstrue its outputs as our own work. The authenticity argument refers to different levels that may underpin democratic ideals: on the one hand, it is difficult to trace arguments back to their origin, not knowing where they stem from; on the other hand, individuals themselves do not know whether an idea is their own or whether it belongs to the machine. We appropriate text modules and excerpts from search results, and the responses from ChatGPT give the, albeit wrong, impression that we authored them ourselves. After all, it was our formulation of the pertinent questions that causally led to the very existence of the text.
In other words, the ability of LLMs like ChatGPT to appear as if a human being were arguing introduces a nuanced challenge to authenticity in democratic discourse. The seamless integration of machine-generated content, where only the prompts are human-initiated, blurs the lines between genuine authorship and automation. Commonly, we find that the resulting persuasive quality of the generated text, coupled with the intricate contextualization, creates a scenario where users may inadvertently appropriate the content as their own. This not only raises concerns about the authenticity of individual expressions but also challenges the genuine approach to self-positioning and self-discovery in democratic dialogue. The temptation to use generative AI as a shortcut to self-representation introduces a layer of complexity, highlighting the need for careful consideration of the ethical implications surrounding the integration of such technologies into democratic processes.
This phenomenon in many ways contradicts an authentic approach to life: there is no need for us to undergo the process of self-positioning or self-discovery; nevertheless, we can portray ourselves as genuine. Generative AI appears to offer a shortcut to self-representation. This is an enhancement argument insofar as our human existence is improved by the use of technology. However, any enhancement of our capabilities through technology leads, as Coeckelbergh [
45] points out, to a new form of vulnerability. Here, then, we need to think about how our human existence is transformed in its vulnerability in order to evaluate the use of this technology. Vulnerability must be addressed with a central focus on the philosophical inquiry into the essence of subjectivity and its manifestation.
This goes hand in hand with the question of how we value machine output. Machine-generated output seems to differ from human work due to its inherent lack of authenticity, especially when fully disclosed. A notable example is the incident involving Vanderbilt University, where an email of condolence regarding a mass shooting at another university was composed using ChatGPT. The response from people conveyed a sense of betrayal and inappropriateness, highlighting a perceived deficiency in “personal connection and empathy” [
46].
Merely formulating a prompt seems insufficient to assert originality when the output relies on text passages crafted by others or by machines. We as individuals must try to find arguments and counterarguments and weigh them. We must know where these arguments and the factual knowledge come from, including the origins and contexts of statements. We should be aware of when and by whom these statements were made. All this is engulfed within machine-generated output, as if it were unimportant and inconsequential. Knowledge about the genesis of attitudes, however, creates the possibility of positioning oneself, of coming to oneself, and of individualizing oneself.
Being able to articulate and defend your perspective is a democratic virtue. It involves supporting your views with good reasons, engaging in discussions, and exchanging ideas openly. This necessitates respect for and confidence in your own judgment, ultimately contributing to what we would consider an authentic person. This raises a normative query regarding how we intend to manage interactions between humans and machines in this context: To what degree does the output of a machine–human collaboration alter the concept of authenticity? Additionally, if we can no longer speak of authentically expressed arguments because they have been synthetically and unrecognizably formed and adopted, then we can no longer simply speak of responsibility. Who is the author of the information expressed? Who can be held responsible for this linguistic act? Is it the user who passes on the synthesized text? Is it the developer who created the product? Is it the company that has not standardized the language output well enough? Is it the people who supplied the data, i.e., who created the text sources but can no longer be traced? Is it the political system and the inadequate social discourse that has taken place?
In the future, entrusting machines with poetry, text interpretation, literature, journalism, and scientific writing poses a risk. It could lead to a society of mere imitators confined within a self-referential world. This concern intensifies with the projection that by 2026, 90% of online content will be synthetically generated [39]. Moreover, if machines increasingly shape their input based on machine-generated output, humans might relinquish their role in cultural mediation. This involves the compilation, transmission, storage, and dissemination of cultural elements. In our house of being, we find ourselves silenced to the extent that we might eventually lose the desire to speak at all.
What implications does this have for our comprehension of cooperation predicated on individual autonomy? How can the legitimacy of our democratic society be upheld when its public discourse is no longer driven by individuals but rather by seemingly impartial computer-generated content? For instance, LLMs can be employed to craft responses to regulatory processes or to compose letters to the editors of local newspapers, which may subsequently be published [39]. Furthermore, the issue extends beyond the matter of copyright pertaining to the training data used, encompassing our approach to handling arguments stemming from human–machine interaction. The New York Times’ lawsuit against OpenAI, the developer of ChatGPT, and Microsoft for the unauthorized use of their journalistic contributions in the training of the chatbot highlights the problem of copyright and plagiarism [47]. This necessitates a nuanced proficiency in managing outcomes, their origins, and their evolution, thereby raising the question of how we can cultivate our discernment when engaging interactively with a counterpart whose output can never genuinely express intent.
However, this does not yet capture manipulation in the full sense: skillful adversarial machine learning, such as data poisoning, constitutes the specific aptitude to manipulate datasets to achieve desired outcomes or to deliberately distort results, as the example of Nightshade impressively shows for pictures [48]. With regard to democracy, this may lead to voter manipulation beyond the distortion of the electoral system or of election campaigns. Even if the phenomenon is already known from social media [49,50], the personalized interactive chat function allows for a much more targeted way of addressing users and their authorship.
3.3. Lost in Post-Factual Times?—Teach Concepts, Not Things
What also constitutes part of authenticity and originality is the aspect that we often think of LLMs as knowledge generators. Nevertheless, we have to treat this argument separately: let us call this the facts argument. Generative AI systems map the state of opinion in their data input, which is why they conserve a variation of the dataset but do not extend it. Extensions happen as associations, which arise by chance. If these random outputs extend the state of knowledge, they lead to an unsafe state of facts. However, these skewed and unintended post-factual outputs cannot be readily identified as such.
ChatGPT gives many people the impression that it represents current world knowledge, as comprehensive encyclopedias did in earlier decades and centuries. These attached great importance to source citations and precise evidence, as is still the case today with Wikipedia, whereas the products of text generation systems do not go through an editorial review process and do not (yet) cite sources (correctly) but instead oftentimes invent fictitious sources—a process widely known as “hallucination” [51]. Even if not all knowledge is fully reproduced by people, we can turn to experts and hold them responsible for wrong statements. In the realm of AI, there seems to be, at least prima facie, no identifiable entity to hold accountable for defamation and untrue assertions, neither in a legal nor a moral sense. The resolution of this ethical concern requires political and legal decisions. After all, we are familiar with fake news and conspiracy theories from social networks. In this scenario, a solely technical resolution does not seem feasible. In the pursuit of truth, expecting a flawless track record from LLMs is akin to demanding perfection from established institutions like encyclopedias or even the scientific literature. We accept a margin of error in scientific discoveries and, in fact, even strive for falsification, since we readily accept that our current knowledge might not prove to be true in the future. We also recognize that encyclopedias, while comprehensive, may undergo revisions and updates, or that some facts are in reasonable dispute (e.g., historical facts). Similarly, understanding the nuanced nature of LLM-generated content and acknowledging the continuous learning process involved allows us to appreciate the information they offer without imposing an impossible standard of infallibility. Nil satis nisi optimum is not a standard that we should expect LLMs to achieve; yet, understanding that an LLM is not necessarily a truth-generating machine must be seen as a crucial aspect of our interactions with these machines. Frequently, we rely on the quality of a text as a signal of reliability. It signals that someone invested effort in creating it, increasing the probability of the text being worthwhile. However, this signal is diminished with LLMs.
Thus, as with any source of information, a discerning approach that considers multiple perspectives and verification methods remains crucial in evaluating a satisfactory threshold for truth. It is critical that we are careful not to unlearn the ability to check the outputs of AI, and that we retain the capacity for independent thinking so that we remain able to check AI-generated texts for plausibility and sense.
Even efforts to label outputs as machine-generated do not necessarily alleviate this concern. Ultimately, these markers can themselves be manipulated, and the handling of the output may still not change, allowing misinformation to spread. Once it is in circulation, containment becomes a challenging endeavor. Politicians often interpret correspondence received from their constituents as a reflection of public sentiment that can inform their decision-making—even if it is a new form of publicity. Furthermore, with the emergence of artificial intelligence, a concerning development known as astroturf campaigning has become notably easier. This strategy makes it possible for a small, disguised group to pose as a legitimate grassroots movement, disseminating a biased representation of public opinion. Additionally, AI has the potential to foster the illusion of widespread political consensus by flooding online platforms with millions of automatically generated posts on a given subject [39]. As LLMs become increasingly personalized and interactive, there is a heightened risk that people may not critically assess the information they encounter or fully understand its origins.
3.4. Education as a Cornerstone of Democracy
As previously emphasized, the core of the democratic lifeform is found in the interchange among individuals articulating their reasoning and collectively reaching agreements. This exchange, grounded in mutual respect among free and equal individuals, extends across all aspects of our shared existence. This does not imply a uniformity of interests, preferences, or desires among us, nor an obligation to adopt them as our own. Rather, it suggests expressing them in a manner that upholds and safeguards the autonomy and equality of those who hold contrasting views. This, in turn, is based on our ability to exercise our own—authentic—power of judgment. Practice aligns with skill. The power of judgment needs to be trained to apply rules of understanding and principles of reason to singular events. Turning to Plato once more:
“Is not rhetoric, taken generally, a universal art of enchanting the mind by arguments; which is practiced not only in courts and public assemblies, but in private houses also, having to do with all matters, great as well as small, good and bad alike, and is in all equally right, and equally to be esteemed—that is what you have heard?”
Hence, as we aim to convey, active participation in argumentation appears crucial for inclusion and engagement in various social spheres. Articulating, expressing, and standing for one’s perspective are simultaneously vital aspects of personal authenticity. Furthermore, they are skills imperative for all individuals to navigate and thrive in a democratic society, promoting equality and freedom.
In a world in which AI-driven machines increasingly impact our everyday lives, the question thus arises as to where we should create spaces for independent expressions of thought and action—in which we should develop, teach, and learn the activity of reasoning. Leveraging LLMs undoubtedly holds promise for enhancing educational outcomes, as posited by Kasneci et al. [52], who delineate advantages encompassing heightened student engagement, interactive learning, and personalized educational experiences, among various potential enhancements. A study by Baidoo-Anu & Ansah [53] provides suggestions on optimizing the utilization of ChatGPT for the enhancement of teaching and learning outcomes. Nevertheless, this study addresses the tangible danger associated with an excessive dependence on artificial intelligence, coupled with the inherent human inclination to opt for the path of least resistance. Krügel et al. [54] find that humans who are supposed to correct bad machine decisions oftentimes do not and instead turn into “partners in crime” [54]. If LLMs are not judiciously employed as a constructive augmentation but are excessively depended upon for outsourcing cognitive processes, a critical question arises: how can we guarantee that individuals acquire the requisite skill set essential for becoming capable and autonomous citizens within a democratic state? Again, this issue pertains to the temporal aspect of task delegation, specifically whether it transpires prior to or subsequent to the human agent’s acquisition or mastery of a skill.
This question then has particular implications for our education system: how are we supposed to train children and young people to reason if they no longer have to argue independently, i.e., without the help of a machine? When opting for the easiest route involves delegating reasoning, the consequence is a potential decline or lack of development in our own capabilities. Cognitive skills such as imagination, creativity, and critical reasoning could suffer from the excessive use of AI, as we are no longer forced to put our thoughts down independently. Formulating thoughts in an orderly fashion, however, is the vehicle of the power of judgment, as it forces us to organize ideas and concepts and cast them into a form. The American writer Flannery O’Connor summed up this human experience when she wrote: “I write because I don’t know what I think until I read what I say” [55]. The more text work we delegate to AI, the less powerful the connection and realization of our thinking through writing. In our function as editors or controllers, we need concepts, terms, and rules according to which we can categorize what we perceive, drawing good conclusions for our actions from the application of explicit and implicit rules and facts. At the same time, this points to the fact that language manifests itself in syntax, semantics, and pragmatics, and that normative requirements are grounded in the lifeworld.
ChatGPT is often hailed as a groundbreaking tool for human-like responses and reactions. However, ChatGPT does not have semantics and does not understand meaning and contexts, even if its language model, based on the probability of word sequences, suggests otherwise. This is similar to the situation in the thought experiment of the so-called “Chinese Room” by John Searle. In Searle’s scenario, he envisions being in a room with a computer program guiding him on how to respond to Chinese characters slipped under the door. Despite lacking any understanding of Chinese, he mechanically follows the program, manipulating symbols and numerals like a computer. As a result, he sends out fitting strings of Chinese characters under the door, creating the illusion for those outside that a Chinese speaker occupies the room. According to Searle [56], the thought experiment highlights that computers simply apply syntactic rules to handle symbol strings, lacking genuine comprehension of meaning or semantics. Similar to someone learning a foreign language based only on strings, ChatGPT can process strings without an understanding of their meaning. In the context of Searle’s “Chinese Room” thought experiment, the difference with LLMs lies in the learning approach. While the model does not follow explicit syntactic rules, it grasps intricate language structures through the patterns it discerns during training. It is a nuanced distinction, as the model learns not by being explicitly programmed with rules but by implicitly capturing the underlying patterns within the data it processes. Thus, ChatGPT can play chess, but it does not understand the rules and thus cannot always apply the rules of the game to individual cases. ChatGPT processes its data as tokens and, because of its stochastic model, makes moves with non-existent pieces. These moves are understandable in themselves but are wrong in terms of the game, in terms of an entire set of rules. As it were, it cannot configure the single case since it cannot transcend its rules-based determination.
Readers must therefore interpret the strings themselves and establish a context of meaning. They understand the meaning of the output only because they have learned the usage of words within contexts and the sequences of actions through their participation in daily routines: they can grasp semantics and pragmatics and corroborate them through their own reason, thoughts, judgments, and actions. People can anticipate and comprehend why others follow rules. They can think, speak, and act together because they know what the thought, speech act, or action is intended to accomplish. In this way, by anticipation, we attribute meaning and reason to our fellow human beings [
14].
When it comes to educating children, youth, and even adults, it is crucial to elucidate these aspects of generative AI. We need to convey the message that machine-generated output does not equate to reasoning, comprehension, or judgmental prowess (Larson, 2023). We can no longer trust AI machines as easily as we trust a calculator. In any case, the output requires a careful check on our part. This is not like calculators, whose output we rarely (have to) question. This requires sharper judgment when it comes to digital technology. The debate here is about digital literacy: which new or expanded skills we should have when dealing with digital technology. In this context, Shannon Vallor [57] speaks of the techno-moral virtues, adding techno-moral wisdom to classic virtues such as modesty or courtesy. What is initially pleasing is that many of our desirable attitudes and lifestyles remain intact, even when dealing with digital technology. She formulates a techno-moral wisdom that combines a multitude of virtues in order to achieve the good: “Deliberative skill alone is not practical wisdom, since the latter integrates the full range of moral habits […]. Skill in choosing the best means to a perceived good is meaningless unless I am habituated to aim at the correct good in the first place, and it isn’t virtuous unless I am motivated to choose that good for its own sake. A person who skillfully selects means to attain ends that are not actually good ends for a human being to seek, or who aims at the right ends but for the wrong reasons, is merely clever, not wise”
[57]. When dealing with ChatGPT, this first means that users have to understand how these systems work. They must have at least a little technical literacy [58]. They need to understand how input relates to output. Only in this way can we learn to handle the output responsibly, i.e., wisely. People are frequently unaware that their behavior serves as input for AI systems.
ChatGPT cannot be reasonable. However, ChatGPT can be developed, deployed, and used responsibly. The objective should be to improve human capabilities regarding practical wisdom rather than to replace it, always keeping in mind that the pursuit of our democratic values remains necessary. Yet, transparency about the AI does not seem to be enough. Recent studies have shown that people, regardless of being informed about whether advice originated from a machine or a human, exhibited unchanged behavior [54,59] and could sometimes “hide behind” machine decisions [60]. However, it appears that humans sometimes hesitate to entrust decisions to machines, and they may face scrutiny from others when they choose to do so [61]. Either way, this highlights the significance of promoting digital literacy rather than solely emphasizing transparency. Yet, the responsibility does not rest solely on the user; developers of AI must also weigh ethical considerations in the development of their products [62,63]. Hence, incorporating ethical considerations into the AI development process is essential, ensuring the contemplation and implementation of good design right from the outset [64].
As a community, we should think about and discuss which ethical and political issues should be represented in machine output at all. Should answers to questions about fair taxation or abortion be generated at all? Krügel et al. [59] point out that ChatGPT and the like are considered to show a “random moral stance”. This is due to the fact that everyday moral expectations are impregnated with complexity: we cannot explicate all the rules of the good life, but we can live by them. We cannot capture this moment with machines. Hence, Krügel et al. [59] suggest that “the best remedy we can think of is to improve users’ digital literacy and help them understand the limitations of AI”.
These machine limitations point us to the skills that we must constantly learn. Thus, we can use ChatGPT’s output, which becomes part of the content of our experiential world, to exercise our mind and judgment. We are constantly prompted by ChatGPT to self-critically question what we perceive, which is why machines like ChatGPT can perhaps be used for machine-assisted Socratic dialogues. Ultimately, the foundational humanistic and enlightenment ideals of independent reasoning, thoughtful contemplation, and purposeful output necessitate a guided practice in discourse [15]. Under the guidance of a teacher, children and adolescents can engage with LLMs or generative AI on a topic until their reasoning skills in that area are improved through practice. Eventually, they can derive more creative conclusions than their computer counterparts—and notice this with self-confidence.
Digital machines cannot themselves check the criteria of successful communication and are blind to more subtle, often ironic forms of rule transgression. Connotations that enable people to let things slide sometimes, what can be called ambivalence tolerance, are evaluated as errors in a calculation. In our everyday lives, however, this is precisely what distinguishes us: we praise people for their tact. This is the skill we teach children, whether we have calculators or operate speech machines. It is precisely for this practiced thinking that we still need the other, semantic knowledge as a normative reference point, because in the absence of a railing, there is no stable footing, and without critical examination, coherent practice is unattainable.
Learning and practicing judgment are slow and arduous tasks. They are not absorbed in factual knowledge nor in the explicit presentation of rules. In democratic societies, however, they are essential components: thinking, absorbing, weighing, and judging together cannot be delegated to AI. Individual processes of democratic life may be. As Margo Loor [
65] wisely puts it:
“Voting is quick. This can be done on a screen. This is not democracy; done wrong it can be the tyranny of the majority. Voting may be part of the democratic process, but it does not replace the core of democracy, which is human thinking. First of all, everyone must think about society’s problems and then think about and discuss the issues with others. This must be followed by listening to the arguments of the various stakeholders, both majorities and minorities, and only then can you vote.”