Mental Biases
In line with the extensive study of biases in psychology and behavioral economics, it is reasonable to differentiate between cognitive and affective biases, as both can distort bioethics work. Additionally, I have identified imperatives (as a type of bias) and moral biases (see below). Space does not allow all biases relevant to bioethics to be mentioned, explained, and exemplified; while several biases will be listed, defined, and explained in tables, only some will be discussed. Moreover, biases appear in the bioethics literature in three ways: (1) descriptions of biases in the field under study, e.g., how health professionals may be subject to specific cognitive biases in clinical decision-making; (2) biases as explanations of positions or arguments in bioethics, e.g., the claim that the withholding-withdrawing distinction results from loss aversion; (3) biases in doing bioethics (e.g., in arguments or reasoning). As the latter two matter most for the quality of bioethics work, the emphasis will be on them.
Cognitive biases
The psychology and behavioral economics literature has identified a wide range of cognitive biases [9, 11, 29]. Many of these are relevant to bioethics, as they influence the cognitive aspects of ethical judgment and decision-making [30, 39]. Table 1 provides a selection of cognitive biases worth considering in bioethics, with a definition and/or short description of each bias and an indication of the type of bioethics work to which it mainly relates.
One bias that can be observed in the bioethics literature is the extension bias. For example, it is frequently thought that more blood tests and radiological examinations are better than fewer [40]. Correspondingly, in the enhancement debate it has been argued that more intelligence is better than less [41]. Such biases are also relevant in the ethics of priority setting, where providing many low-value services is erroneously considered to be of great value [6]. Moreover, we sometimes tend to think that the more arguments we can find for a decision, the better (neglecting their quality). Hence, the general tendency to think that more is better than less appears to have some relevance in bioethics as well.
According to the so-called focusing illusion, we tend to focus too much on certain details while ignoring other factors [42]. This bias is particularly relevant in complex cases, where we may come to base ethical analyses on only specific aspects and premises (such as facts, values, or principles). It may also affect ethical arguments, where we can come to focus solely on specific principles, e.g., the principle of personal autonomy in assessing prenatal screening [43]. The focusing illusion is related to the prominence effect (see below) and to the anchoring effect, where we tend to rely too much on initial information and ignore high-quality evidence (or contextual information) that may be more difficult to obtain [8].
Confirmation bias is the tendency to attend to information in a way that confirms one’s preconceptions or expectations. It is related to what has been called the “self-serving bias,” i.e., the tendency to unwittingly assimilate evidence in a way that favors a conclusion we stand to gain from [44]. Confirmation bias may not be restricted to evidence but may also extend to intuitions, arguments, and judgments that support a specific bioethical perspective or conclusion.
Another bias worth paying careful attention to is the endowment effect, according to which we come to overvalue what we already have compared to alternatives. While bioethicists do not acquire or depend on things (as in experiments on the endowment effect), the same psychological mechanism may be relevant for our relationship with arguments, perspectives, lines of reasoning, theoretical positions, etc. In the same way that we tend to demand much more to give up an object than we would be willing to pay to acquire it [45], we may cling to a specific perspective or position in bioethics. Once we have an insight or a view, we may not be willing to give it away or replace it with another, even if the alternative is better. As such, an “endowment effect” in bioethics can spur conservatism [46].
The tendency to overestimate the accuracy of one’s own judgments, i.e., the illusion of validity, appears to be as relevant in bioethics work as elsewhere [47]. The same goes for the tendency to rely on familiar methods while ignoring or devaluing alternative approaches [48]. Bioethicists who narrowly follow one approach, be it (rule-)utilitarianism or deontology, are subject to the law of the instrument.
Other general biases may be relevant in bioethics work as well, such as implicit bias, described as the tendency to let underlying attitudes and stereotypes attributed to persons or groups affect how we understand, judge, and engage with them, without our being aware of it [49]. This has also been labelled “unconscious bias” and is related to the synecdoche effect, whereby one specific characteristic comes to signify the whole person [50], e.g., when persons with certain disabilities are addressed in terms of their disability rather than as persons.
In bioethics we may also be subject to present bias, e.g., when we show a stronger preference for addressing immediate issues, outcomes, or solutions over more long-term ones. When we are faced with topical cases in the clinic or in the media and are expected to suggest solutions, more long-term and principled issues may be overshadowed [51, 52].
Probability neglect is the tendency to disregard probabilities when making decisions under uncertainty. This general psychological bias may be relevant when we assess the potential outcomes of decisions or actions [53, 54]. Empirical premises are crucial to many types of bioethics work, and we may come to neglect small risks or to grossly overrate them. For example, bioethicists arguing for germline gene editing (GGE) may downplay off-target effects: “it is plausible that as GGE develops the rate of off-target mutations will become negligible. The rates of off-targets mutations in animal models have been declining rapidly, and such mutations are now considered ‘undetectable’ in some applications” [55]. Others may overrate such effects.
The tendency toward excessive optimism about innovations (pro-innovation bias) is also known in healthcare [56, 57]. Some bioethicists are known to be very positive towards specific innovations, such as CRISPR-Cas9, and others are optimistic about technological innovation in general [58]. The problem is that they may ignore limitations and weaknesses. The opposite tendency also exists, of course; see status quo bias below.
The relevance of the rhyme-and-reason effect can be illustrated with John Harris’s elegant argument: “I have a rational preference to remain nondisabled, and I have that preference for any children I may have. To have a rational preference not to be disabled is not the same as having a rational preference for the nondisabled as persons” [59]. While catchy, it is not clear that the claim holds [60].
Implicit biases “involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender” and are prevalent amongst health professionals [2] and bioethicists [61, 62]. Because they can operate to the disadvantage of those who are already vulnerable, such biases are relevant in bioethics. Examples include minority ethnic populations, immigrants, poor people, people with low health literacy, sexual minorities, children, women, the elderly, the mentally ill, and persons with overweight or disabilities. Indeed, anyone may be rendered vulnerable in a given context [63].
Common to all these cognitive biases is that they may distort our reasoning in bioethics. Moreover, while all are relevant to bioethics, they may be more or less relevant to different types of bioethics work, as indicated in Table 1.
Affective biases
While the distinction between cognitive and affective biases is debatable, several scholars prefer to differentiate between them. Readers who dispute this distinction can add the following to the cognitive biases above. Table 2 provides a brief overview of the affective biases, with definitions/descriptions and indications of the type of bioethics work to which each bias mainly relates. A brief discussion of some of these biases follows.
In bioethics, individual cases can be paradigmatic, such as those of Karen Ann Quinlan, Terri Schiavo, and Charlie Gard. However, identified individuals, conditions, or groups of persons can induce special sympathy and empathy (or the opposite). This can engender unwarranted attention to and prioritization of specific groups in bioethics, due to what has been called the identifiability bias, “the singularity effect” [64], and “bias towards identified victims” [65], but also “compassion fade” [66, 67]. Hence, identification can confer importance in biased ways.
Affective forecasting is another type of bias, in which one’s current emotional state and conceptions are projected onto future events [68]. Examples from bioethics include cases where hopes and desires color the ethical assessment of emerging technologies [7]. Related to affective forecasting is the impact bias, the tendency to overestimate the impact of a future event [69]. It can be observed in bioethics debates on novel technologies where future benefits are taken for granted, e.g., gene editing, personalized/precision medicine, Big Data, and artificial intelligence, and it relates to projection bias (see Table 2).
On the other hand, as bioethicists we may let aversion to dangers or uncertainties influence our work, becoming subject to biases such as risk aversion [70, 71] and ambiguity aversion [8]. These biases may lead bioethicists to promote excessive diagnostics (overdiagnosis) and therapeutics (overtreatment) [6].
Loss aversion means that the perceived disadvantage of giving up an item is greater than the utility associated with acquiring it [72]. As with the endowment effect (and other biases), bioethicists do not deal in things or items; nonetheless, the same psychological mechanism may be relevant for our relationship with arguments, perspectives, lines of reasoning, and theoretical positions. We have invested many years of study, research, and work experience in specific approaches or positions, and abandoning them could trigger the same affective response. Loss aversion has also been applied to explain (or undermine) a bioethical argument, such as the distinction between withholding and withdrawing treatment [73]. Hence, the same bias may figure in explanation or argumentation while also being relevant to us as bioethicists.
While bioethics addresses complicated and complex issues, we may come to simplify them and let a few, or even one, dominant factor determine our final analyses. This resembles what is defined as the prominence effect, which in turn relates to what has been called “scope neglect,” “scope insensitivity,” and “opportunity cost neglect.” The point is that we lose important aspects by narrowing our scope.
The yuck factor has been extensively discussed in the ethics literature [74,75,76,77], and there are differing views on whether it directs or distorts moral reasoning. The point here is not to provide a final answer to this question but, more modestly, to point out that it can influence reasoning in bioethics in covert ways.
Hence, affective biases may distort our reasoning in bioethics just as cognitive biases do. Accordingly, being able to identify them is the first step towards addressing and handling them, and thereby towards improving bioethics.
Imperatives
Another type of distortion of judgment is imperatives, which are also often called biases. Imperatives are actions that are felt to be required despite poor expected outcomes. They are immediate reflections of long-established doctrines or beliefs and can be rooted in deontology [78]. Status quo bias [46] and progress bias [58] are but two of the examples presented in Table 3.
Status quo bias is described as an irrational preference for an option merely because it preserves the current state of affairs [79]. This may result from people’s aversion to change (conservatism), making them avoid changing practice [8], or from system justification, i.e., the need for stability in bioethical theory and practice, even when these are dysfunctional or hamper improvement [80]. Status quo bias is also associated with the cognitive bias called the endowment effect, according to which we tend to overvalue what we already have compared to alternatives (see above).
In contrast to the status quo bias, there is also a progress bias, according to which persons experience a strong propensity to promote whatever is considered progressive [58]. It is related to what has been called pro-innovation bias and optimism bias (see above). Additionally, progress bias is related to what has been called adoption addiction, according to which we appear to be more interested in assessing and investing in new and shiny technologies than in reassessing and disinvesting in old and inefficient ones [81]. In bioethics, status quo bias and progress bias are particularly relevant to the assessment of biotechnologies [58].
In ethical debates on genomic analysis, incidental findings, return of results, newborn screening, and prenatal screening, we often encounter the argument that people have the right to know [82] or that not providing a test (or its results) denies them crucial information [83]. Certainly, withholding information can undermine respect for autonomy, but this is not always the case in the examples mentioned [84]. This indicates that the imperative of knowledge is as relevant in bioethics debates as it is in healthcare and society in general [7].
Correspondingly, there may be a competency effect in bioethics: a tendency to think that ethicists with better formal competency will produce better bioethics work, resulting in better decision-making. Again, this may often be the case, but it is certainly not always so. Prominent bioethicists may be extremely busy and lack the time to apply the full capacity of their competency to every task.
Again, these (and other imperatives) [7] may influence, undermine the quality of, and even distort our work in bioethics.
So far, this review illustrates that a wide range of cognitive and affective biases, as well as imperatives, are relevant to bioethics work. Clearly, as general mental mechanisms, they influence bioethicists just as they do the general population. What I have tried to investigate, in addition, is whether the same psychological mechanisms have any particular relevance to bioethics work. Beyond the “mental mechanisms” identified above, the literature reveals “moral mechanisms” that can (negatively) influence bioethics work.
Moral bias
In the same manner as our thoughts, affections, and imperatives may influence or even distort our moral judgments, so may various moral mechanisms. Our moral judgments may be influenced by our ethical positions, religious beliefs, methodological preferences, and moral inclinations. Accordingly, moral bias can be defined as moral beliefs, attitudes, perspectives, or behavioral tendencies that unwittingly tend to influence our moral judgment in specific directions.
Again, space does not allow an exhaustive review of all kinds of moral bias. Only biases that are important to acknowledge and address, and whose recognition can contribute to improving the quality of bioethics work, are included. To facilitate reading, the biases are grouped into five categories: (1) Framings, (2) Moral theory bias, (3) Analysis bias, (4) Argumentation bias, and (5) Decision bias. Table 4 at the end provides a summary of the biases and indicates the type of bioethics work for which they may be most relevant. Please note that the groupings are not absolute.
Framings
A moral framing effect can be defined as a bias in which people’s moral judgments are influenced by how options or arguments are framed, or by the (ethical) framing of moral situations or challenges.
One type of moral bias that can be understood as a framing is standpoint adherence, according to which we are unwilling to change our standpoint despite solid evidence. Empirical research shows that strong positions are difficult to change, even with good evidence [85]. This relates to cognitive biases such as the ostrich effect and the overconfidence effect. Bioethicists who change their standpoint, method, or perspective are rarely heard of. However, it is worth noting that some experiments have shown that some of our preferences are easy to influence (choice blindness and preference change) [86].
Moreover, there may be framing effects in the terminology we apply. Although bioethicists have become increasingly aware of the normatively relevant difference between “epileptics” and “persons with epilepsy,” we have used terms such as “hypochondriacs,” “diabetics,” and “downs children” [87]. The moral relevance of this has been discussed in relation to the synecdoche effect [50] (discussed above).
One reason for the terminology problem may be that bioethicists are personally, socially, and culturally embedded: “bioethics is an embedded socio-cultural practice, shaped by the everchanging intuitions of individual philosophers, and cannot be viewed as an intellectual endeavour detached from the particular issues that give rise to, and motivate, that analysis” [88].
Corresponding to what business ethics research has called the social desirability response bias [89], and what the Science and Technology Studies (STS) literature has coined tacit commitments and narrative bias [90], there can be an expectation bias in bioethics, whereby social expectations influence bioethics work. In clinical ethics, several such biases have been identified: “bias towards the interests of hospital management,” “bias towards laws and regulations,” “bias towards individuals’ perspectives and interests,” and “bias towards the perspectives and interests of health-care professions” [12]. Such biases stem from conflicts of interest and seem especially relevant when bioethicists work in or for expert groups. Thus, expectation bias is related to social mechanisms and motivations.
As illustrated by the research of Don A. Moore and colleagues, self-interest tends to operate via automatic processes in conflicts of interest [91,92,93], and Mahdi Kafaee and colleagues have experimentally demonstrated how conflicts of interest can shape the perception of a situation in a subconscious manner [94]. Consequently, conflicts of interest can (unconsciously) bias bioethics work [95]. Clearly, ethicists are hired by stakeholders and can have conflicts of interest like other researchers [96, 97], especially in settings where bioethics has become a business [98]. Moreover, there may be professional conflicts of interest, e.g., between ethicists and jurists or policy makers [99, 100]. Ethicists may also have strong opinions on controversial issues that bias their judgments [101], or be subject to political attention (“political bias”) [102]. Indeed, conflicts of interest have been identified as biases in clinical ethics committee work [103].
According to the impartiality illusion, we may think that we are impartial while closer analysis (by independent and blinded reviewers) may reveal specific tendencies, inclinations, or partiality. Everett provides one interesting example concerning the endorsement of consequentialism [104].
Another well-known way to frame a bioethical debate is by defining what is (not) the issue (“that’s not the question”) and by identifying what counts as an ethical problem [105]. Such delimiting claims seem to be common [106,107,108,109,110,111] and can easily result in biased bioethics.
Hence, there may be many types of unconscious framings that direct bioethics work. Being aware of and addressing these framings may contribute to improving bioethics work.
Moral theory bias
There are also biases with respect to moral theories, i.e., where moral theories direct how specific moral challenges are perceived, defined, deliberated, and solved. One such bias is theory dominance, according to which one theoretical perspective dominates the analysis, ignoring other relevant perspectives, adequate objections, or the context in which the problems arise. Accordingly, work that is not “practical in approach, philosophically well grounded, cross disciplinary,” or not performed by “good people” or skilled professionals [112], may be biased. The same goes for using ideal theories to tackle problems in non-ideal contexts [113] or failing to specify principles [114]. This does not, for example, make an explicitly stated virtue-ethical analysis of euthanasia biased, because the moral theory is explicitly declared. However, if the author uses the outcome of such an analysis to draw general conclusions, one could argue that the work is biased.
Yet another type of bias inherent in moral theories is what may be called conceptual bias. For example, it has been argued that there is a basic asymmetry in ethics, making some concepts, such as bad, easier to define and grasp than others, such as good [115]. The same goes for disease versus health [116]. If there are structural asymmetries in moral concepts and ethical theories, this can bias our judgments in bioethics.
Furthermore, it has been pointed out that certain biases are more likely in specific moral theories. For example, it has been argued that there is a potential bias in casuistry, e.g., in describing, framing, selecting and comparing cases and paradigms [117]. The reason is that in order to assess relevance (of a case), we rely on general views, which may be biased. Correspondingly, it has been argued that the use of (constructed) case studies may mislead moral reasoning [118]. According to these lines of thought, it may be possible to assess various kinds of moral theories for their “characteristic biases.”
On the other hand, various types of cognitive biases may distort bioethical reasoning (within many theories). Dupras and colleagues identify three such cognitive biases that may impede ethical reasoning: exceptionalism, reductionism, and essentialism [119], with genetic exceptionalism, genetic reductionism, and genetic essentialism serving as examples.
Another theory-related type of bias is bias towards inadequate moral perspectives, i.e., the tendency to rely on arguments from an erroneous or inadequate moral theory or perspective, or to rationalize a preferred conclusion by appealing to arguments that underpin it; this has been identified, e.g., in clinical ethics [12]. On the other hand, it is argued that lacking a theoretical foundation (in moral philosophy) [120], lacking specific theoretical foundations (such as utilitarianism combined with decision theory) [121], or not being principle-based [122, 123] may also hamper and bias bioethical analysis. Others have pointed out that a lack of “sensitivity to the problem of the multiplicity of moral traditions” [99] could bias bioethics work. While interesting, there may be very many views on what an “inadequate moral perspective” is, and it may be difficult to decide what counts as adequate. Nonetheless, there may be some agreement on adequacy, e.g., that the ethics of proximity is less relevant for assessing cost-effectiveness than utilitarian calculus.
The point here is that implicit theoretical assumptions may bias bioethics work. The same seems to be the case for our analyses.
Analysis bias
There are also potential biases related to ethical analysis in a broader sense. Myside bias is one example: the tendency to evaluate or generate evidence, test hypotheses, or analyze and address moral issues in a manner biased toward our own prior perspectives, opinions, attitudes, or positions [124]. At the same time, it has been shown that we may consider one-sided arguments to be better than balanced arguments, even when they run contrary to our own opinion [125]. The way we assess arguments, weigh various factors, and synthesize a topic may certainly be biased by unconscious mechanisms.
Moreover, the processes of specifying [126], interpreting [114], or balancing [127, 128] moral norms, values, and/or principles may be biased. Ethical work can also contain “moral fictions” biasing the analysis [129]. Moral fictions have been defined as “false statements endorsed to uphold cherished or entrenched moral positions in the face of conduct that is in tension with these established moral positions” [130]. However, labelling something as a “moral fiction” can itself introduce bias (see terminology bias above).
It has also been suggested that we can make “moral errors” or “moral fallacies” [131] due to various biases, such as psychic numbing, the identifiable victim effect, and victim-blaming [132].
Again, implicit assumptions or tendencies in our analyses may bias bioethics work.
Argumentation bias
Flawed arguments and fallacies in argumentation can also bias bioethics work. (Most) bioethicists are trained in detecting and avoiding flawed arguments, such as fallacies of vagueness, ambiguity, relevance, and vacuity [133]. However, the reviewed literature also identifies flawed moral reasoning [134] and bad arguments that do not fall under the standard categories of illogical or flawed arguments [135]. Some of these can be characterized as rhetoric, deception, or argumentative techniques. The list of logical fallacies and bad arguments is long [133] and beyond the scope of this article; only a few examples are included here to illustrate how profoundly they can bias bioethics work.
False analogies can bias arguments if there are morally relevant differences between the case and its analog. One example from bioethics is the debate on coercive measures against alcohol consumption during pregnancy, where it has been argued that using court orders to medically treat women (for alcohol dependency) during pregnancy is analogous to coercion by “physically abusive partners” [136].
Moreover, reasoning from is to ought can bias bioethics work. This is related to what has been called “Hume’s law,” “Hume’s guillotine,” or “the is-ought fallacy,” and to “the naturalistic fallacy” attributed to George Edward Moore (1873–1958) [137]. It is also related to reasoning from quantity to quality, e.g., in the enhancement debate, where it is argued that more intelligence is better [41].
Accordingly, inference from description to prescription is a well-known challenge, e.g., where ethical conclusions are based on opinion polls [138]. The amount of empirical work in bioethics has increased substantially over the last decade [139], improving the empirical premises for ethical analyses but also posing challenges [140]. As knowledge about people’s attitudes towards biotechnologies, such as genetically modified human germlines, is used to inform policy making [141], it may also come to influence ethical analyses and argumentation.
Relatedly, the experience paradox, i.e., the appeal to experience, represents a “wide-ranging and under-acknowledged challenge for the field of bioethics”: personal experience becomes a liability in bioethics debates when it expresses vested interests or is not representative of those involved [142]. This relates to epistemic (testimonial) injustice [143].
Related to some of the framing effects discussed above, in bioethics we may use vague, unclear, or ambiguous concepts, which can confuse, obfuscate, or frame the argument in unwarranted ways. One example is the concept of “naturalness” (for example, in the enhancement debate), which has been shown to be used in a number of ways that confuse rather than clarify arguments [144]. Admittedly, vagueness can be beneficial in bioethics [145,146,147]. However, it can also confuse arguments or stop them short, e.g., in statements such as “that is not natural” or “that breaches personal autonomy.”
Related to the yuck factor (see above), bioethicists can also appeal to revulsion, repugnance, abhorrence, or repulsion [148] in their work. While moral disgust may play a role in bioethics, it can also be used manipulatively, biasing an analysis or argument.
“Begging the question,” or petitio principii, is the tendency to assume the conclusion of an argument. This form of argument can bias bioethics: for example, in debates on proton therapy for the treatment of cancer, it has been argued that it is unethical to waste time assessing its outcomes in high-quality trials, as those outcomes obviously must be beneficial [149]. This relates to the progress bias discussed above.
Bias can also result from assuming controversial premises (without justification), drawing conclusions that go beyond the premises, using obscure or controversial examples, analogies, or thought experiments [150], or concluding without assessing the truthfulness or plausibility of crucial premises [16]. The same goes for straw man arguments (refuting a position other than the one at issue), selective use of arguments, and failing to address relevant counterarguments.
While clearly not exhaustive, these examples illustrate the many ways in which flawed arguments can bias bioethics work.
Decision bias
Many biases also appear in moral decision-making, and many of them have been mentioned under cognitive and affective biases. While biases in decision-making merit a separate study [30], three main types of decision biases should be mentioned [151].
First, simplification biases can be observed when decisions are made on the basis of selected and limited empirical evidence, e.g., when they are insensitive to base rates, are based on illusory correlations, or take only some of the empirical premises into account.
Second, verification biases occur when decisions are made to stick to the status quo or to maintain consensus, e.g., when decisions serve to preserve consistency within a group and the experience of control.
Third, regulation biases are tendencies towards avoidance in ethical debates, for example, “rationalizing or downplaying the seriousness of ethical dilemmas and avoiding taking personal responsibility due to feelings of discomfort” [30].
The moral biases are summarized in Table 4.
Measures to address or avoid bias
As there are many biases, there are also many ways to address them. For example, some call for a ‘critical bioethics’ in which ethical analyses are empirically rooted [152], others argue for providing a reflexive autoethnographic account of arguments in bioethics by applying “confessional tales” [88], and some urge us to acknowledge the importance of (the framing of) stories [153].
Special (reverse) tests have been suggested to avoid specific biases, such as the status quo bias [46] and the progress bias [58]. Adhering to criteria for good ethical argumentation, such as Rapoport’s rules (named after the Russian-born American game theorist Anatol Rapoport) [154] or the many sets of criteria for “good bioethics” [112, 122, 123, 130, 150, 155,156,157,158,159,160,161,162,163], may help avoid the negative effects of biases in bioethics. Correspondingly, declarations of biases together with (or as part of) declarations of conflicts of interest may also reduce biases or their effects.
Moreover, the general literature on biases offers much advice on debiasing [164, 165] and on compensatory measures (such as nudging). Such suggestions also exist for health decisions in the clinical setting [31, 166,167,168]. “Moral debiasing” has also been suggested [169]. Clearly, several of these approaches may be relevant for bioethics as well. However, deciding which biases should be addressed, in what manner, and how to handle mistaken moral judgments (or moral heuristics) [170] is another large issue beyond the scope of this study. Here the point has been to provide an overview of biases relevant to bioethics work, to suggest a way to classify them, and to stimulate reflection, debate, and further research.