The Unreality of Scientific Literacy
The "Well-Rounded Student" is a Consumer of Information, Not Knowledge
This article assumes basic familiarity with concepts introduced here.
Generic Smartness
Weird times produce weird things. Weirdness here means things being present where they shouldn’t be. Weirdness shows up a lot at times of change, when there are signs the road we are on is forking into something new. It is seen in strange political bedfellows. In alignments where there shouldn’t be convergence. In unintended consequences becoming legible. I want to focus on one such weird phenomenon. Namely, that a major driver of our dissociation from reality has been the very set of practices meant to make us better reasoners about reality: education.
Today, people are more educated than ever before. This extends beyond an education in an expanding collection of brute facts or in the how-to of a mechanical process. The former is increasingly seen as unnecessary, with our digital devices and AI serving as extended memory functions for us, and the latter is relegated to trade schools, certifications, and professional education for those outside the laptop class who still work with physical reality. Instead, what really describes education in the 21st century is the push for an education in “critical thinking”, in how to think at the level of epistemic and meta questions.[1] As Matthew Crawford puts it: “The ideal seems to be that [students] shouldn’t be burdened with any particular set of skills or knowledge; what is wanted is a generic smartness, the kind one is certified to have by admission to an elite university”.[2] This kind of domain-agnostic intelligence is part of preparing students for an information-age workforce in which white-collar work means being a human parser and stop-gap in flows of information whose data collection happens long before it reaches our desks, and which requires working across topic areas.[3] However, this pressure is also driven by an ostensible public good. Namely, the need to develop a citizenry of responsible information consumers in an age of misinformation:
“Scientific literacy means understanding how science is done, what is science versus non-science, and how to evaluate claims we are exposed to on a daily basis. Without scientific literacy, we are unable to make informed decisions about crucial issues, such as climate change, toxic waste disposal, the space program, urban sprawl, genetic engineering, pollution, and myriad other issues facing people all over the world.” – Ann MacKenzie, National Science Teaching Association [4]
That this is a relatively new public virtue can be seen in “correlation is not causation” being an aphorism widely known to millennials and Gen Z in a way it was not to their parents.
This training in scientific literacy, media literacy, digital literacy, and now even AI literacy speaks to judging the warrant of claims made about a range of domains using practices not rooted in any domain-specific expertise, but in epistemic and meta-cognitive practices: critical thinking, identifying bias, judging soundness of reasoning, employing verification standards for establishing causality, and so on. The problem to be considered is that we’ve given the public a task - and necessarily then the normative license as well - to judge the validity of research, technical claims, and citations of expertise that in the past would not have been seen as a responsibility of the public to judge. The point to be made here is not a call to reverse this change, but to acknowledge an externality: the public’s lack of expertise means they must rely on a kind of crude coherentism to do this. The unintended consequence is a devolution into a public epistemology centered on the theoretical over the real.
An Education in Consumption
Scientific and media literacy is a component of the Common Core State Standards, the American Association for the Advancement of Science benchmarks, the National Science Education Standards, and various state standards.[5] [6] [7] When the National Science Education Standards were established by the National Research Council in 1996, they were explicit that part of the goal was to shift science education toward understanding the scientific process of inquiry, to shift from “knowing scientific facts” to “developing abilities of inquiry”.[8] These desired shifts should be taken in context: they were part of trying to socially situate science as a process of inquiry, not a collection of directly known facts that people received without having to do interpretive work. The move was necessary for making a more scientifically literate people with a more realistic understanding of science, but it also meant that the public was now able to ask more of scientific claims themselves (whether these claims came from secondary sources or the practitioners). They could engage in argumentation and requests for explanation. They could judge the justifications of scientific claims, that is, their epistemology and their theoretical grounding.
Standards for scientific literacy instruction recommend beginning in middle school, progressing to an understanding of research design and statistics by high school.[9] The methodological and meta nature of scientific literacy, though, becomes starkest at college, as college-educated individuals receive an education that often includes required coursework in research methods. This coursework asks individuals to engage at a procedural, epistemic level with information, wanting them to be able to read and parse academic articles - or the results of such research studies as reported out by news or social media sources - not with full understanding of the particular methodology or statistical test used, but with enough understanding of how to “read like a scientist” and competently interpret what kinds of statements one can and cannot make from the kinds of evidence provided and research design used.[10]
Crucially, as was said, what is being taught here are domain-agnostic practices of epistemology. The content knowledge of a particular field, its literature, and its previously verified or falsified content will be left to the select few who pursue an advanced degree or get a job where they can apply that level of understanding. The rest of us are learning “literacy” under a tacit realization that judging the data collection, the particular tests or methodologies used, or the construct or content validity, or analyzing the data oneself, is neither logistically nor intellectually possible for a non-expert.
However, principled agnosticism is not a live option for us. The reality of both our public information environment and market demands for a workforce able to parse “information” as a generic thing means we find ourselves charged with being good consumers of information: able to learn the “high-level” theoretical frameworks of technical fields we were never formally trained in, to create plausible shapes from these fields’ data, and to understand “enough” to develop rules for action from these fields’ affordances. Education in this way has become a teaching of epistemic and meta-cognitive practices for the consumption of information rather than the production of knowledge. People are placed in a normative position of being asked to develop and adhere to practices that in the past would be a responsibility of experts, not a lay public: the judging of the warrant of factual claims from observations, data, practices, and experiences, but without access to the literature, observations, experience, history, understanding of construct or content validities, or unmediated data that a practitioner would have. The result is a turn to coherentism that has the unintended consequence of destabilizing the grounding of fields.
Babe, New Study Just Dropped
We can see this best in how people are asked to engage with reported scientific research. Our scientific literacy education asks us, in theory, to scrutinize individual studies - or at least the indirect reporting of them. The problem is that we cannot teach judging the empirical or a priori side of the epistemic foundations of these studies. The reason lies in the initial, foundational steps that ground a research design: the choosing of a hypothesis by way of abduction and the establishment of measures for the construct of interest. Neither of these can have its warrant established by anyone other than a competent practitioner with the necessary prior knowledge, and this comes down to the concepts of content and construct validity:
Content Validity: Does the measure point to the desired construct, and is it a proper operationalization of that construct? This is based on assessment by experts in that domain. “Content validity is not ‘tested for’. Rather it is ‘assured’ by the informed selections made by experts…”
Example: A driving test’s inclusion of a requirement to navigate around a stalled vehicle can be content valid if that activity seems to relate to the construct of “driving”.
Construct Validity: Does the measure interrelate with other measures taken to also be valid measures of the construct in question? Construct validity requires knowledge of the other possible measures, their previous validation and grounding, and the factor analysis of how they relate.
Example: A driving test’s inclusion of a task in which one must drive safely while identifying whenever any one of a set of three numbers is spoken on a recording is construct invalid if performance on this task doesn’t interrelate with our other, previously validated measures of driving performance.
Both of these forms of validity require prior knowledge of the field and access to a priori justification. Content validity is determined by an expert who can explicate the justification for why a measure is relevant to the empirical reality of a construct, drawing on their experience of that empirical reality by way of their knowledge of past research and practice. Construct validity is determined both by being able to identify the content validity of the measure and by providing further warrant for the measure’s justification by showing that previously verified measures interrelate with it in an objective way. Additionally, the choosing of the hypothesis itself, which sets the data frame for what is considered as data, is done by a process of abduction that requires a kind of experience accessible only to experts.
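To make the interrelation idea concrete, here is a minimal sketch of the kind of check that informs construct validity: correlating scores on a candidate measure with scores on measures the field already treats as validated. All names, data, and numbers below are hypothetical illustrations, and real validation work involves far more (factor analysis, established instruments, and expert judgment about which measures count as validated in the first place).

```python
# Hypothetical illustration of a construct validity check: does a new measure
# interrelate with previously validated measures of the same construct?
import numpy as np

rng = np.random.default_rng(0)
n_drivers = 50

# Scores on two previously validated measures of "driving performance"
# (simulated here; in reality these come from established instruments).
road_test = rng.normal(70, 10, n_drivers)
simulator = 0.8 * road_test + rng.normal(0, 5, n_drivers)  # tracks the road test

# Candidate new measure: the distracted-driving task from the example above,
# simulated as unrelated to the construct to show what invalidity looks like.
distraction_task = rng.normal(70, 10, n_drivers)

def pearson(a, b):
    """Pearson correlation between two arrays of scores."""
    return np.corrcoef(a, b)[0, 1]

print("simulator vs. road test:       ", round(pearson(simulator, road_test), 2))
print("distraction task vs. road test:", round(pearson(distraction_task, road_test), 2))

# A candidate measure that fails to interrelate with the validated measures
# would be judged construct invalid. Deciding which measures count as
# "previously validated" is itself expert, field-specific knowledge that a
# non-expert reader of a study cannot supply.
```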
Since non-experts do not have access to the experience, prior empirical observations, and knowledge of the field that would allow them to justify the foundations providing the warrant for the empirical grounding of a study’s measures and hypothesis, the average citizen is left to focus simply on whether the study conclusions follow, largely granting the premises of the content and construct validities, the hypothesis frame, and the data or statistical transformations done. Insofar as methods are judged at all, they are judged on their basic “face validity”.[11] This means, in practice, the non-expert largely looks to see if the study deployed any one of the logical or discursive fallacies they are taught to look for (e.g. “correlation is not causation”).
This is a framework that induces a crude coherentist phenomenology: as long as the conclusion is coherent with the methods, and the methods are coherent with face-validity intuitions and acceptable discursive practices, we don’t have anything further to analyze as non-experts. Neither of these intuitive affordances gives us any information about the empirical grounding of the study’s methods, data definitions, or its content and construct validity. This means our analysis of individual studies bears the problems of a coherentist framework, as we find ourselves unable to judge the truth-conduciveness of its claims since we cannot establish the warrant of the starting justifications that provide its grounding.
Since individual studies cannot be evaluated to establish independent warrant, studies (that at least pass the test of no obvious fallacies) are “abstracted” in our minds into a belief statement by the study’s author, as indicated by the conclusion, abstract, news report, ChatGPT summary, etc. It is no more than a belief statement, since we cannot establish sufficient warrant, in isolation, to verify the study results. Instead, it becomes an affordance whose independent truth value we remain agnostic on, but which nevertheless needs to be integrated into a holistic picture. The information consumer’s job becomes looking at the “big picture” of how the belief claims made by various studies, reports, or epistemic testifiers can be constructed into the most coherent picture. For the proper coherentist, the knowledge claim becomes a holistic property of a system, not reliant on knowing the truth basis of any individual claim in that system in isolation.
Competing Networks and the Loss of Expert Deference
The problem with this is that the alternative-systems problem of coherentism means equally coherent interpretations of the underlying beliefs or claims are equally valid. So, if an expert institutional authority wishes the public to adopt one shape over another as the narrative for the technical affordances out there, counter-networks that can muster an equally coherent ordering and shape for those expert claims – or equally coherent interpretations of them – can intelligibly lay claim to a counter-position that is equally valid.
Indeed, we see this in the problem of why “expert consensus” no longer carries the normative weight the Center and institutions wish it would. The “consensus” position of a field is an abstraction that necessarily summarizes the conclusions of experts with different lower-level theoretical commitments. If the question at hand is at the level of theory, such as what the proper theoretical framework for a field itself is, the “consensus” position will necessarily exclude those in the field who propose theoretical frameworks alternative to the consensus, even if they themselves are still credentialed experts in the field. Put more simply, looking to a supermajority consensus in a field becomes more complicated the more the question being asked hinges on theoretical differences in the field, even when the existing data are held in common.
Counter-networks then may appeal to the background beliefs and theoretical frameworks of the institution in question, noting what is excluded from the institution’s shaping frame because of its theoretical standpoint, and making the case that the counter-network’s frame is superior since it explains the results of more affordances – including even the affordances of the experts outside the institution’s theoretical frame – thus providing a more coherence-maximizing explanation. Indeed, they may even begin to point to how their frame can explain affordances from other fields or domains, implying that those experts attempting to enforce boundaries between fields are guilty of a quiet conservatism.
This is, of course, how debate in fields goes and is a normal part of science, with revolutionary paradigm shifts occurring from proposing new shapes that re-interpret what is out there to explain more of reality.[12] However, when operating under a crude, sufficiency-coherentist epistemology, the more coherence-maximizing narratives will have a greater claim to being “knowledge” simply by virtue of that coherence maximization, absent any empirical grounding or requirement for verification by evidence. The result is a “scientifically literate” population engaged in the practice of hermeneutics and interpretation more than in science as empirical observation, as they become interpreters of texts whose ultimate grounding they cannot understand, but which they transform into a coherent narrative, looking to those influencer-monks whose rationalistic interpretation skills can give them the most cohesive explanation.
This leads to an important upshot of this schema of coherentism: one’s theoretical standpoint becomes more salient, since, unable to judge between coherent narratives on the basis of empiricism or deductive validity from self-evident justifications, we rely on showing the supposed superiority or inferiority of the logical coherence of the theories at play. We become obsessed with paradigms, paradigm shifts, and the shape and movement of thought more than the grounding of its content. The result is that we also end up focusing quite a bit on “bias”, looking into the subjective standpoints of claimants. However, because of the phenomena above, this “bias” is not necessarily the individual psychologies, material interests, or political ideologies of researchers, but the “bias” of whatever theoretical framework or paradigm forms their standpoint.
Because the theoretical frame a practitioner uses is related to construct validity, though, and thus to the prior empirical history and literature that grounds that construct, it is also related to their history of study in the field and what they chose to study - and in some sense, then, to what their motivations were for studying that topic to begin with. In this way, personal psychology reenters the picture, but in a more psychoanalytic way, with interest in the underlying (maybe not even explicitly known) reasons one adopts a theory becoming heightened in the public discourse. As a result, even competing epistemic frames that don’t necessarily carry value commitments end up being framed as having value commitments by combatants. Competing interpretive networks seek to frame alternative networks as being motivated by some non-epistemic end that calls into question whether their theory truly seeks to maximize logical coherence instead of reifying a starting motivation or bias. The result is that the public becomes even more sensitive to epistemic institutions possessing non-epistemic goals, even punishing those epistemic institutions whose political and social goals reflect their own.[13]
Blurring of Value and Fact Distinction
In such an environment, consumers of information become even more sensitive to conceptual analysis of the “goal” of a theory, starting to see different epistemic frameworks and theories as indicative of different motivations and psychologies rather than as statements about reality born of different material groundings. It is, of course, worth stopping here to note that the role of motivation is a real thing to consider, and again, this is part of the necessary consideration of science as a social enterprise. However, what is different is the degree to which the dialectic here, without the referee of foundationalist and empirical verification, encourages a metaphysical rabbit hole.
As an example, stronger proponents of the psychological theory of 4E cognition accuse mainstream cognitive psychology of being influenced by a nominalism and Cartesian mind-body dualism born of the Rationalism of the Enlightenment, a seemingly coherent and intelligible claim.[14] Cognitive psychology responds to this claim by noting strong 4E’s lack of empirically testable claims and its quietism on the established content and construct validities of the field.[15] This response of traditional cognitive psychology is seen as speaking to a kind of quiet conservatism of a modernist Center and institutional frame. 4E proponents respond that what they propose, after all, is a theory whose expanded construct could plausibly explain more of reality, including the existing affordances of extant research, if we reinterpret what those affordances mean, accepting some deviance from the positivist requirements of cognitive psychology.
So far, this back and forth is merely epistemic. However, 4E theorists also connect their critiques to the humanist upshots they take them to have, finding popularity among those centering an Aristotelian metaphysics and ethics that sees the re-centering of “telos” in modern society not just as explaining perception better, but as having happy upshots for ethics, mental health, and responding to a “meaning crisis”. Although both theories are (in theory) theories about human perception, strong 4E makes the move of promising to be a theory that also coheres with questions in other domains, making the case that cognitive psychology is biased by its very attempt to divide questions of “meaning” from questions of “what is”, and that this implicit bias results in not realizing that human perception starts with meaning-laden holistic concepts first, not sense data about external objects that are combined into perceptual objects.[16]
These kinds of discussions heighten the metaphysical realm’s role in questions of external fact and strain what we can expect a public to come to stable, warranted conclusions on.
The Real Is Handicapped
It turns out this adjudication by way of psychoanalyzing one’s interlocutors and privileging the maximization of logical coherence is a perfect recipe for interminable conflict. By itself, the détente this leaves us in is concerning enough, but it at least has the comforting conservatism of a permanent stalemate. The problem, though, is that coherentism in practice does put its pinky on the scale between theories, and we can see that in the example we just gave.
Similar to its relationship to conspiracy theories noted in our prior essay, coherentism is not neutral in phenomenological practice as to the types of theoretical frameworks it implicitly prizes in people’s minds. Although in theory it is just a theory of epistemic structure, it carries with it nudges and implicit semantic content on the question of what justification and truth themselves are (as was seen in its historical origins). We can see how this plays out in the case of strong 4E and cognitive psychology. It is the traditional cognitive psychology perspective whose response points to a justification standard of empirical evidence and positive claims, and that relies on grounding its theory in a priori knowledge claims connected to past observation and experiential practice. Strong 4E relies on critiquing the theoretical framework and interpretative judgments underlying the field’s claims and constructs. The strong version does not provide positive, testable claims so much as a hermeneutic re-interpretation of the field, relying as much on metaphysics as on empirical psychology. Although both have an equally coherent logical framework, and should be equally valid under a crude coherentism, the semantic content of cognitive psychology’s claims implicitly alludes to justification as objective evidence grounded in a foundationalist structure that can be tied back to observable reality, whereas the semantic content of 4E’s claims implicitly alludes to justification as a conceptual schema that provides holistic understanding by way of superior explainability across the range of phenomena it covers and by filling the gaps of the more limited positivist model. For a public unconsciously operating on a crude coherentist intuition, ideas like 4E have a leg up, since they can more easily provide the subjective (one could say “embodied”) experience of holistic understanding than can more limited, linear theories traceable to a starting point in an observation of mind-independent reality.
Negation
We want to close here by considering one possible political upshot of all this: that this phenomenon has a role in a public increasingly driven by negation. As Martin Gurri warned in his prescient The Revolt of the Public and the Crisis of Authority in the New Millennium, expertise finds itself under attack from processes born of the internet and networked communication that ultimately prize negation over any positive commitments.[17] What we want to sketch here is that this is a possible upshot of a crude coherentism’s subtle prizing of theories that maximize coherence.
Making a claim of positive fact based on first principles traceable to sense data or axioms does not, as a matter of philosophy, have to yield any less complete a picture of the world. However, in practice this kind of epistemology is conservative in its conclusions, and insofar as it relates to a correspondence theory of truth, it involves declaring certain claims untrue or certain experiences as not within the realm of reality. Coherentism, as we’ve talked about before, need not be any different from foundationalism in any of these regards. However, the crude version can more easily maximize what it speaks on and include beliefs as legitimate parts of a larger truth. This uneven footing means that taking a stance that requires someone to be wrong, or some experiences or affordances to be seen as ungrounded, and thus not real, will be rejected in a coherentist marketplace in favor of the theory that maximizes coherence and treats all affordances as justifiable by a proper holistic understanding. This leads to a bias toward constant negation on the part of the public, as any positive stance or theoretical frame from an institution will necessarily fail to provide this maximal coherence and will be rejected as lacking, dismissive of people’s experiences, and biased by an incomplete theoretical framework or meta-narrative.
Logical perfection and the maximal extension of what counts as valid for consideration, which are impossible if connected to reality, are the public’s criteria now, and so the Center and the institution will always be found lacking. The public in such a context “opposes, but does not propose” any meaningful course of action for this thing called reality.[18] The result is a retreat into the mind and into metaphysical debates better suited to maximizing logical coherence and satisfying a need for cognitive closure, over projects for improving objective, material reality right now, projects that reject the certain closure of a theory of everything.
Conclusion
Our society induces in us an implicit verification schema whose reliance on judging networks of belief statements stumbles because we lack the expertise or empirical grounding to know the truth of any particular node. The result is a public caught in interminable conflict between competing interpretative networks.
This competition is one where the Center and institutions will necessarily be at a disadvantage. It is also one where the theories most likely to propagate are those most concerned with logical coherence over empirical grounding, and thus best suited for yuppie intellectual and metaphysical debate rather than for developing theories of change for reality that seek not coherence, but correspondence to facts known about an independent, objective reality.
Humanity requires an experience of the new to function, and this experience must come from an encounter with reality, not the mere manipulation of shapes in our minds. The more the trends above heighten the metaphysical and the merely logically coherent as criteria for knowledge or truth, the more we will find the project of progress, material political economy, and real meaning lost to us in favor of chasing the empty, nervous cognitive closure of pure, contingent theory.
[1] Dillon, S. (2006, March 26). Schools Cut Back Subjects to Push Reading and Math. The New York Times. Retrieved December 5, 2024, from https://archive.ph/dhgeg
[2] Crawford, M. B. (2015). The world beyond your head: On becoming an individual in an age of distraction (First edition). Farrar, Straus and Giroux. p. 209
[3] One can see an example of this by looking at the American Psychological Association’s Guidelines for the Undergraduate Psychology Major. In particular, the version 2.0 guidelines released in 2013 make this point rather explicit, collapsing previous content categories into categories stressing ways of thinking, noting that this is part of students leaving with a “psychological way of thinking” and “preparing them for the workforce”. https://ww2.ufps.edu.co/public/archivos/pdf/4ee634fcc65abeedd9e84bdba9b1776b.pdf
[4] Promoting Scientific Literacy in the Science Classroom | NSTA. (n.d.). Retrieved December 9, 2024, from https://www.nsta.org/science-teacher/science-teacher-mayjune-2023/promoting-scientific-literacy-science-classroom
[5] English Language Arts Standards » Science & Technical Subjects » Grade 6-8 | Common Core State Standards Initiative. (n.d.). Retrieved December 9, 2024, from https://www.thecorestandards.org/ELA-Literacy/RST/6-8/
[6] Benchmarks for Science Literacy | American Association for the Advancement of Science (AAAS). (n.d.). Retrieved December 9, 2024, from https://www.aaas.org/resources/benchmarks-science-literacy
[7] National Science Education Standards. (n.d.). Retrieved December 9, 2024, from http://www.csun.edu/science/ref/curriculum/reforms/nses/
[8] Ibid.
[9] See National Science Education Standards. (n.d.). Retrieved December 9, 2024, from http://www.csun.edu/science/ref/curriculum/reforms/nses/
[10] Kelp, N. C., McCartney, M., Sarvary, M. A., Shaffer, J. F., & Wolyniak, M. J. (2023). Developing Science Literacy in Students and Society: Theory, Research, and Practice. Journal of Microbiology & Biology Education, 24(2), e00058-23. https://doi.org/10.1128/jmbe.00058-23
[11] Williamson, M. Psyc 451. University of Nebraska-Lincoln. https://psych.unl.edu/psycrs/451/e2/fccvalidity.pdf
[12] Kuhn, T. S. (1996). The structure of scientific revolutions (Third edition). University of Chicago Press.
[13] See the research of Cory Clark. For example: https://osf.io/preprints/psyarxiv/sfubr
[14] Crawford, M. B. (2015). The world beyond your head: On becoming an individual in an age of distraction (First edition). Farrar, Straus and Giroux.
[15] Carney, J. (2020). Thinking avant la lettre: A Review of 4E Cognition. Evolutionary Studies in Imaginative Culture, 4(1), 77–90. https://doi.org/10.26613/esic/4.1.172
[16] Ibid.
[17] Gurri, M. (2018). The revolt of the public and the crisis of authority in the new millennium (Second edition). Stripe Press.
[18] Ibid. See Chapter 8. Quoted text on page 271.