



Cognitive Hygiene
and the Fountains of Human Ignorance


An in-depth look at the ways our own minds mislead us and why critical thinking alone won’t clean up the messes we make in our heads: we can’t just ignore how our brains work, and we really aren’t built to be rational. Thoughts and emotions have to work together.


  by Bradford Hatcher

© 2019, Bradford Hatcher, rev. 3-11-20

Table of Contents


Part One: A General Survey of the Issues

1.0 - Preface and Introduction

1.1 - Truth Words, Taxa, and False Dichotomies
Amathia - the Deliberate Kind of Stupid
True as a Verb, and Being or Holding True
False Dichotomies and Delusional Schisms
Nature and Nurture, Affect and Cognition

1.2 - Would a Rational Being Think Man a Rational Being?
Human Nature and Reality's Nature
The Nature of Mind and Cognitive Science
Emergence, Qualia, and Consciousness
Pros and Cons of Ignorance, Delusion, & Self-Deception
A Duty to Culture and Paying Our Rent

1.3 - Why Critical Thinking Hurts Our Stupid Feelings
Contradiction and Cognitive Dizziness
Critical Thinking and Cognitive Hygiene
Stupid Feelings and Emotional Intelligence
Denial, Relativism, and Limits to Tolerance

1.4 - Science, and Some Other Ways to Be True
Scientific Naturalism, Mathematics, and Formal Logic
Informal Logic, Statistics, and Skepticism
Taxonomies, Categories, Scales, and Matrices
Analogies, Thought Experiments, and Parables
Exposition, Narrative, Anecdote, and Anecdata
Introspection, Phenomenology, and Vipassana Bhavana

1.5 - Framing Issues and Far Horizons
Framing and Perspective
Narrow-mindedness, Points of View and Perspective
Nearsightedness, Spatial Framing and Orders of Magnitude
Small-mindedness, Contextual and Conceptual Framing
Shortsightedness, Temporal Framing and Time Horizons

1.6 - Identity, Belief, and Belonging
Conviction and Commitment
Identity and Identification
Belief and Credulity
Belonging and Confidence
Secular and Sacred Values
Opening Up the System

1.7 - Conditioning, Persuasion, and Ideology
Being Told What to Think and Feel
Classical and Operant Conditioning
Persuasion, Public Relations, and Advertising
Ideology, Indoctrination, and Propaganda
Us-Them, Social Consensus, and Weltanschauung

1.8 - Infoglut, Enrichment, and Lifelong Learning
Critical Mindfulness and Cognitive Hygiene
Sapere Aude, Against the Great Dumbing Down
Infoglut, Selection, Enrichment, and Eclecticism
Objectivity, Perspective, Stereopsis, and Feedback
Unlearning and Overwriting Inferior Knowledge
Lifelong Learning, While Keeping Minds Nimble

Part Two: Cognitive Challenges Across Ten Domains

2.0 - Towards a Taxonomy of Anti-Cognitives
Cognitive Psychology, Bacon’s Idols, Maslow’s Needs,
Psychology’s Languaging Behavior, Gardner’s Intelligences,
Bloom’s Taxonomy, Piaget’s Stages, More Psychologists, Taxa

2.1 - Sensorimotor Domain
Sensorium, Perception, Semantic Memory, Efference, Play, Art,
Imagination, Embodied Cognition, Sensory and Conceptual Metaphor

2.2 - Native Domain
The Other Original Mind, The Evolving Mind, Our Big Brains,
Evolved Heuristics and Processes, Modest Modularity of Mind

2.3 - Accommodating Domain
Accommodation and Assimilation, Constructivism and Bricolage,
Apperceptive Mass and Inertia, Memory and its Plasticity,
Schemas and Scripts, Analogy and Modeling, Cognitive Reappraisal

2.4 - Situational Domain
Problems as Puzzles, Cognitive Development, Problems and Emotions,
Attitude of Approach, Sense of Agency, Processes and Heuristic Tools

2.5 - Emotional Domain
Affect and Emotion, Reason and Emotion, Setpoints and Treadmills,
Hydraulics and Other Fallacies, Emotional Self-Management,
Cognitive Behavioral Therapies, Reappraising and Revaluing Values,
Resentment and Neuroplasticity, Classifying Emotions

2.6 - Personal Domain
Intrapersonal Intelligence, Emergent Selfhood, Anatta or No Self,
The West Plays Catch Up, Self-Schemas and Scripts, Shifting Identity,
Ego Defense Made Easy, Integrity and Character

2.7 - Social Domain
Fellowship with Others, Social Emotions, Social Role and Behavioral Archetyping, Sociobiology, Belonging and Behaving, Individuality,
Consensus and Diversity, Us Versus Those Other People

2.8 - Cultural Domain
Idols of the Theater, Memetical Metaphors, Gene-Culture Coevolution,
Spandrels and Exaptations, Transmission, Narrative Form, Hive Mind,
Ideology, Persuasion, Signal-to-Noise Ratios

2.9 - Linguistic Domain
Idols of the Market, Protolanguage, Nativism, Cognitive Linguistics,
Language Development, Linguistic Relativity, Semantics and Syntax

2.10 - Metacognitive Domain
Metacognition and Metastrategic Knowledge, Agency and Free Would,
Mindfulness and Concentration, Heedful Diligence or Appamada

2.11 - Metacognitive Domain - Thoughts and Practices for Kids
Childhood Adversity, Neo-Piagetian Developmental Stages,
By Domain: Sensorimotor, Accommodating, Situational, Emotional,
Personal, Social, Cultural, Formal Education, Kindergarten,
Secondary School, Introducing the Metacognitive Domain to Kids

2.12 - Metacognitive Domain - Thoughts and Practices for Dults
Not Too Late for a Little Work, By Domain: Sensorimotor and Native,
Accommodating, Situational, Emotional, Personal, Social, Cultural,
Linguistic, Work in the Metacognitive Domain, Elucidogens


Part Three: Toolkits and Collected Anticognitives

3.0 - Toolkits and Anticognitives by Category and Domain

3.1 - Media Savvy and the Smell Test
Garbage In Garbage Out, Some Filters of Baloney and Craap [sic],
Source, Motivation, Evidence, Logic, Lacunae

3.2 - Evolved Heuristics and Processes
By Affected Domain: Sensorimotor, Native, Accommodating,
Situational, Emotional, Personal, Social, Cultural, Linguistic

3.3 - Emotions and Affective States
Affect Suggesting Approach, Affect Suggesting Avoidance

3.4 - Cognitive Biases
Anticognitives by Domain: Accommodating, Situational, Emotional, Personal, Social

3.5 - Coping Strategies
Anticognitives by Domain: Situational, Emotional, Personal, Social

3.6 - Defense Mechanisms
Anticognitives by Domain: Emotional, Personal, Social

3.7 - Logical Fallacies
Anticognitives by Domain: Native, Accommodating, Situational,
Emotional, Personal, Social, Cultural, Linguistic, Formal Fallacies

Bibliography and Links

++++++++++++++++++++++++++++++++++++++++++++++++++++



Part One:
A General Survey of the Issues

               
1.0 - Preface and Introduction

Mit der Dummheit kämpfen Götter selbst vergebens.
Against stupidity, the gods themselves contend in vain.
Friedrich von Schiller

     First, accept my apologies for the typo in the title. It should have read “Foundations.” Or maybe “Mountains.” It’s too late to change it now, so just think of it as irony or something. Apologies also for not being three writers, each with triple the amount of time to work on this. The scope of this book is ambitious, but it’s a scoping document, not a final manual, and it’s for a field of inquiry that doesn’t really exist yet. Several pieces of this field have been developing for ages now, for millennia in Greece, India, and China, but nobody to my knowledge has tried to stitch them all into a meaningful whole. It’s considerably broader in scope than Robert N. Proctor’s idea of agnotology, the study of culturally induced ignorance, although that’s an important aspect. Neuroscience is the latest major contributor, but that’s just getting started. The effort to understand how our minds work, in the broader effort to learn who or what we are, has been missing a crucial piece: an effort to understand how our minds Fail to work, in the broader effort to learn who or what we are Not. We can learn much about our minds from studying their weaknesses, and perhaps unlearn some of the illusions and delusions that we’ve collected about ourselves. Mapping the terrain of human ignorance is easily as broad a task as mapping that of human knowledge. At least it’s certain that we are ignorant of more than we know.
     We’ll be asking how much of human error and stupidity is preventable. But to be honest, it’s too late for the majority of adults to correct a majority of the errors we carry around. Catching them early in childhood gets a much better prognosis, which holds lessons for us as parents and teachers. For some time, I thought the words “critical thinking” might figure into the title. Critical thinking is supposed to be an objective or rational analysis of facts to form judgments. It means perspective taking, openness to new and disconfirming evidence and propositions, dispassionate thought, rigorous logic, and an array of problem-solving skills. This would have required a great deal of backpedaling or apologizing early in the Introduction, however, as I tried to explain why instruction in critical thinking has so little effect, particularly when taught to grownups. The phrase itself is problematic: it suggests the very perspective that causes the problems. It suggests an exaltation of reason, with some hierarchical authority over other mental processes. Most of what humans have understood about critical thinking is centered on logical error and inconsistency. But the more we look into critical thinking research, the more we see that what's been written is way too heavy on the philosophy and pure logic and far too light on the psychology: the emotional components, ego defensiveness, cognitive inertia and bias, delusion, anxiety about peer pressure and belonging, whininess about having to unlearn, saving face, fear of the unknown, hot buttons, and whatnot. Critical thinking alone, the way it’s normally approached, is about as effective at fixing our thinking as announcing that you’ll quit smoking is at making you quit. Thoughts and words with nothing behind them are just a sham, a pointless practice when there isn’t more of one’s being participating. Emotions have just as much say as reason in what and how we think. Mind is embodied, not some ghostly guest in the brain pulling levers. And it floats on gooey neurochemical substrates.
     Just to get away from the term critical thinking, I want to use “critical mindfulness” and/or “cognitive hygiene.” What will be meant by critical mindfulness is simply adding a bit of knowledge and analysis to the more purely attentive or contemplative process. In particular, this is knowledge of where, how, and why the human mind can effectively practice self-deception and error. Cognitive hygiene refers to an intention to correct or clean up some of this deception and error, both in our own minds and in those we can influence: any set of practices to clean the mind of clutter, crap, and unwanted parasites. While this cleaning still includes the skill sets of critical thinking, it also embraces tools of affective or emotional self-management, since the affective side of perception drives so much of human error.
    By the time we reach our mid-twenties, our prefrontal cortices have done most of their ripening and we now have a better idea of the kind of people we want coming into our homes. The hoodlums and ne’er-do-wells, the Jehovah’s Witlesses and vacuum cleaner salesmen, and police without warrants stay outside. Party guests have invitations and their plus-ones are trusted more than strangers. Roommates are more carefully vetted now, and hopefully spouses are even more so. But why have we still not learned to better vet the guests that come to live in our brains? Even uninvited ones, that just squat there eating our food, burping and farting, and telling us how to live our lives and how to spend our money? Do we really have this little respect for the place our minds call home? We at least need to start making better inquiries before we let them in. This is what’s meant by cognitive hygiene: not letting guests make messes in our minds, and cleaning up after those we’ve shown to the door.
     The term anticognitive is already in use as an adjective. It’s introduced here as a noun, as any mental process (sensory, perceptual, behavioral, cognitive, affective, or mnemonic) that comes between ourselves and the true, or being true, or knowing the world either as it is or as we really need to know it. There are useful anticognitives which limit unnecessary loads on our minds, and bad ones that lead to maladaptive behavior. The human toolkit for maintaining ignorance is impressive as hell. Not all anticognition is blameworthy or a defect of character. A number of issues are simply the failure of our evolved sensory, perceptual, and mental faculties to be perfect all of the time. They may be adapted to a simpler world, or only capable of getting our perceptions started with quick approximations of what we need to assess and respond to. A distinction is made here between these innocent failures and the more culpable kind. Not all stupidity is blameworthy either, and much can’t be helped. It’s the deliberate kind we wish to assault.
     It should be noted that this is not, and doesn’t pretend to be, a science book, despite the many citations to the numerous sciencey books and articles in the Bibliography. There is much too much conjecture and original thought here, enough to give this more in common with 19th century philosophy. There is no institutional affiliation. Besides, the academic route only allows you to have one new idea per book or thesis. In some ways, I’m not even doing philosophy: I’m just organizing and writing down stuff that hasn’t been peer-reviewed, or even edited. It isn’t really setting forth arguments either, although you might find several hypotheses here. Unsubstantiated assertions made here are to be taken as no more than hypotheses, if not just simple prophecies. Further, you will also encounter assertions made here that are just plain wrong. I just don’t know where they are yet, and in some cases, science doesn’t either, so be vigilant.
     The overall structure and taxonomic divisions are original. This had to be, in order to present the material coherently. The subject is encyclopedic in scope, so this book can’t be expected to treat its many subjects in all of the detail they may deserve. Again, this is a scoping document. Most readers can expect to run across plenty of obscure or foreign terms that are given no further explanation. These may be thought of as side journeys you may or may not wish to take. They aren’t hyperlinked to further definitions, so going off hunting has to be your own choice. We’re necessarily skipping over the surface of large disciplines, with the idea of leaving the reader with at least a keyword or two to get the further research started. But investigation and education are really all about following the tracks, traces, breadcrumbs, and clues. This work isn’t entirely dispassionate, either. It’s written because our ignorance has gotten the species into deep trouble with the environment and each other, and we’re running out of time to get our shit together. For this alone, a closer examination of the things we allow to come live in our minds is well-advised and in order.
    The importance of giving our future generations an early head start became increasingly obvious as research progressed here. This runs a lot deeper than simply declaring we should teach kids not what to think but how, not what to feel, but how. Given what we’re seeing in what America and too much of the developed world calls culture, the only real hope we have is getting to the young kids with cognitive hygiene skills before the religious people and ideologues can shut down their young minds. There are plenty of analogs to foot-binding out there, just made for tender, young brains. Cognitive bias sets up like concrete, and the only things that will eat through that in any reliable way will be many years of therapy, some years of mindfulness practice, accidental epiphanies, or psychedelic drugs (which are hereafter referred to as elucidogens). Reason alone isn't enough. Reason alone ignores the emotional components of delusion, like the peer pressures and the hurt egos and clinging to the years invested in error. I’m more convinced now than at the start of this research that early political, religious, and consumer indoctrination ought to be regarded as child abuse. There are better ways to raise ethical children that respect who we are as evolved biological organisms.
    Be advised that there are a number of statements made here with a little more attitude or assertion than perhaps necessary. These are usually found as clarifying examples of an idea being presented, and often on political or religious subjects. While these will almost certainly betray a certain philosophical leaning or prejudice of yours truly, many are written the way they are so that you might examine your own emotional reaction to them, more intended to call attention to any offense you might take than simply to assert or offend. That’s in addition to having some fun with words.
     This book is divided into three parts: Part One is a broad and general survey of the more important issues and themes we’re facing. Part Two develops a new taxonomy of ten mental domains, specifically for scoping purposes here. These domains are various regions of the human experience in which anticognitives can be seen to operate. They aren’t separate lobes of the brain. This is an artificial construction with a superficial resemblance to Gardner’s Multiple Intelligences, Bloom’s Taxonomy, Bacon’s Idols, Maslow’s Needs, the Big Five Personality Traits, Piaget’s developmental stages, and others. I don’t mean to reify rooms in the house of the mind here, although it may be that an experienced reader of fMRIs might one day be able to distinguish between activities in these several domains. Primarily, however, this is to organize the discussion and not develop a new phrenology. Part Three consists of several cognitive toolkits, and a few enumerations of specific anticognitive processes to watch for, sorted by category and domain. These should not, however, be regarded as afterthoughts. Four types or classes of our anticognitives have long undergone some informal development: cognitive biases, defense mechanisms, coping strategies, and logical fallacies. To my knowledge, nobody has teased them apart and reorganized them into any kind of larger or more comprehensive structure, although Vasco Correia (2011) has made a beginning in relating some biases to fallacies. This task continues a little here. I don’t feel any particular need to be original with the categories themselves, except for moving a few items around, but it was about time that someone put all four in one place and dealt with them as a group belonging together in a larger category. The sorting of items within each, according to its prominent anticognitive domain, is original.




1.1 - Truth Words, Taxa, and False Dichotomies
   
    Amathia - the Deliberate Kind of Stupid
    True as a Verb, and Being or Holding True
    False Dichotomies and Delusional Schisms
    Nature and Nurture, Affect and Cognition

“Ignorance is not just a blank space on a person’s mental map. It has contours and coherence, and for all I know, rules of operation as well.”
Thomas Pynchon


Amathia - the Deliberate Kind of Stupid

     The word stupidity comes from the Latin verb stupere, for being numb or astonished, and is related to stupor. It’s the most general term for learning disability, and covers too much to be of use here, especially cognitive challenges due to genetic conditions and brain injury. The meanings, of little use to a serious inquiry, include stupefaction, idiocy, denseness, imbecility, foolishness, fatuousness, inanity, all of them affectively loaded. We need a better, less pejorative set of words for the cognitively handicapped. The word ignorance also has a broad range of meanings, from simply being unaware or unacquainted, to being gullible or deceived, to being too preoccupied to notice something of value or importance, to being biased against new information by prior learning, to open disregard or devaluation of new experience, to hostile rejection of disconfirming evidence. The latter parts of this spectrum are the most concerning and constitute a more willful or deliberate ignorance. The Greeks called this amathía, ἀμαθία, disknowledge, stupidity of the vilest kind, an unwillingness or refusal to learn, distinguishing this from ágnoia, ἄγνοια, simply not knowing, whether from a lack of natural ability or from innocent delusion, as when a child grows up with an ideology with no exposure to alternatives. Amathía is a kind of mental cowardice that hides in closed ideologies out of fear of being different or stepping out of line, and is easily maintained in others by people in power through the manipulation of fears and insecurities and the strategic use of bogeymen.
    We can distinguish innocence from gullibility. Innocence can simply mean callowness, immaturity, inexperience, artlessness, unaffectedness, youth, guilelessness, and even simple sincerity, but it doesn’t imply that we lack the tools to learn. Gullibility, credulity, or credulousness imply that these critical features are absent.
     Agnoia is distinct from the English agnosia, an inability to interpret sensations and recognize things, usually due to brain damage. Intelligence, or IQ, is suspiciously irrelevant to several forms of ignorance, and people with high IQs and post-graduate degrees can be both stupid and ignorant. A brain surgeon might still believe Earth to be 6000 years old. Intelligence is no proof against being fooled or self-deceived. Michael Shermer writes: “Folk wisdom has it that smart people are harder for magicians to fool because they are cleverer at figuring out how the tricks are done. But ask any magician (I have asked lots) and they will tell you that there is no better audience than a room full of scientists, academics, or, best of all, members of the high IQ club Mensa. Members of such cohorts, by virtue of their intelligence and education, think they will be better at discerning the secrets of the magician, but since they aren’t they are easier to fool because in watching the tricks so intensely they more easily fall for the misdirection cues.” Woody Belangia writes in his blog, “Where ignorance does become shameful is when (1) we are presented with evidence of our ignorance, (2) the matter about which we are ignorant is of great importance, (3) we make no effort either to cure or mitigate the consequences of our ignorance, and, (4) we continue acting as if we were not ignorant.”
     Ignorance is from the Latin ignorantia, want of knowledge. The word ignoration can be used in the sense of an act of ignoring. I think we should introduce a false, retroactive etymology for the word that suggests it derives, via back formation, from the verb “to ignore,” sort of a self-conscious genetic fallacy that has its own kind of truth. We can distinguish between several kinds of this, especially between simple ignorance, willful ignorance, and self-deception, and we can also look at the subject in terms of implicit vs explicit motivation. Rational ignorance, from Wikipedia, is “a voluntary state of ignorance that can occur when the cost of educating oneself on an issue exceeds the potential benefit that the knowledge would provide.” Some things, however true, are simply not worth the effort of learning, and these values vary between individuals for individual reasons. Many have great reasons to be ignorant of rumor and gossip, or the latest in fashion trends. And there is a charm in some forms of ignorance, especially in children and simple folk, the less quick, keen, and clever, that we simply call innocence instead.
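     Rendered as a toy decision rule (my notation, not Wikipedia’s), rational ignorance says: remain ignorant of a topic X whenever

        c(X) > E[v(X)]

where c(X) is the cost in time and effort of learning X, and E[v(X)] is the expected value of knowing it. Both terms are subjective, which is why one person’s rational ignorance is another person’s essential knowledge.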
     Our concern here isn’t so much with the innocent or harmless ignorance, but with that leading to species-wide maladaptive behavior that ruins the world for future generations and the remainder of life on earth. And this concerns such themes as mass delusions and the adoption of popular ideologies, which often contain their own defensive anticognitive armament. Illusion is something sensed or perceived wrongly, a deceptive appearance or impression. It largely happens prior to deploying our cognitive apparatus. Delusion is often thought of as ignorance maintained despite being contradicted by a generally accepted reality or rational argument. This kind of ignorance can be offensive, take offense, and take the offense (with deadly weapons and WMDs). Much will be said here about how this ignorance digs itself in. But if we are looking to cure the ailment, we should first extend the benefit of the doubt. Hanlon’s razor suggests “Never attribute to malice that which is adequately explained by stupidity.” Then we look for ways to sneak an education past the defensive line instead of battling our way through it. There is also a principle in law that reads ignorantia juris non excusat, or ignorance of the law is no excuse, meaning that the law applies even to those who are unaware of it. So there are assumptions we make about a certain level of cultural literacy being required of us.
    Avidyā (Avijja in Pali), not-knowledge or blindness, is an important concept in Vedanta. It’s also a term with plenty of facets, like ignorance, nescience, unawareness, not knowing, delusion, and misunderstanding. Methodical work on our avijja was carried forward and developed extensively by Buddha, who at times used it interchangeably with moha. This has some slightly different connotations of delusion, mental dullness or darkness, infatuation, stupidity, bewilderment, confusion, ignorance, folly, and sentimentality.
    Naive realism, also called direct realism, or common sense realism, suggests that the senses provide us with direct awareness of objects as they really are, possessing the properties they appear to have, including solidity, color, smell, and permanence. Our perception is regarded as a more or less reliable guide to what’s out there. This approach might dismiss objections that our senses deceive us on the grounds that our perceptions are reliable enough to make our way through life. Fallibilism says human beings might be wrong about their beliefs, expectations, or their understanding of the world, but still be justified in holding mistaken beliefs. No proof or disproof is certain, but certainty isn’t required. We do what we need to do, and know what we need to know to get by, and that’s the only reality we really need.
    Perspectivism, advanced most enthusiastically by Nietzsche, suggests that all ideations are configured from a point of view, and other points of view support other conceptual schemes. But while assessments of truth are relative to perspective and cannot be taken as definitive for this reason, this doesn’t claim that all perspectives are valid, much less equally so. Relativism takes this to the extreme: every point of view is valid in its own way. Finally, several mystical traditions, and especially the new age movement, are loaded with platitudes about reality being the product of our minds. The extreme form of this is solipsism, where even you, dear reader, are but a figment of my hope or imagination. Robinson Jeffers objects, in Credo, “The beauty of things was born before eyes and sufficient to itself. The heart-breaking beauty will remain when there is no heart to break for it.”

True as a Verb, and Being or Holding True
     Four human professions get to use true as a verb. An archer trues his aim. A carpenter trues a wall. A marksman trues his sights. A wheelwright trues a wheel. How cool would it be if we could all use this, forget about finding the truth, and concentrate on adjusting ourselves to reality? It’s nothing but trouble and vexation as a noun. Truth is as bad as perfection, and often the same thing, only good for stupid platitudes that encourage us to stop growing. As a verb, closing in on the true is a process of optimization. We move ever closer to an ideal which we might even acknowledge to be unreal, or a fiction, or an asymptote that’s never really attained. We settle for various reasons when we’re close enough, when the diminishing returns on that final bit of perfection get too pricey. It’s a lot more humbling, too, to acknowledge ourselves to be a little short of perfection. And sometimes we can take pride in avoiding the smugness that comes with having the final answers.
    In 2016, Oxford Dictionaries chose “post-truth” as its word of the year, defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” This joins Stephen Colbert’s neologism of truthiness (which came first): the quality of seeming or being felt to be true, even if not necessarily true. These are logical consequences of postmodernism, deconstructionism, and relativism. With regard to the truth, these are fads that are hopefully on their way out. They may linger a while longer, but not in those who matter. Meaning still exists intrinsically in text, apart from its interpretation and the additional meanings that come from that. Of course objectivism has seen its best days as well. Perspectivism still holds its ground, as long as it doesn’t slip into relativism and a democratization of knowledge that assigns equalitarian values to every contradictory datum. It’s true that human truth is interpretation, filtered through sensory, perceptual, mnemonic, personal, social, cultural, and linguistic templates, where our anticognitives are hard at work as well. We are right to be vigilant, and suspicious of our convictions.   
     The pre-Daoist philosopher Zhuangzi, in Chapter 6, writes of the True Man, zhēnrén. He has these two things to say: “There must first be the true man, and then there is the true knowledge.” And, “The breathing of the true man comes from his heels.” Five or so centuries prior to this, the 3000-year-old Book of Changes uses the phrase “be true” (yǒufú) 26 times, and yet nowhere does it spell out what true really means. In part, this is because it’s the nature of change to demand a situational ethic. But it also carries an assumption that the sense of what’s right is intrinsic or innate. Yǒu is to be, have, or hold; fú is an inner certainty or confidence in stance. There is much confusion here in the scholarship, both from etymological guesses and from the polysemous use of the word as “capture.” The graph depicts either a hand or a bird claw hovering above an egg or a young life form, such as a child. It’s claimed that this depicts a hand seizing a person, a capture. Yet other etymological speculations might make equal sense: if that claw in the character were mine, and the child too, I would be telling the predator: “I’ll take my final stand here: this is worth defending.” It would be a protective gesture, not a predatory one. Getting the meaning of the word from the pictures is not, and never has been, an exact science. But polysemy aside, the gloss of fú as true is the only one that makes any sense in all 26 contexts.
     Mahatma Gandhi used the same ideas in coining his term Satyagraha, for holding true. Satya is Sanskrit for the virtue of truthfulness in thought, word, and deed, from the root sat, for existence, being, or reality. Agraha comes from the root grah, to grasp, and means holding firmly. Both yǒufú and satyagraha ask that conscience be at the heart of our choices and that we hold to that in our behavior. And both suggest that the true that they hold to is a force that derives from inner character. This of course calls to mind civil disobedience and “speaking truth to power,” for which the Greek, and especially Cynic, virtue of parrhēsía, παῤῥησία, outspokenness, candor, or fearless speech is the proper term. We stand and speak out for what’s right, even if we have to stand alone, blowing our little whistles at the walls until the walls come down. Despite the source of our conscience in our inner character, the immense courage that outspokenness requires usually seems to demand a sense of conviction to something not just outside ourselves but more important than we are as well. Otherwise the forces of peer pressure and the threat of no longer belonging are just too much. We have so far managed to skirt the noun truth, the question of being true to what, and the condition of possessing the truth. That’s the subject of the rest of this book, but not by way of figuring out or defining what truth is. We’re going at that backwards instead, by figuring out or defining what error is. Truth will be just a small part of what’s left after we get done with that.

False Dichotomies and Delusional Schisms
     We like to take our ideas, about anything and everything, and arrange them in piles. The number of piles varies with the job that we want to accomplish. Sometimes it’s as many as we need to get the job done, so the number grows as we progress. The number of recognized chemical elements has gradually grown from the classical 4 to 118. Sometimes the properties of the numbers are so interesting that they drive the content, like the number 360 did for all things temporal, astronomical, and circular in Babylon. Twelve and Ten are big favorites, but Four and Three are even better. The mind being what it is, our most favorite of all is Two, with an “or” or a “versus” between them. Common examples are nature vs nurture, biology vs culture, character vs environment, emotion vs reason, affect vs cognition, body vs mind, animal vs human, secular vs sacred, fate vs free will, conservative vs liberal, gradual vs punctuated evolution (creeps vs jerks), and linguistic nativism vs linguistic relativity. In most cases, these dyads are both locations on a continuum and are ultimately found to be coevolutionary.
     Ancient China had its famous yīn and yáng, shadow and light, and its lesser-known róu and gāng, flexibility and firmness. These, along with the wǔxíng, or Five Agents, were formed as categories in the Spring and Autumn period and were developed into philosophical concepts by Zou Yan (305-240 BCE) with his Yinyang School. The concepts were later sent backward in time to be made the foundation of everything, particularly in traditional medicine and the original Book of Changes, which in fact only mentions yīn once, as a shadow. Even Laozi, widely regarded as the founder of Daoism, only mentions yīn and yáng one time, in Chapter 42, and even here it’s a part of a triad, with qì in the middle. Zhuangzi, Daoism’s “cofounder,” mentions them only in the possibly apocryphal Books 3 & 5. This is what categories do: sometimes they obsess us. But we can’t sort all things into two piles. For one thing, there are different kinds of dichotomies proposing different kinds of relationships, and conflating these can get pretty absurd, as when we might assert that man is to woman as superior is to inferior; us is to them as good is to evil; caucasian is to negroid as light is to darkness; or self is to other as one is to zero. The world just isn’t binary in this way. True dichotomy is just a cognitive convenience a lot more often than it’s a reality.
     False dichotomy, also known as false dilemma, and the black-and-white or either-or fallacy, is ubiquitous in the soft sciences and journalistic reporting thereupon. The majority of the public battles between schools of thought in science seem to be founded in some way on false dichotomy, from nature vs nurture, to affect vs cognition, to gradual vs punctuated. It’s as embarrassing as the prevalence of non causa pro causa, or false causality inference. Researchers and reporters alike feel compelled to pick a side. The process may be most visible elsewhere, in the adversarial justice system, where we see two very expensive, mealy-mouthed advocates arguing exaggerated half-truths and seldom if ever acknowledging, conceding, or stipulating any middle ground. We are expected to choose sides in debates. And dualistic thinking is also a cognitive factor strongly associated with belief in a deity, particularly in the West, where the divine squares off against either the secular or pure evil. Of course, it’s also a fallacy to expect or demand that the truth lie in the middle, to claim that compromise is always the answer, or that the best truth or solution must lie between opposing sides in an argument. It’s often true that both sides of a debate have some piece of the answer or solution. In a false dichotomy, the gray area between the poles, the whole middle stretch of the continuum in question, simply gets dismissed. Finally, polarization maximizes the simplicity of conflict, and the need for recruitment maximizes the likelihood of the two sides being close to evenly matched. When losing members, one side of an argument will modify its position enough to recapture defectors. The opposition is self-adjusting.
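     For readers who like to see the machinery, the shape of the trap can be sketched in standard propositional logic (a minimal schema, nothing peculiar to this book):

        1. A or B         (premise, presented as exhaustive)
        2. not A          (premise)
        3. therefore B    (disjunctive syllogism)

The inference from 1 and 2 to 3 is formally valid. The fallacy lives entirely in premise 1, which is false whenever some third option C exists. A stock example: “either we raise taxes or we go bankrupt” collapses the moment “cut spending” is allowed on the table.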
     A number of the issues presented in this book are deeply entangled with polarizing issues and antagonistic theories. In most cases, you may find that I’ve taken up a position that acknowledges the merits of both arguments, usually asserting that these sworn enemies are in fact either coevolutionary or complementary. This will certainly be the case with the likes of body vs mind, animal vs human, secular vs sacred, fate vs free will, conservative vs liberal, gradual vs punctuated evolution (creeps vs jerks), and linguistic nativism vs linguistic relativity. This position may also wind up asserting that those who have made the more extreme or polarized assertions are simply dead wrong to have done so, the exact opposite of their being right.

Nature and Nurture, Affect and Cognition
    Several of our more common false dichotomy issues can be discussed later as the need arises, but two of them are global enough in our theories to merit being sketched early.
     Nature vs nurture is an overall debate that subsumes biology vs culture, genetic vs cultural evolution, physis vs nomos, and character vs environment within its scope. In our attempts to find a balance between these, our inquiries have been hampered both by a too-primitive understanding of sentience in biological organisms and by a conceptual body-mind or body-spirit dualism grounded in both European philosophy and Western religion. The debate itself is evolving from the question of which is greater or more real into the question of which predominates in which situations. The notion of the human mind as tabula rasa, or blank slate, thrived in ancient Greece and continued in force through Locke, Rousseau, and Freud. This asserted that the mind of man (unlike animals) really has no original nature, but is inscribed by culture. The stress placed on nurture left us dangerously out of touch with our basic biological functions, which include thinking. “Since I have known the body better,” said Nietzsche’s Zarathustra to one of his disciples, “the spirit hath only been to me symbolically spirit; and all the ‘imperishable’ - that is also but a simile.”
    Neuroscience is gradually taking the tabula rasa idea apart. More of us are no longer claiming that animals have instincts while humans have souls or reason. We are finding structures in both our endocrine and nervous systems that establish an innate human nature. Cybernetics and computer metaphors have given us the idea that whatever is not inborn in the structure and function of the brain may be regarded as the writing writ onto the slate. It just became software for our wetware. We are still somewhat programmable on top of our natural state, so the blank slate idea never entirely died. We have something akin to a nature, something somewhat malleable, and it even contains such processes as codes and cues for social and moral behavior. Evolutionary biology and psychology have since been trying to identify evolved perceptual, emotional, and cognitive processing modules or networks in the brain that represent inherited human universals. These would have to be perceptual processes, or schematic and behavioral “archetypes,” that we have had time to encode genetically, adaptations we have made over hundreds of millennia, to a life in a generally simpler world.
     All of this is gradually developing into a synthesis called dual inheritance theory, or biocultural evolution, or gene-culture coevolution, and like any good synthesis, the whole is greater than the sum of its parts. Not only are the gray areas between the extremes being mapped, without anyone being forced to choose a side; coevolutionary effects are also being identified. Clearly, mutations in the brain began at some point to permit better protolinguistic software to develop, which conferred adaptive value, and this got strengthened in the genome. Walking erect allowed us to carry tools and other goodies around. On the flip side, our culturally transmitted ability to manage fire and construct weapons and tools led to altered diets and digestive systems and allowed larger brains to evolve. Our mobility and cultural knowledge allowed us to migrate into wildly different environments, where we developed clusters of genetic adaptations to specific niches, several ecotypes that might one day have led to speciation, but which only led to the racial differences that are now being erased again by globalization.
     Our dual inheritance presents its greatest application puzzles in two areas. The first is in relation to attribution issues, such as deciding whether to blame a crime on innate character defects (like a cowardly refusal to learn) or on a defective environment and childhood adversity. Do we have to blame the parents or their culture whenever our good children turn out badly? Whose responsibility is the drawing out, education, or construction of the virtuous character? Does the individual have any responsibility for this, and if so, how much? How much carrot is appropriate (which assumes a basic appetite) and how much stick (which assumes a reluctance to be trained)? The second puzzle would have us scrap or dismantle our programming and ideologies concerning who and what we are, which are largely based on cultural dogmas, and write new software for the human brain that both respects and optimizes what we really are and how we really feel and think, without relying on myths and delusions. Can this even be attempted with partial success in adults, or do we need to start by experimenting on our offspring before they can learn what we’re up to? Do we have any sort of license or right to do that, or are we more obligated to bring them up entirely within the culture of their birth?
    Affect vs cognition, which also takes the form of emotion vs reason, and heart vs brain, is every bit as pervasive in our thinking as nature vs nurture. This, too, has one foot in the body vs mind-or-spirit dualism, and has deep roots in ideas of human exceptionalism, in which man is endowed by the creator with god-like reason, which is able to exercise authoritative, top-down, executive commands to brother ass, the meat machine that it occupies as ghost. The wicked emotions are damned nuisances, and even the happy ones should be suspected of trickery and treachery. Our reason or soul has divine powers to make choices independently of biology, provided we can properly resist the temptations of the flesh. And yet, somehow, “all of us have a very thin veneer of civilization painting over what underneath is a savage and marauding beast” (Harry Crews). We may owe the veneer idea (that masks what we really are) to Thucydides. But we just aren’t built like this at all.
     Cognition and reason also seem to be our truer nature to those who believe that computers will one day become sentient or conscious (as distinct from simply intelligent), and that computer matrices will one day accept data transfers of sentient beings into more durable forms. Any function that affect or emotion might have, it is supposed, can be replaced with more insistent reiterations of data to take the place of meaning, salience, relevance, and values. But drawing a line between cognition and emotion is erroneous: we simply don’t have cognition without an affective component. Things don’t arise into consciousness without a little chemical push from below that’s based on personal meaning, salience, relevance, or value. Affective neutrality means there will be no meaning, salience, relevance, or value, and therefore no paying of attention or consciousness. That’s even true for such emotionally neutral tasks as adding two and two.
     The mind is actually distributed throughout the organism, concentrated but not localized in the brain, and includes all of the substrates found in the body’s endocrine system, the chemical juices and soups. The mind reaches out to the fingertips. It’s Zhuangzi’s breathing from the heels. The mind is more like xīn, the Chinese understanding, often translated heart-mind. This word depicts a heart in silhouette, and combines the meanings of heart, mind, and core, and might best be understood as mind in the sense of “do you mind?” This implies care, and once again, personal meaning, salience, relevance, and value. Mind, like attention and consciousness, can be reimagined as emergent qualia rather than substance. And it emerges with affect and cognition together, out of biological, neurochemical, and neuroelectrical substrates. For our purposes here, there will be some subjects referred to as affect or emotion, and some referred to as cognition or thought. It should be stipulated here that we are speaking of the affective or cognitive side of what is really a combination of the two. Emotion and thinking cannot be separated any more than magnetism and electricity, or particles and waves, or space and time, or gravity and mass.



1.2 - Would a Rational Being Think Man
a Rational Being?
   
Human Nature and Reality’s Nature
    The Nature of Mind and Cognitive Science
Emergence, Qualia, and Consciousness
    Pros and Cons of Ignorance, Delusion, & Self-Deception
    A Duty to Culture and Paying Our Rent

“What a piece of work is man, how noble in reason, how infinite in faculty, in form and moving how express and admirable, in action how like an angel, in apprehension how like a god, the beauty of the world, the paragon of animals. [It’s telling that the next lines are usually omitted in quotations] And yet to me, what is this quintessence of dust? Man delights not me; no, nor woman neither; though by your smiling you seem to say so.” Hamlet, Act II, Scene 2

“One way of looking at the history of the human group is that it has been a continuing struggle against the veneration of crap. Our intellectual history is a chronicle of the anguish and suffering of men who tried to help their contemporaries see that some part of their fondest beliefs were misconceptions, faulty assumptions, superstitions, and even outright lies.” Neil Postman and Charles Weingartner, Teaching as a Subversive Activity.


Human Nature and Reality’s Nature
    Culture tells humans what humans are, and what’s most widely circulated is generally believed. It’s a question of which version gets the best exposure. In the distant past, this was the storytelling, the making of myths, and re-telling of legends. This held and still holds the power of narrative, the most effective way to transmit cultural teachings to others (more on this later). Things began to change in earnest with the rise of the Bronze Age empires, with their laws and codes of behavior, and this was soon followed, 25 centuries ago, give or take, by the rise of both philosophers and religious founders. Our identities began to change from those told in narrative stories to those described by conceptual schemas and scripts. Through all of this, it was never the common man who told us what we are, it was never the farmer or goatherd. It was the thinker, and eventually the writer, and generally the more rational among us, who took the lead on behalf of the culturally creative. They saw that we were thinkers. The manipulators and moralizers were in there too, who would specialize more in nationalist and religious ideologies, and thereby influence the majority, although the romantics and poets would restore a little bit of color and comfort to our identities. It’s not surprising, therefore, that we’ve been duped for a very long time, with or without good intentions. An honest search for human nature, one that was stripped of both narrative and ideology, has remained the pursuit of only a very few, although this is at last gaining some momentum with the rise of science and other means of careful inquiry. However, even much of the best psychological research being done now is being done on white European and American undergraduates, or the economic man, which some might call a major sampling error.
     At some point in our cultural history, some great wit decided to assert that man was a rational animal, and managed to convince the literati, who spread the news to the intelligent, who convinced the less so by the exercise of their intelligence. Our capacity for self-delusion and susceptibility to flattery allowed this idea to stick. We are homo sapiens, wise and rational man. But its widespread acceptance is its own refutation. I think all we need do to verify this is take an honest look at our fellow man to see that he’s something quite other than rational. He is, however, predictable, and easy to manipulate, and there are rules in that for the finding. We find now that we’ve designed or developed much of human culture around delusional ideas of who and what we are, and this doesn’t seem to be serving us as well as it used to. A large percentage of our mental software is simply incompatible with our wetware, including our extended computer analogies to the human mind. Might it not serve us to get better ideas of what we are, to replace what we only wish we were? In order to do that, we have to look well beyond the rationality that we have in fact so poorly developed. We have to fully account for the many roles of affect and emotion, which are far too little acknowledged even in fields like psychology. Very little of what humanity thinks is rational. The name homo sapiens is a joke. Most humans have either neglected or refused to develop higher-order cognitive functions. People think instead in simple sensory and conceptual metaphors, and in scripts and stories.
     Human exceptionalism has become one of the major components in human parasitism, alongside overpopulation, overconsumption, and amathia or willful ignorance. It’s maladaptive, and it’s rendering our future bleak. We are not made in any god’s image, and we are not wholly other than animal. At least above the level of medical testing, denying that we are animals has precluded a lot of research into what we are. We’ve hardly looked at animal communication, barely even primate communication, for zoological substrates to our linguistic abilities. Neuroscience is slowly coming to the rescue, but even here there are silly battles waged over false dichotomies, and, of course, the hard problem. Stubborn heirs to the Skinnerians, the reductionists and materialists, will still not admit that there is a subjective dimension to experience that in any way counts as real, even if this be no more than strongly emergent qualia that can somehow have effects on the real world. But qualia, by definition, have biological substrates, and don’t emerge or exist without them. If we want to understand whatever free will we might have, we won’t find it hovering in some ether, high above our biology, barking commands to the meat machine.
     Reality’s nature is different from ours. Given what we know about the brain, there is no chance of our ever experiencing a truly objective look. We even lack the right metaphors. Contrary to a plethora of platitudes, the mind does not create reality. Reality is independent of human thought. There exists a potentially objectifiable reality, but having only one set of senses, working from a single perspective, in a single point in time, with the ulterior motivation of a living being, leaves us inadequate to perceive it. The human mind only creates the human mind’s reality. Such is emergent subjectivity. But then singly and in groups, human minds may act to create changes to the larger reality, many of which may approach great existential significance. Such is emergent social construction. This can constitute governments. And it can damage the world in serious ways. This is the sociopolitical reality of human life within the human culture here on Earth. This is mistaken for the larger reality by an astonishing number of humans. Within this artificial world, however, we may yet infer a number of what might be thought natural laws of human behavior, also expressions of evolved universals. We may create natural and cultural rights out of these laws, and models for moral and social behavior.
     Imagine a young person on a large dose of some elucidogen, in the dark, in a large tub of warm water, perhaps with a lover alongside. The world has been transformed into ever-changing clusters of vibrating energy, flows moving through energy systems, integrated into the surrounding darkness through tensor fields. Looking to his lover, he sees only nerve structures of fire and light. Then, looking four-dimensionally down their evolutionary line, he returns them to when they were the same early human being, and still further to when they were both one fish. This experience is actually much closer to the real, objective reality than everyday, naive realism takes us, but it’s still built almost entirely on metaphors, human mental representations of human sensory perceptions. Our minds have no choice but to begin with a combination of our perceptions, constructed from sensorimotor neural inputs and whatever heuristics come standard with the human brain, and then to use these as metaphors for things not sensed as they are. We can take this further, with mathematics, thought experiments, and new models that we can present to our sensory apparatus, but we won’t bring the true reality of an atom or a galaxy into our minds for study. We make models, and then compare those to our observations, and then revise our models. But the model isn’t the real, and the map is not the territory.

The Nature of Mind and Cognitive Science
     The human mind is a wet, complicated mess, despite its contemplation of lofty ideals. It isn’t some aloof, disconnected observer, squatting behind our eyeballs. As with the Chinese term xīn, mind is actually a process of minding, whether at a cellular and preconscious level or emergent and conscious. We awaken to things that concern us, or threaten to, or promise to. We turn those stimuli into meanings to further mind or dismiss. We mind like we mind our manners, our Ps and Qs. And mind, being more like a transitive verb, doesn’t really exist without something to mind or attend to. Minding begins with processed reconstructions and simulations from sensed input and other neuronal activities. It doesn’t begin in raw reality, to which we have no direct access. We will tend to take our sensations, stirrings, memories, cogitations, and inferences at face value. But there’s no real need to believe everything, or even anything, that we think and feel. The mind didn’t evolve to depict the truth. It evolved and persisted when it proved useful in helping organisms live long enough to make and raise babies. This meant an evolved ability to make quick assessments of the immediate environment, flee from deadly threats, run vicarious trial and error scenarios, infer probable outcomes, and get excited enough to pursue those outcomes.
    The mind has lots of native limitations. Without the prosthesis of cultural learning, it’s limited to what’s called naive or common sense realism. Naive realism is a world of the brain’s simulations, representing clues collected by senses, a world with no math or atoms or germs. Things don’t move according to laws of physics there: they are moved only by the things we’ve seen moving them, or things we thought, or feared we saw. But perception misses much. Infrared and ultraviolet, infrasound and ultrasound come readily to mind, but only as first examples. Even our most direct experiences of the world are already heavily modified, and we can only know the reality that’s represented in the mind, correctly or incorrectly.
     It’s a useful oversimplification that the human neocortex is a recent overlay on the old mammal brain, which in turn overlays a reptilian brain. We can look at the geometrical expansion of capabilities and potential operations from old to new, and note a parallel geometrical constriction from new to old, way down to where the fight-or-flight, emotional, and motivational lives are lived. To say that the older brains are simpler is of course a relative thing: there’s nothing simple there. But the human neocortex is all but infinite in its potential and requires both limitation and management. It’s best used for what evolution kept it around for doing: to look at options, and run vicarious trial-and-error scenarios, not to take every least bit of data seriously. It doesn’t hurt to be selective, to judge what goes on high in the head, to unlearn on purpose, to dismiss nonsense, and avoid confusing the older, simpler parts of the brain with endless gibberish. The old limbic system appreciates this and life is lived more calmly.
    Dual process theory posits two different pathways for a general thought to arise, or two distinct systems which produce a thought. System One is implicit, preconscious, preverbal, and built from remembered associations and contexts. It reacts quickly to stimuli, with a greater degree of affective or emotional response and a greater involvement of the limbic system and the older brain. System Two is somewhat more explicit and conscious, more apt to be verbal, more obedient to learned rules of inference, and it regards itself as reasonable, if not downright rational. System One is far older, antedating the species and even the genus. It’s a cluster of evolved adaptations and inherited heuristics. It’s much faster than System Two, but normally not as precise. System Two depends on culture, or cultural learning, and likely didn’t rise to much dominance until Homo erectus, or even H. ergaster. Because of its speed and readiness to respond before the conscious mind gets involved, System One is usually the first responder to an experience, and may generally have some pretty strong and strongly felt opinions on the matter before the conscious mind arrives with its rational skill set. Decisions are frequently already made, and sometimes physical steps have already been taken. The conscious mind, with its vaunted reason, has to play catch-up, and often has to convince a now conflicted being to change its hasty plan, and even the way it feels about things. Thinking is really only done at the surface of things. It’s often done after the fact to rationalize decisions made and paths taken unconsciously. And it’s often too late to be truly useful.
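    For readers who like their theories executable, the control flow described above can be caricatured in a few lines of Python. This is a minimal sketch of the theory’s claim about timing and precedence only, not of any neural implementation; the stimuli, responses, and names below are invented for illustration.

        # A minimal caricature of dual process control flow, not a claim
        # about how brains are built. All stimuli and responses are invented.
        import time

        FAST_ASSOCIATIONS = {
            # System One: inherited and conditioned associations, affect included
            "coiled shape in the grass": ("jump back", "fear"),
            "smiling face": ("approach", "warmth"),
        }

        def system_one(stimulus):
            """Fast, parallel, preconscious: a cheap lookup with strong opinions."""
            return FAST_ASSOCIATIONS.get(stimulus)

        def system_two(stimulus):
            """Slow, serial, conscious: costly vicarious trial and error."""
            time.sleep(0.5)  # stands in for deliberation's time and energy cost
            return ("considered response to " + repr(stimulus), "calm")

        def respond(stimulus):
            quick = system_one(stimulus)
            if quick:
                return quick  # decision made before the conscious mind arrives
            return system_two(stimulus)  # System Two plays catch-up

        print(respond("coiled shape in the grass"))    # the fast path answers first
        print(respond("an unfamiliar chess position")) # falls through to deliberation

    The point of the caricature is only the order of operations: the cheap lookup answers first, and deliberation runs only when, and after, the cache comes up empty.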
    Cognitive science is an interdisciplinary field, drawing on psychology, neuroscience, philosophy, anthropology, education, semiotics, linguistics, and computational modeling. It studies the mind and the nature of intelligence. It might be summarized as the study of everything involved in learning and knowing stuff, and in figuring new stuff out. It studies the processes by which nervous systems come to attend new inputs, represent raw information, form integrated perceptions, process new information, compare new representations with old, and transform old information to accommodate the new. It looks for rules that describe the innate behaviors of our brains and augmentations of those behaviors that arise from cultural learning. Per the Stanford Encyclopedia of Philosophy, “The central hypothesis of cognitive science is that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures.” In theory, the science can include everything it should include, such as affect, although in practice the focus tends to be on the cognitive side of knowing. There remain weaknesses in the field, especially in addressing the roles of embodied cognition (sensory and conceptual metaphor), emotion (in relation to relevance, attention, and cognitive inertia), consciousness (as emergent qualia capable of metacognitive supervenience), and the limitations of social-cultural learning (including conformity bias and weak linguistic relativity). And of course it’s still lacking a comprehensive study of anticognitives in the broader spectrum of their forms.
    Cognitive psychology and cognitive neuroscience are progressively narrower terms. Cognitive neuroscience is an interdisciplinary field that explores one full step deeper than cognitive psychology into the living unconscious, the structure of the brain, the processing and recalling of sensations, the ones and zeros of information processing, the pluses and minuses of neurochemical states, and the synergies between these. It’s the study of how psychological and cognitive events emerge out of neural structures and functions. It is a little unfortunate that such a large portion of the information that’s easiest to collect comes from patients who have neurological damage, but now the study of healthy subjects by various means of neural imaging is growing rapidly. The study of these images is even making some room for the self-reported descriptions of subjective states. Buddhists and yogis are coming in quite handy here, since they’ve long been mindful of these states from the inner side and have also developed an extensive vocabulary to articulate them.
    When we speak of brains today, we should be training ourselves to think it a mistake to locate the brain entirely in the head, or to think that its sole function is the processing of data by neural nets in zeros and ones. The brain goes out to the fingertips, to pick up all of our sensory neurons, and its many functions include all the complexities of the organism’s blood-borne chemistry that have effects on our mental states. Western psychology is only now starting to de-marginalize the fundamental roles of the sensory world and affect in our cognition. Of course, fifty years ago the behaviorists wouldn’t even look at subjective states, so there’s some progress. Given these errors, by people who fancy themselves scientific, maybe inquiring within isn’t all that unscientific. At the very least, people who are paying attention to how their minds seem to work might now be consulted more often.
    We have come a long way from phrenology, the old attempt to locate mental functions at specific spots in the brain, as expressed on the scalp. The spots tended to grow into areas, and the areas into lobes, and then people started to see that most mental functions arose from networks of processes happening in different parts of the brain. Eventually, it may be better understood that brain is the noun and mind is the verb, and that verbs must deal in dynamic processes. And emergent verb-like processes will just not reduce entirely to the reductionist’s nouns.
    The computer or cybernetic analog models are used a lot in theoretical neuroscience, but they carry with them some of the more problematic pitfalls of argument from analogy. There is more to mind than the ones and the zeros and the programs that move them around. There remains a quite common vision of an artificial intelligence or AI one day becoming large and complex enough to reach a tipping point and awaken as a sentient being. Sometimes scientists think too highly of information, much as others think too highly of consciousness. Certainly the AI devices will grow ever more proficient at crunching data and solving problems, according to designs and parameters that might even be the inventions or outputs of older AI devices. And one of the persistent design goals here will be to create devices that perform increasingly better in Turing tests, tests of a device’s ability to convince its observers that it’s intelligent and ultimately self-aware. Of course, if you look around carefully, you may notice that human observers can be convinced of just about anything, especially if it conforms to their hopes or expectations.
    It’s still too reductionist to expand the computational concept of mind outwards to embrace our sensations, sense memories, feelings, emotions and imaginings as being further forms of digital information, even if we allow that they are of a different quality. As horrifying as it may be to science, we are probably still looking at synergy and strong emergence as the best terms to name the arising of sentience. This is horrifying because the theory really explains nothing: it merely gives a name to the process and announces the arrival of a new set of rules. Beware of this rabbit hole. The hard problem remains hard. And ultimately, neuroscience will not be able to tell us all that we would like to hear about who and what we are. We may have to go on making up names for experiences that we may not be able to measure or locate in the physical being. But this is alright. This is how Theravada Buddhism and neuroscience can work together. The important thing is that we keep getting better at cognizing in ways that respect the way the brain operates, to develop a healthier relationship between the ideal and the real, a relationship that diminishes delusion and increases self-efficacy.
    Neurochemistry studies how our neurotransmitters, pharmaceuticals, hormones, elucidogens, and other chemicals can affect the functioning of neurons and their networks. The most obvious effects from the subjective perspective are on qualia such as the feeling of our feelings and emotions, the levels of our attentional arousal, and the maintenance of moods and other affective states of even longer duration. In studying addictive behavior, we’re especially concerned with such motivators and reward-system chemicals as dopamine, serotonin, oxytocin, noradrenaline and cortisol. More objectively, neural chemicals also direct a large range of processes such as the outgrowth of axons and dendrites, the rewiring of the brain, neuromodulation, sensory threshold modulation, the growth of new brain tissue and connections, memory regulation, neuroplasticity, and the specialization of neurons. Clearly there is too much of sentience in the activities of chemicals in the brain to reduce it all to digital ones and zeros, even without going down the rabbit hole of strong emergence.
    Thanks are due to neuroscience for bringing the importance of these, our sacred juices, to light, particularly the neuroscientists who emerged out of the psychedelic drug culture with their big new sets of chemistry questions. Our sentience is all about dynamic interaction, not a passive, contemplative recording of ideas on some equally ethereal medium. It is probably not even all digital until you get all the way down to the electron shells of atoms.
    Irritability and plasticity are two of the characteristics distinguishing neurons from other cells. Neurons are subject to changes in ways that do not signal injury. These changes occur on both cellular and macro levels. By birth, we’ve developed a vastly over-connected neural network, with many more possibilities and combinations than we’ll ever use. This over-connectedness is subsequently pruned back by a combination of use and neglect. Gross neural development was once considered pretty much a done deal by age five or so, but this thinking failed: a growing body of evidence tells us the brain continues to change, and retains even more potential to change. The most compelling examples are found in the redeployment of neural tissues to new functions following brain injuries, but there are other intriguing examples, such as the potential to develop echolocation by the blind, brain-to-machine interfaces, chip implantation and the technological development of artificial senses.
    Data from experiments with meditation and neuroimaging suggest that physical reconfigurations of brain tissue can occur in ways that modify our levels of stress and anxiety, attention, levels of confidence, and other processes. Obviously, with each and every recallable memory, something has changed in the brain, however small that change may be. Given this, any old claims that neuroplasticity was an exceptional phenomenon had to refer to larger-scale changes in neural architecture. We will touch upon this in a few places later, suggesting dynamic memory as a descriptive term for the process. Bringing memories, desires, and aversions fully into awareness can augment, alter, or diminish the affective charges they carry by adding new associations such as equanimity or forgiveness. As long as we are practicing mindfulness, the memory or other mental object that we are still attending has yet to be fully experienced. The neural outcome of past events can still be changed.
    A lebenswelt, or life-world, is the world that’s subjectively experienced as the given or naively real world. For closely-related sentient entities, especially within similar environments, comparing multiple life-worlds can lead us to a sort of aggregated consensual world that approaches what some might call objective truth or reality. But there are a lot of conditions and assumptions here, and they don’t really help us out one bit if we’re trying to communicate with a bottlenose dolphin. This dolphin’s brain runs ten times the auditory data that ours does, but only a fifth of the visual. On the whole, his neocortex has about twice the surface area of ours, and is more fissured, but less deeply, and processes about double the overall data. We are worlds apart. He lives in his body in a way that’s sensed much differently: it’s simpler, and needs less computation, despite his extra, third dimension of vertical movement. If I were to give him a human IQ test, he might score at the kindergarten level, or maybe the chimpanzee’s. But if he were to give me a dolphin intelligence test, I would likely score well below squid, and not the clever kind of squid. We build our cognitive worlds partly out of original neural structures (neurognosis) but mostly out of experience that is originally sensual and sensory.
    Embodied cognition is the view that any creature’s mental experience is conditioned on its material form, which represents a cluster of limitations on the way an environment might be experienced were we given other or additional forms. Bat and cetacean echolocation, shark and platypus electroreception, and cephalopod communication with chromatophores are but three examples beyond the bounds of our own embodied cognition. There are creatures who see much farther into the infrared and ultraviolet, although no life form comes even close to sensing the fuller range of the E-M spectrum that our technological sensory extensions investigate. But even the data from the cleverest of our extrasensory devices needs to be translated back into data that lies within what's sensible to us. In other words, the fact that something makes no sense does not in itself make it untrue.
    Our conceptual metaphors, constructed largely of our sense memories or sensorimotor schemas, are the building blocks of much of our thought. Sensory and conceptual metaphors, together with our reasoning from analogies based on configurations presented to us by our senses, form the dramatis personae, stage, set, and theater of much of our cognitive world. And importantly, neuroscience is gradually teaching us with some greater conviction that this is not the whole of the world.

Emergence, Qualia, and Consciousness
    Science has a difficult time playing nice with the inner world. Even psychology, the very -ology of the psyche, had its long behaviorist phase that denied the relevance of the subjective experience. And that's not entirely dead yet. Even some wings of modern neuroscience are following suit, explaining consciousness away as something that isn’t really there, or doesn’t really happen. It may be that there’s an underlying fear on the part of materialists to get anywhere near the top of a slippery slope back into Cartesian dualism, the ghost in the machine. The sense that we’re something that inhabits the brain, perhaps a little spirit-person lurking behind the eyeballs, a true self, or an atman, is known as the homuncular myth. This is easier for us to perceive than seeing ourselves as an ever-shifting coalition and straw poll of the biological processes that produce awareness. It’s still more difficult to think that such ever-shiftingness, combined with our dependence on ephemeral biology, might mean that we don’t really exist at all, except as passing fancies. But this was Buddha’s view of things. Regardless of any ontological status consciousness might claim, it's not what it seems to be, and its continuity is in all likelihood illusory. Nobody is anywhere near solving the hard problem of what it is, no matter what the journalists say.
    With respect to the nature of mind and consciousness, this book will assume a handful of positions that currently enjoy varying degrees of acceptance in neuroscience. Mind and consciousness are the productions of living organisms, and they cease when organisms die. They go to the same place your fist goes when you open your hand, or your lap goes when you stand up. A fist is both real and not. It’s very real when you face a boxing champ. Note that the word productions was used there instead of products. These are names of mental conditions or states. Mind and consciousness are not founding or fundamental properties or elements of the universe. The self, the soul, and the spirit have their origins in the assumptions that they are being perceived or attended as objects. None of this is to say, however, that mind, consciousness, self, soul, and spirit do not enjoy some form of conditioned or conditional reality, or even a real existence. Strict materialists deny this, and claim that these are nothing more than illusions. The nature and arising of mind or consciousness is referred to as the hard problem, for good reason, and there may be no problem harder, in any field. The philosophical position that permits the subjective experience as a conditioned reality is called emergentism and has roots in systems theory. It makes its home between Daniel Dennett’s neo-Skinnerian materialism and the Big Fanciful Wish for spiritual immortality, where false dichotomies are not permitted to go.
    Aristotle first pointed out that the whole is greater than the sum of its parts. When the parts are subtracted from the whole, which can only happen in thought experiments, the remainder is considered an emergent property. It seems to come from nothing, but it arises out of synergy. Much later, J. S. Mill began to develop the idea further: “All organized bodies are composed of parts, similar to those composing inorganic nature, and which have even themselves existed in an inorganic state; but the phenomena of life, which result from the juxtaposition of those parts in a certain manner, bear no analogy to any of the effects which would be produced by the action of the component substances considered as mere physical agents. To whatever degree we might imagine our knowledge of the properties of the several ingredients of a living body to be extended and perfected, it is certain that no mere summing up of the separate actions of those elements will ever amount to the action of the living body itself” (A System of Logic). Further, “Those bodies continue, as before, to obey mechanical and chemical laws, in so far as the operation of those laws is not counteracted by the new laws which govern them as organized beings” (1843). We might consider that there was no such thing as chemistry in the first epoch after the Big Bang, because there were no such things as molecules. Similarly, there was no such thing as biology until life began to take shape and propagate. Both chemistry and biology are emergent. It’s unknown to us whether, given a perfect knowledge of the nature of primordial reality, anyone could have predicted chemistry and biology. Were this possible, their arising would be termed “weak emergence.” With weak emergence, the resultant properties can be predicted or extrapolated from a knowledge of the parts, even if they are greater than the sum.
    The relationships between the emergent levels or orders of reality also follow natural “transordinal” laws. C.D. Broad, in The Mind and Its Place in Nature (1925), clarifies, “A transordinal law is as good a law as any other; and, once it has been discovered, it can be used like any other to suggest experiments, to make predictions, and to give us practical control over external objects. The only peculiarity of it is that we must wait till we meet with an actual instance of an object of the higher order before we can discover such a law; and that we cannot possibly deduce it beforehand from any combination of laws which we have discovered by observing aggregates of a lower order.” Here he’s speaking of strong emergence.
    With strong emergence, the emergent property cannot be predicted from an understanding of the parts, no matter how complete. Strong emergence is going to be a big surprise. Waking up to wonder why we’re here, why anything exists, is just such a big surprise. Qualia is the term used for the strongly emergent properties in the human subjective experience. An example is the “blueness” of the 450-495 nanometer electromagnetic wavelengths when they strike our retinas, or the hotness of 100 degrees Celsius, or the loving-kindness we feel from a flood of oxytocin. Mind or consciousness can be regarded as nothing more, or less, than the sum of all qualia. Qualia cannot be emergent substances, as this is an oxymoron, since the term substance, meaning to support or stand beneath, names the prior conditions for emergence, not the result. The reality of consciousness as something immaterial doesn’t launch us into dualism. A language metaphor might be useful here: verbs are as real as nouns, processes as real as things. And nouns are still less real than the subatomic assemblies that constitute things.
    Qualia have one particular property that distinguishes them from illusion and grants them a place among reality's moving parts. They are able to turn around and have real effects on the “lower orders” of reality which produced them. Biological organisms can be both chemists and physicists. And the human mind, for better or worse, can alter the world that brought it into being. States of mind can turn back and influence the brain. Consciousness is something more than purely epiphenomenal. This property of a result acting on its own causes is known as supervenience. This of course applies to the question of agency or free will. Here again, we have some groups of scientists claiming that mind, as illusion, can have no real effect on the world, and that ultimately everything will be determined by the lower-but-real orders of existence. We are the products of our environment, deluded into thinking we have free will. If we knew biological organisms perfectly, we could perfectly predict our mood swings and the decisions they lead us to. But the fact that these assertions are correct in so many cases fails to prove all cases. Dennett, dismissing both supervenience and qualia, is fond of pointing out (correctly) that there’s no physical manifestation of “I,” no ghost in the machine or little homunculus that witnesses and experiences the goings on in the brain. But if so, we’re still faced with asking what/who, if anything, is experiencing consciousness? While Dennett is almost Skinnerian in his denial of the worth and effects of subjective experience, he does still regard himself as a compatibilist on the subject of free will, holding that determinism and agency coexist, and that whatever lights are on in the brain can somehow regard options and light the way to real choices.
    Emergence is an easy fit with systems theory. The emergent property is the totality of a resultant system less the sum of its parts. In other words, it’s in the synergy. Philip Anderson writes, “Psychology is not applied biology, nor is biology applied chemistry.” At each level of complexity in nature, “entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other.” In other words, new sciences may need to arise to explain new realities. There could be no atomic physics until there were atoms, until some time after the Big Bang. There was no chemistry until there were molecules, some time after stars began to explode. There could be no biology until chemicals learned how to learn. George Gantz, in The How and the Why, tries to explain: “In the turbulent flows of energy and matter, sophisticated forms and structures seem to emerge spontaneously. Water going down the drain starts to spiral. So do galaxies, and sunflowers. This spontaneous ‘something else’ is called emergence. Sophisticated forms and structures emerge in complex systems. This counter-entropic result works by exploiting the ‘running down’ tendency, the entropy, from the local environment to the rest of the environment. So, scientists can now explain how more sophisticated and complex structures emerge in a universe that is, overall, still running down. But they still cannot explain why.”
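    Taken literally, the first sentence of that paragraph gives emergence a back-of-the-envelope notation. This is a mnemonic only, not a measurable equation, and the labels W, p, and E are introduced here for illustration, not the author’s:

        % Mnemonic only: the emergent "remainder" E of a whole W with parts p_i.
        E = W - \sum_{i} p_i
        % Weak emergence: E is predictable from the p_i alone.
        % Strong emergence: it is not, no matter how complete our knowledge of the p_i.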
    Emergence leaves the hard problem no less hard, but it seems to be pretty satisfying to Theravada Buddhists (as paticca samuppada or conditioned arising) and the skeptical neuroscientists (who needn’t see a world of qualia as anything more than what it is, or anything less). To the extent that qualia-like consciousness can affect other parts of the world, it’s plenty real enough, without having to insert it into the world as a primordial cause or to think it some substance that must go on forever. The emergentist model or school of thought successfully skirts the hard problem by not tackling it. Many would call it a kind of philosophical escapism, and it likely is. It’s a philosophical position, not a science. It just says: “Nice problem, but really hard. Maybe we can get to this when we know an awful lot more.” So many efforts to address it just keep it hard, like fluffers on a porn shoot.
    Emergentism assumes that if something can have some kind of effect on reality, then it too must have some kind of reality, even if it isn’t really a substance. Emergentism accepts consciousness as a kind of reality, but not something independent of a material basis and biological processes, or a fundamental property of existence. Something emergent arises out of prior conditions and is dependent upon those conditions for continued existence. Buddha was the first to suggest an emergentist theory, which he called “conditioned arising.” This general idea is also basic to the Zhouyi and the Yixue Bagua’s “Li,” symbolized by fire, which is conditioned on and arises from the log. Dogen wrote, “To carry the self forward and realize the ten thousand dharmas is delusion. That the ten thousand dharmas advance and realize the self is enlightenment” (i.e., we’re consequences first, of inheritance and experience, conditioned and dependent, like flames on a log, and only then are we at the beginning of things). There is, in fact, a great deal of disagreement on this point, and it’s still a minority view. Alan Watts summed it up nicely and chose a side in the process: “You did not come into this world, you came out of it, like a wave comes out of the ocean. You are not a stranger here.”
    Some especially pitiful and specious thinking has gone into assertions that agency or free will is purely illusion, simply based on the idea that mind is less than fundamental. It’s backed by a number of experiments that allegedly constitute proof. In the laboratory, decisions get made by subconscious neural processes hundreds of milliseconds before they reach consciousness. But the conclusion many draw that this disproves agency is a blatant straw man fallacy, because the definition of agency is limited to operations conducted within windows of a few hundred milliseconds and disallows any extended recursive loops between a conscious mind and subconscious neural activity. Refuting this, however, is not the same as saying that most people have free will most of the time, especially when there is so little evidence for that in general human behavior. It does at least allow that an unknown number of us might be capable of agency some of the time.
    Emergent properties can turn back on the more fundamental orders of existence and act causally upon them. We would have no civilization if our mental objects didn’t lead to the making of physical machines. Emergent qualities are something new under the sun, while the world’s fundamental dynamics remain unchanged by their arising alone. Emergence will only be ontologically fundamental to reality within supervenience, whenever it can demonstrate downward causal properties. If this were not a possibility, at least for some of the people some of the time, we would have a lot fewer people able to abandon alcohol, tobacco, heroin, and bank-robbing habits. This offers us a challenge to develop more effective mental technologies or toolkits for self-direction or agency, that we might get a better grip on our behavior and straighten ourselves out on purpose.

Pros and Cons of Ignorance, Delusion, & Self-Deception
    We begin figuring out how to deceive other people at around age three and a half. It requires a little development of a theory of mind and some cognitive control first. A child learns to resist the urge to confess. Our manipulation gets more subtle after that, as we learn to flatter, misdirect, abuse others’ credulity and trust, blame others, and withhold key bits of information. Yet with all this first-hand, working knowledge we develop, we remain surprisingly vulnerable to deception ourselves. And when we don’t, we can go too far the other way, get ruled by our suspicions, and project wicked schemes onto others.
    Self-deception is the acquisition and maintenance of false, distorted, or motivationally biased beliefs held against disconfirming evidence. The implication is that this evidence has been at least dimly perceived and rejected out of hand. The extent to which it’s been perceived may be regarded as a measure of real commitment to the true. Self-deception is motivated, unlike simpler forms of nescience, but this may be habitual, or otherwise largely unconscious, or nearly subliminal. The motivation is likely distinct from conscious intent. It may develop over time with a repetition of thoughts and words, as prisoners may perceive themselves as being a little more innocent every year. False memories can be constructed of this.
    Brainworking is costly. Thinking takes time and energy. Attention, like currency, has to be paid, and it’s usually paid with a return expected on the investment. The brain may come with high levels of innate curiosity, but this isn’t the same as an appetite for excessive cognitive loads. We want to know enough, and just enough in most cases. We want to stop learning when we start to sense or predict diminishing returns for the effort. We want to learn just enough to get by, to meet our anticipated challenges in life, and then rest our tired brains. Woe betide those with ambitions for a full, rich, and rewarding life before death comes a-knocking. We often stop thinking once we get a sense that we have the right idea. This process is innate. It reduces ambivalence and vacillation, and keeps us from getting overwhelmed by our options or choices, which reduces stress. Overthinking and over-questioning can be a bit like driving with the brakes on. Putting a stop to further thought on the matter is only a problem in opportunity costs or benefits foregone. Of course, one of the things we might miss is an imminent threat to life or limb, so there’s a learning curve in noticing when closing the mind becomes maladaptive. Self-deception may reduce both cognitive dissonance and cognitive load and spare the brain valuable calories for more valuable sights than the things no longer seen, but we do run the risks of not seeing things unexpected.
    We should ask how we human beings came to have this ability to fool ourselves in the first place. Self-deception is useful or it wouldn’t have been conserved in evolution. The SEP notes, “Many argue that self-deceptively inflated views of ourselves, our abilities, our prospects, our control, so-called ‘positive illusions,’ confer direct benefits in terms of psychological wellbeing, physical health and social advancement that serve fitness.” Avoiding pain, even with self-delusion, can have a more positive outcome than facing a truth. White lies allow us to spare others unnecessarily hurt feelings. Self-deceived individuals may be more effective at persuading others to help meet their needs, which might serve as a definition of social or political power. Enhanced social acceptance can be a big plus that may or may not fully compensate for the costs of facade maintenance. Self-deception might encourage a faux sense of worthiness or entitlement, but thereby allow one to claim the resources by which they become worthy in fact. A little dishonesty can contribute to evolutionary fitness, survival, or reproductive success. A slightly inflated sense of self-esteem or overly positive self-image can give us the attitude we need to move forward. A fellow might get to mate with the girl of his dreams if he can convince himself and her that size doesn’t matter. This is a bootstrap psychology that has much in common with self-hypnosis. But it may also involve a psychic split, just as we speak to ourselves in the second person in self-hypnosis. We may be giving ourselves happy, positive thoughts and affirmations in the mirror. We pretend to be someone we’re not, at least not yet, and then need to believe in the pretense. We “fake it ’til we make it,” as is sometimes heard in 12-step recovery. Overconfidence, as implied by definition, risks taking this too far.
    Duplicity or deception can be cognitively costly. You may have to feign affect as well as support a representative structure that has no fallback in fact. It’s axiomatic, since Mark Twain at least, that the more you tell the truth, the less you need to remember. Keeping track of the lies one tells, and trying to maintain the plausibility of a fictional narrative as real-world events intrude, is mentally taxing. The fear of getting caught is a source of anxiety, and when it happens, the damage to our reputations can be lasting. For the people who are lied to, the costs of lying are also clear: lies undermine general trust, the most valuable of all social currencies, as well as relationships, organizations and institutions. Individuals can evade responsibility for their misdeeds, take credit for accomplishments that aren’t really theirs, and rally friends and allies to causes that might harm them. But deceivers usually come to believe their own claims to the extent that they work. While it may be the mark of a good mind to entertain contradictory ideas simultaneously, lesser minds can hold contradictory beliefs by partitioning their awareness. This will often take forms like hypocrisy, fundamental attribution errors, or us-them dichotomies. It might involve creating alternative self-schemas and dissociating from one to the other, rendering hypocrites utterly blind to their own hypocrisy. This is more than a little bit common.
    Social and cultural delusions can create ways for communities to cohere. From neighborhood to national scales, we draft and spread mythologies, set goals and objectives, claim missions sacred and secular, and constitute corporations and governments on confabulated ideas. Even the bad ones can hold us together, but their problems are legion. Problems are at their worst when the out-groups are caricatured and used to define what the in-group should be, or when in-group members are redefined as spies for the out-groups. Cultural illusions abound at the village scale. Chapter 22 of the Book of Changes uses flame at the foot of a mountain as a metaphor for cultural nearsightedness (which doesn’t need to be seen as the more pejorative shortsightedness). This chapter’s name refers to ornamentation, or fashion, the small frame of reference and the near-term perspective. This is fine for our unimportant decisions of little lasting consequence, and it’s vital for a cozy and homey sense of culture. But the larger perspective is still a bad thing to lose sight of, particularly as the human population and its civilization grow more dangerous. The fashion and art worlds exist at different scales of reality from the geopolitical world, although the latter often behaves like the former in taking the petty too seriously and losing the bigger picture.
    At the larger state and national scales, we have political and religious ideologies that constrain the universe that’s available to our thought. There are occasions where a large collective like this has every reason to be proud, to be setting an example for other parts of the world. But we develop inflated views of ourselves and our groups, and correspondingly distorted views of those not us. National pride, ripened into patriotism, can be a sickness, confining us to a small psychological playpen in one corner of the world, as befits the lack of emotional maturity that comes with that image. The dehumanization of others unfortunate enough to be born in the out-group can lead to sociopathy. And the utility is even more limited when this thinking causes us to fight wars, whether we win or lose. But to have that important sense of belonging, swallowing lies like nationalism or exceptionalism might be required. On a large scale, a society may come to hold a belief that excuses an agent by denying agency itself, e.g., my childhood or the devil made me do it. A “group mind,” though without being conscious or agentic, can retain and manipulate members by rendering fear, anxiety, intolerance, or insecurity contagious, as well as by threatening punitive measures and expulsion. This may be regarded as a higher order of anticognition that operates within its own socially constructed reality, in which disconfirming evidence is simply not allowed within the universe of discourse. An epiphany or two won’t awaken a whole group from such a delusion. This may require a full-scale revolution or a collapse of the society. These things do happen. Governments fall at a rate of roughly one nation per year, so the lifespan of a nation or dynasty averages about two hundred years. A good portion of these die of their own delusions and exceptionalism.
    Gullibility and credulity are more serious than just a readiness or willingness to trust, to give others the benefit of doubt, or presume them innocent of deception. The ring of truth is not admissible testimony, and a sense of rapport has little necessary relation to truth. Sometimes these traits fail to correct themselves because we learn at a young age that pretending is as good as real. Of course, sometimes we just like to be amazed and astounded. Most of us don’t go to see the magic show in order to figure out how the illusions are done. We don’t want to know the trickster’s tricks. Some time ago, Facebook offered satire labeling for made-up stories and satirical pages like the Onion and ClickHole. Users didn’t seem to like having the joke pointed out beforehand, preferring to be swept along for a while. But with the huge new wave of political and corporate fake news, with its truthiness, this has become a concern, and gullibility grows a little less funny by the day. Poe's law warns us that parody and satire of extreme views can be increasingly mistaken for authentic personal expressions. But many of us will still refuse to give ourselves away with emoticons.
    There are levels of error that we’d have to call harmless, and there are levels in children that we do call harmless, which in all probability do a great deal of harm because of the way minds get built from foundations upward. But we can learn to fantasize without believing in the fantasy, and this can be a creative process that envisions new possibilities. A delusion can still be salutary, as when it might protect us from truths that would make life unlivable, such as constant reminders that we’re all gonna die forever. A simple conceit, like the one bit of reason we surrender in reading or writing science fiction, can open whole new worlds to the imagination. Unsubstantiated conjecture can also serve as hypothesis. Unsubstantiated faith and belief can permit concentration on other things in life and it may add a needed sense of purpose, and the confidence and determination to pursue that, although these rarely need to be as absolute and unshakeable as often assumed. At the global scale, it might help us to maintain some optimism about the future of the human condition and the environment. But with too much false optimism, which seems about as rampant as the denial, an ignorant, deluded population will just continue to overpopulate, metastasize, and consume without vision or regard for consequences.
    By the time we’re grown, our errors are pretty firmly and deeply rooted, our traumas are sequestered out of sight and mind, and our lacunae of ignorance impressively walled off against threats of new input. By this time, self-correction becomes a question of triage, and picking our battles with care. We might prioritize our projects in cognitive hygiene around different varieties of harmfulness, ranging from self harm to social, to environmental or ecological. We might focus on not making the world a worse place to live, and change what thoughts we need to change to achieve that. And within this, we might have levels of care or concern that conform to our conscience and our values. Metaphorically, there’s mother-in-law dirt, which will show up on the white-glove test, and bachelor dirt, which you don’t want to stub your toe on. We just have to ask whether bad or inferior thinking leads to bad or harmful behavior.
    The motivation for self-deception can be anxiety over loss of identity, or a portion of a self-schema, or a clung-to ideology, or a sense of belonging, or a hint of such a loss, or even a hint of anxiety over such a loss. To acknowledge a personal act of cheating may be an intolerable threat to a sense of self-worth, so once committed, the act must be rationalized. It might be negative or fearful motivation that distinguishes self-deception from its more positive cousins, wishful and magical thinking. But however these motives are met, the full scope of the costs of ignorance, delusion, and self-deception is seldom fully understood or calculated. Perhaps the biggest budget line item here is the cost of their defense. We rush to the aid of our errors, as often as not blinded or hijacked by reactive emotions. A firm but illusory sense of identity might give us some significant relief from self-doubt, but in proportion to its firmness, it will need defending every time it’s threatened, including from information and forces that would improve or correct it. The same is true for systems of belief and the confidence they give that allows us to question no further. These become playpens and cages that keep us from exploring the rest of the world.
   
A Duty to Culture and Paying Our Rent
    “When the values that support a moral stance are parochial, it is impossible to reach universal agreement on what is good or bad. The only value that all human beings can readily share is the continuation of life on Earth.” Mihaly Csikszentmihalyi
    Abraham Maslow asserted that we live to satisfy progressively higher orders of needs, with the implication that until our most basic, important, or fundamental needs have been met, it will be hard to dedicate ourselves to anything higher than ourselves. This is especially true when a culture creates a lot of new demands on the lower Maslovian levels of safety and security, belonging and love, and self-esteem. Much of modern first-world culture puts its people on these souped-up treadmills and leaves them there to power the economy. Not many seem able to raise their view above this, or find the time and energy for higher purposes. For all of its many flaws, one truly great concept came out of aristocracy: noblesse oblige, or noble obligation. Where we’ve been well-endowed by the world, there is a corresponding duty to give something back, and with some gratitude, particularly where we’re blessed with leisure. Thomas Jefferson claimed that aristocracy as it’s commonly known wasn’t necessary to the sense of noblesse oblige, and that there was in fact a “natural aristocracy,” of talent and virtuous character, that held that ethic as a matter of course. There are some incentives in our societies. We at least have tax deductions for charitable gifts. Churches ask for tithes, where at least some of the money goes to help others, even without ideological strings attached. Muhammad Ali claimed “Service to others is the rent you pay for your room here on Earth.” And Dave Foreman, of Earth First! infamy, advocates a similar ethic for the sake of the biosphere that gives us life and so much more. Try to leave the world a better place than you found it. To take and take and give nothing back is the very definition of parasite. Meanwhile, the great majority of humankind seems to believe that being born is all we require to have earned or be granted the full complement of human rights and privileges, indeed, all we need to have our full measure of worth. We're entitled by birthright to all we can take, subject only to our own human laws. We are perfect just as we are, as the new agers say. Let it be.
    Some of us, with more of this conscience and noblesse oblige, accept a duty in three areas: to the environment, to society, and to culture. Society and culture are different kinds of ecosystems. But we aren’t born with these duties of conscience. We may need to become mature enough before we see, acknowledge, and accept them, grateful enough to want to give something back, and motivated enough to take them on. We owe much to those giants whose shoulders we stand on, and something even to the big piles of little morons who went before us. The biggest and hardest part of the task is to set a good enough lifestyle example, and inspire others to do the same. A particular challenge is making simple living look more attractive than opulence with all of its toys. Our concern here in this book is with the ecology of transmissible information in the culture. What duty might we undertake to keep this information accurate, aligned with whatever the heck truth might be, on track, and maintained? What more do we still have to learn about misinforming our children, regardless of our good intentions? We are partaking in cultural evolution, not just by creating, but by selecting as well.
    The second law of thermodynamics states that total entropy cannot decrease over time in an isolated system, meaning a system that no information, energy, or matter can enter or leave. Negative entropy, the development of order and in-form-ation, is possible only in local processes which consume energy in maintaining dissipative structures. This is the energy that’s otherwise wasted at energy gradients, like when sunlight strikes the earth and changes form, or metaphorically, when information strikes the mind. Coherent culture is one of these dissipative structures, and of course, in the universe at large, it’s necessarily doomed to the heat death. Keeping information coherent, then, is a local affair that requires work. It requires a great deal more work than letting things go or fall apart. A restatement of this is found in Alberto Brandolini’s Law (2013), or the Bullshit Asymmetry Principle, which says: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.” Negative entropy can only be local, temporary, and relatively small, but this is also the culture that we rely on to make the most of our lives here.
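    Stated in symbols (the standard textbook form, nothing peculiar to this book), the bookkeeping behind that paragraph looks like this:

        % Second law, isolated system: total entropy never decreases.
        \Delta S_{\text{total}} \ge 0
        % A local pocket of order (a dissipative structure) may still run
        % negentropic, provided its surroundings absorb at least as much entropy:
        \Delta S_{\text{local}} < 0
            \quad\text{only if}\quad
            \Delta S_{\text{surroundings}} \ge -\,\Delta S_{\text{local}}

    Brandolini’s law can then be read as a claim about the relative sizes of the work terms behind those two entropy changes: producing disorder in the cultural record is cheap, and paying it back down is not.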
    A line long attributed to Mark Twain, probably apocryphally, runs, “A lie can travel halfway around the world while the truth is putting on its shoes.” In the digital age, misinformation spreads at least ten times as fast as its correction. As anyone on social media should be aware, the battle to correct all of the misinformation out there was lost a long time ago. Populus vult decipi, as went the Roman saying: the people want to be deceived. Even to specialize in correcting the self-help nonsense falsely attributed to Buddha or Laozi would waste a whole lifetime and not make a dent: signal-to-noise would hold steady at around ten percent. Why should we be distrusting ourselves, questioning our own motives, second-guessing our decisions, or challenging our precious self-esteem? We have voices like Thomas Paine’s: “It is the duty of every man, as far as his ability extends, to detect and expose delusion and error.” How about a duty to our own posterity, and our future selves, to learn the right things and not learn the wrong ones before passing our legacies down? We can still bequeath traces of our negentropy, our own cultural contributions, and the examples of our lives.
    That we are all entitled to our opinions, that we all have rights to think and feel what we will, does not confer truth upon anything. Rumor and gossip aren’t entirely pointless: they do have evolved social functions in our predominantly social species. Issues of accuracy aside, they at least keep the group informed of the current state of social behavioral norms, mores, and morals, and of who might be likely to challenge those. The people’s concerns or fears for their reputations will tend to have them acting a bit on the safe or conservative side, to overcompensate for their whims. Similarly, viral memes and memeplexes, ideas and theories gone running amok, still have some entertainment value, and offer insights into what the human psyche wants to see, for those with eyes to see. So where is the harm in believing in UFOs, chemtrails, a flat Earth, astrology, or pseudoscience in general? Do we spend our valuable time just criticizing others, or picking their claims apart, or playing curmudgeon or troll to the true believers? Or do we just shrug?
    What we probably need is a sensible protocol for informational triage. There are certain categories of ignorances and delusions that truly threaten the future of both our species and the biosphere, with its climate and biodiversity. The senseless killing done in both trophy hunting and war makes it clear that the costs of our mental errors are often real and severe, and that they are paid not only by ourselves, but by others, by innocent bystanders, by our heirs and descendants, and by a lot of other defenseless species. Stephen Jay Gould added, “When people learn no tools of judgment and merely follow their hopes, the seeds of political manipulation are sown.” Why should this be our duty, since the dedicated individuals accomplish so little? If we each have our own individual responsibility in maintaining our culture, constraining our government’s abuses or metastatic growth, and protecting our environment for posterity, and only a small fraction of us care to take that responsibility on, where is the justice or fairness?
    We do have this going for us: culture is created by a minority, not by the majority. Only a few of us are needed to teach the teachers of the teachers. It’s the place where Archimedes would take a stand to move the world. And we few are a plucky lot, Loren Eiseley’s “Star Throwers,” at least saving one star at a time. And culture builds, if slowly. The good stuff is still growing, however deeply it might be buried. Despite the tall piles of horse shit we inherit, there still must be ponies in there. Yes, we have to work with poor and prohibited education, childhood adversity and indoctrination, media with agendas serving alien interests, and cultural software that’s generally incompatible with our inherited wetware. But we can create, and that’s still a lot. And we have to keep over-quoting Margaret Mead here: “Never doubt that a small group of thoughtful, committed people can change the world. Indeed it is the only thing that ever has.”



1.3 - Why Critical Thinking Hurts Our Stupid Feelings

  
    Contradiction and Cognitive Dizziness
    Critical Thinking and Cognitive Hygiene
    Stupid Feelings and Emotional Intelligence
    Denial, Relativism, and Limits to Tolerance

“The trouble ain’t that people are ignorant. It’s that they know so much that ain’t so.” Josh Billings

Contradiction and Cognitive Dizziness
    Goshdarnit, another typo. That was supposed to say dissonance. The law of non-contradiction states that the proposition “A is B” is logically incompatible with “A is not B,” or that a thing cannot be both A and not-A. Plato held this as a law in the ideal realm, but not necessarily in the world. Heraclitus is said to have denied it. Aristotle affirmed it as law, and Russell and Whitehead as a theorem. And Korzybski insisted the real problem is with the word “is,” in effect agreeing with Heraclitus.
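    In propositional notation, using standard symbols rather than anything peculiar to this book, the law reads:

        % The law of non-contradiction: a proposition and its negation
        % cannot both be true at the same time and in the same respect.
        \neg\,(A \land \neg A)

    Korzybski’s complaint, roughly, was that “A is B” smuggles in an ambiguous “is”: the is of identity, the is of predication, and the is of existence are three different relations, and the law behaves differently depending on which one a speaker means.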
    We have a reactive homeostatic process in cognitive development that seeks a reduction in uncertainty or contradiction, a lessening of demands on our attention, and a return to cognitive consonance. The changes undergone in this reconciliation amount to what adult learning theorist Jack Mezirow called transformative learning. Social psychologist Leon Festinger developed the first theories of cognitive dissonance in the 1950s. The general idea is that holding two incompatible or dissonant thoughts at the same time is stressful, and this triggers the homeostatic response in our cognitive systems. When we are simultaneously presented with two or more contradictory representations, beliefs, or values, our attention goes here until the conflict is resolved. This may present most often as a rejection of any new information that conflicts with what’s already been learned, with no further inquiry into whether or not the new information is superior. We may have an emotionally charged defensive response or challenge to the new, often causing us to double down on a belief that’s already held, at the expense of more valuable knowledge. It’s an innate drive to reduce dissonance to zero. A lot of the anticognitives we’ll be looking at here might be assignable to specific domains, but this one is a more general process, one of only a few that might span all domains. The human dissatisfaction with experience that doesn’t fit our expectations begins early in life, as the mind seeks comfort in the known, and even derives pleasure in confirmation of getting things right. Infants just a few months old look longer at impossible and unexpected events than they do at the known and familiar. Renee Baillargeon (1994) reports that “we are already primed to notice cognitive dissonance in infancy, apparent impossible events and violations of expectation fascinate babies.” But it isn’t yet stressful or unpleasant. In the earlier stages, it’s intriguing, and to some extent it’s possible to learn how to recapture this way of responding.
    Dissonance often occurs when we ourselves hold, or have already admitted, contradictory beliefs into our database. This is the case with self-deception, hypocrisy, and denial. These contradictions have to be compartmentalized, like keeping the vinegar away from the baking soda, or the gasoline away from the furnace. We have to sort them quickly into different places, or process them in different parts of the brain. Neil Levy writes, “Defenders of the traditional conception of self-deception do not, of course, think that we succeed in lying to ourselves in precisely the same manner in which we might lie to another. Instead, they take self-deception to be an activity engaged in with some kind of reduced awareness. Moreover, they do not assert that the self-deceived believe their claims in precisely the same way that we generally believe our normal beliefs. Instead, they typically hold that the contradictory beliefs are somehow isolated from one another.” It’s OK for me to hate the X-people because I have a different set of reasons, but when you do it, it’s intolerance. When the partitioning breaks down, things explode. The most uncomfortable is the discrepancy between the believed ideal and the real, particularly in the interpersonal domain of self-appraisal. And of course we cleave to belief in the ideal and deny any real examples to the contrary. Hypocrisy has to remain invisible to hypocrites, and this is why they can’t see it.
    Despite this anticognitive process being something we’re born with, we are still able to learn different cognitive and emotional responses to dissonance. It’s usually possible to turn any sense of stress or discomfort into something more useful, and this will be developed at some length in the chapters on the Metacognitive domain. We can even learn to be intrigued, amused, or entertained by dissonance, like we were as infants. This is well used as a teaching strategy in Sufism, Daoism, and Zen, with both stories and exercises. Dissonance can be enjoyed and embraced as well as feared. This will require a cultivated appreciation of multiple perspectives, frames of reference, or a sense of paradox. As F. Scott Fitzgerald put it, “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.”
    Isaac Asimov wrote, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’” And sometimes we see contradictions only when we come to the end of our prior knowledge. Many of us seem automatically afraid that this is going to require a larger reworking of things we’ve painstakingly learned, so we look for any excuse we can find to reject this new and disobedient idea, this stranger from out of town. The real meaning of the phrase “the exception proves the rule” uses the older meaning of prove as test. The exception puts the rule on trial. Sometimes all we have found is one of the rule’s borders or boundaries, and sometimes we uncover a need to scrap the rule and adopt a whole new paradigm. Sometimes it’s easier to just calmly tell ourselves that the old knowledge ends here where this new knowledge begins.
    Opposite-looking things can exist on the same spectrum and so only appear to be opposing each other. We see this in false dichotomies a lot. Nature and nurture aren’t incompatible opposites: in fact, they are co-conspirators in our coevolution. Fate and free will are also compatible, according to the compatibilists, of all people. Sometimes an apparent opposition arises in paradox, where we are eventually forced to admit that these opposites are the same thing and therefore it’s our perceptions that have the deficiencies. We see this in the particle-wave, space-time, mass-gravity, and electricity-magnetism pairs. Sometimes two apparently dissonant representations simply exist in separate conceptual frames. Often a sudden recognition of this, when the two frames collide, triggers laughter, and comedians use this a lot, as in Steven Wright’s “It’s a small world, but I’d hate to paint it.” Lightheartedness, or Tom Robbins’ neologism erleichda, for “lightening up,” is often the best stance to take when such dissonance rears its head. This doesn’t mean never taking anything seriously, but merely giving the humorous side of things a chance, and perhaps even a right of first refusal. With this, we can develop a self-schema that just eats dissonance for breakfast and doesn’t require us to automatically defend error or reject new knowledge.

Critical Thinking and Cognitive Hygiene
    Discernment can go by many names: fair dinkum, judgment, acumen, intelligence, skepticism, vigilance, sharpness, reasoning, discrimination, incisiveness, and negative entropy, to name a few. Critical thinking subsumes all of these synonyms for discernment, but there’s another cluster of meanings that it doesn’t quite cover, including understanding, mindfulness, sensitivity, panna, prajna, wisdom, and insight. Critical thinking is an objective analysis of representations of ideas and facts to form judgments. It requires an unbiased evaluation of information and evidence. The U.S. National Council for Excellence in Critical Thinking defines critical thinking as the “intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.” But there is a big problem here, and Robert Heinlein nails it: “Man is not a rational animal, he is a rationalizing animal.”
    The Collegiate Learning Assessment (CLA) Project of the Council for Aid to Education made up a list of skills that are important to critical thinking. A diligent student will: “determine what information is or is not pertinent; distinguish between rational claims and emotional ones; separate fact from opinion; recognize the ways in which evidence might be limited or compromised; spot deception and holes in the arguments of others; present his/her own analysis of the data or information; recognize logical flaws in arguments; draw connections between discrete sources of data and information; attend to contradictory, inadequate, or ambiguous information; construct cogent arguments rooted in data rather than opinion; select the strongest set of supporting data; avoid overstated conclusions; identify holes in the evidence and suggest additional information to collect; recognize that a problem may have no clear answer or single solution; propose other options and weigh them in the decision; consider all stakeholders or affected parties in suggesting a course of action; articulate the argument and the context for that argument; correctly and precisely use evidence to defend the argument; logically and cohesively organize the argument; avoid extraneous elements in an argument’s development; and present evidence in an order that contributes to a persuasive argument.”
    Alternatively, Edward M. Glaser, writing in An Experiment in the Development of Critical Thinking (1941), claims that critical thinking “requires ability to recognize problems, to find workable means for meeting those problems, to gather and marshal pertinent information, to recognize unstated assumptions and values, to comprehend and use language with accuracy, clarity, and discrimination, to interpret data, to appraise evidence and evaluate arguments, to recognize the existence (or non-existence) of logical relationships between propositions, to draw warranted conclusions and generalizations, to put to test the conclusions and generalizations at which one arrives, to reconstruct one’s patterns of beliefs on the basis of wider experience, and to render accurate judgments about specific things and qualities in everyday life.”
    To be sure, this particular version of critical thinking has its place in a couple of the cognitive domains. But can you guess already why so many courses in critical thinking wind up being so embarrassingly ineffective, despite providing an impressive toolkit of necessary cognitive skills? As applicable to the broader spectrum of experience, Daniel Willingham (2007) writes, “After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill.” But he is still regarding critical thinking here as constituted from reasoning, cold cognition, making judgments and decisions, and problem solving. He wants to see these skills used to avoid “common pitfalls, such as seeing only one side of an issue, discounting new evidence that disconfirms your ideas, reasoning from passion rather than logic, failing to support statements with evidence, etc.” We should develop strategies like “look for a problem’s deep structure” or “consider both sides of an issue” or “deploy the right type of thinking at the right time.” Besides mentioning passion in passing, he also flirts with a hint of what’s wrong in this statement: “Prior knowledge and beliefs not only influence which hypotheses one chooses to test, they influence how one interprets data from an experiment.”
    I wouldn’t go so far as to say that these skills cannot be taught simply because the way they are usually taught leaves them so useless and ineffective so much of the time. I would say instead that critical thinking, as it’s usually understood, taught, and practiced, is most definitely missing some vital components. These approaches assume that we are rational beings. It’s a hangover from thinking ourselves god-like, creatures made in His image, endowed with powers to make top-down cognitive decisions to operate and manage our meat machines. But rational cognition won’t work well alone. It needs skills for handling the unconscious forces that can undermine and subvert it, the biological forces that permit reason to emerge in the first place, subject to affective approval. The bottom line is that man is not a rational animal, and humans are not rational actors. Critical thinking isn’t going to work unless we get “down and dirty” into what we really are, with all of that blood-borne chemistry and other nasty juices. It’s for this reason that I felt compelled to offer the term cognitive hygiene instead, to cover both critical thinking and all of the affective or emotional self-management that its effective application might require. We have lots of nonsense and crap to clean out.
    Cognitive hygiene can take a few forms. The most effective would be used in vetting new information before it comes to live in our heads. This means identifying incomplete, ambivalent, manipulated, biased, falsified, or irrelevant evidence on the way in, assigning a negative value to it, and choosing not to absorb it. Even here, though, it’s more than just thinking or critical interrogation. We not only have to beware of our own defenses and biases, and our own emotional reactions in support of or opposition to this new input, we also need to watch for signs of unconscious activity and reactivity that have this same agenda. Most of our cultures have found ways to separate the message from the messenger, so the poor fellow will live on to run more messages. This is an ancient and achievable wisdom. But being able to avoid taking any message personally is a lot trickier. We need a much bigger toolkit or skill set than reason or logic can provide, although we still need these as well. We need to know our feelings, and we need the names of our demons. To do it perfectly, we still need tools that psychology hasn’t even invented or discovered yet. No, we’re not going to do this perfectly. Adults can still accomplish a little more than they’re doing. Children can learn to accomplish a little more than that. It’s gonna be done by baby steps, for the species and its culture.
    The second form of cognitive hygiene is introspective, where the things we see may come to resemble a funhouse of mirrors. And when error is finally discovered here, and certified erroneous, it must then be somehow unlearned or overwritten. This is no easy feat, either. Gotta be some kind of Buddha there, and he’s still one of the main guys to go to for advice. Where he shines brightest is in developing precautionary principles to help explorers avoid the pitfalls of reification, or mistaking subjective experience for objective reality. Reification was the besetting weakness of the otherwise excellent yogic introspective tradition that his teaching (or Dhamma) emerged from. But the old ways failed to ask why we would want to see or experience things in certain ways, and what desperate motives we might have. There are several developed methods for digging deeply into our own minds. All of them occur in the metacognitive domain, and those will be summarized when we get there.
    The third form of cognitive hygiene is pedagogic, teaching cognitive hygiene skills to others, and adapting this teaching to age and to the stages of individual cognitive development. Getting these skills into younger minds, the culture’s heirs, is where the hope is, and where it becomes most necessary to expand from critical thinking into cognitive hygiene and critical mindfulness, in order to incorporate what understanding we have of unconscious and preconscious processes, and especially of affect, feelings and emotions. One of the key elements here is that skills be taught in a context with specific and relevant examples. I close Part Two of this work with summaries or proposals for better-developed metacognitive toolkits, for the kids and for the adults.

Stupid Feelings and Emotional Intelligence
“Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” David Hume
    Emotion is the reason teachers of critical thinking skills complain about their programs not working. Most cognition involves emotion, and we ignore that fact only at our peril. The cocktail mixtures of our various neurochemicals drive most of our thoughts and all of our choices between thoughts. Feelings and emotions are what turn our reasoned options into decisions. When we have time to ponder, we select the projected scenario according to which one feels best. Emotions can turn rational beings into troubled adolescents and whiny three-year-olds. But emotion is only a demon to those who need to deny it. Otherwise, there are ways to manage emotions without suppressing them, just like the sorcerers manage their demons: by giving them names, figuring out what they’re good for, and making them run errands. Until critical thinking learns to adapt to the role of emotion and motivated bias in cognitive error, it’s going to remain almost pointless to teach it. We need a more integrated approach than rational analysis and logic can offer. Most of what we want to understand and manage better is known as hot cognition, emotionally charged ideas. But even so-called cold cognition, as in adding two and two, has enough affective content to hold our attention. We need the neurochemistry that considers relevance and awakens us to pay attention to the process. True, there’s not a lot of emotion in arithmetic (except for those to whom it brings tears or terror), but there’s always some feeling there, or else we couldn’t even be conscious.
    We’ve vastly overestimated the roles of thinking and reason in both our inner lives and in our actions. Nearly everybody who’s in the business of figuring out what we are is more dismissive than they should be of the role of emotion in intelligence. Many correctly believe that artificial intelligence will one day surpass human intelligence, and machines may soon even pass our Turing tests with ease. They will soon be able to convince us that they have real feelings as well. But I would prophesy that the believers are in for a big disappointment if they think these machines will also awaken into sentience or consciousness once given that critical mass of structured or patterned information, no matter the voltage, and no matter the reiterations of data to mimic the function of value.
    Our prefrontal cortex squats atop the much older limbic system that’s responsible for the automatic reactions to situations and the emotions that those reactions feel like. The result of this arrangement is that the two systems often offer up different solutions to the same problem. We are wired and primed to jump to immediate conclusions regarding our situations and make any required decisions, although usually the need to respond can wait for a few moments, pending a closer observation of the givens, and either assent or inhibition from the more rational faculties. Sometimes our response just can’t wait, and this explains the persistence of mechanisms that jump to conclusions, and out of the path of charging rhinos. We might sprain an ankle leaping from the path of a perfectly harmless serpent or spider, but maybe we didn’t have time to ponder, or look at the field guide, and that same move may have saved one of our ancestors. “The amygdala in particular is known to be especially involved in perceiving threat and anxiety,” writes Jonas Kaplan. “The insular cortex processes feelings from the body, and it is important for detecting the emotional salience of stimuli. That is consistent with the idea that when we feel threatened, anxious or emotional, then we are less likely to change our minds.” But given more time, we can.
    It’s common among rational types to speak up for dispassion, ἀπάθεια, or apatheia. Such philosophical prescriptions have been seized upon by the neurotic, and the otherwise religious, from the beginning, and twisted into obsessions with purity, apathy, the life of the mind, imperturbable intellect, or the spiritual life. In so many cases, sense and sensuality were demonized in some way. Nietzsche lambasted these types: “‘Spirit is also voluptuousness,’ said they. Then broke they the wings of their spirit; and now it creepeth about, and defileth where it gnaweth.” But it’s probably safe to say that our better thinkers aren’t and never really were advocating numbness or apathy. Contrary to common wisdom, Buddha wasn’t. He spoke of a number of joys, especially those of good companions and the skillful exercise of our perceptive faculties. His non-attachment was from cravings and their discontents, not from feeling. Neither were the Stoics numb, even though they developed the virtue of apatheia. They didn’t think much of our emotional or egotistical overreactions to life’s events, but they still held a valued place (right alongside the rational faculties) for eupatheia, feeling good, and eudaimonia, the enjoyment of thriving or living well. Their objections were to dwelling on affective states that throw us off balance, distort our views of the world, lessen the quality of our experience, or muddle up the clarity of our knowledge.
    Deferred gratification is ever a busy research area. This may be the first emotional self-management skill we’re required to learn in life. And we still rarely have it perfected by our elder years. While affect or emotions underpin our thinking, nobody is saying they underpin our best thinking. They nearly always favor short-term gains over long-term goods, and they discount the future to overvalue our present satisfaction. It’s commonly thought that desires are best suppressed or repressed by concentrating on the higher rewards for waiting, and that the higher value we give to these rewards is what allows us to do this. But it’s usually more a matter of distracting ourselves, or finding something else to pay attention to, to maximize or optimize the meantime. We can also be thinking to ourselves that wanting something we can’t have yet, or something that isn’t ready for us yet, is really just a waste of time.
    Obsession is another trouble spot. We will allow an emotion to rule us and to recycle through our awareness in recursive loops. We will stay in a preoccupied or fascinated state, confusing our feelings with perceptions of truth. We mistake this for authenticity, perhaps justifying the passion with “the heart wants what it wants.” This phenomenon will play a big part in addictive behavior, although it’s only a part of that process. The Right Intention portion of Buddha’s Eightfold Path describes processes of self-intervention that involve or require the deliberate substitution (tadanga) of one affective state for another. This is not the good intention that the road to hell is paved with: it’s the intention to self-correct, not yet to fix the world. Unlike with deferred gratification, simple distraction isn’t the optimum solution because this doesn’t get the problem out by the root. We make it a matter of dignity not to be ruled by the inferior, but to choose a higher state. We replace the craving, ill will, or urge to harmfulness with gratitude, acceptance, or compassion. It sounds simplistic and unworkable from within an obsessive state, but this is a learnable skill that confers some affective self-management.
    Resentment derives from re-sentiment, to feel a thing again, but the word is used for feelings that are unpleasant to feel again. Yet these are memories that we replay, repeatedly, redundantly, repetitively, and over and over again, as though savoring their unpleasantness. Each time we remember them and don’t change the way we feel, the memory goes back even more emotionally charged. That memory is plastic holds the clue to discharging resentment: we face the memory directly, intentionally, while in a better frame of mind. As a metacognitive strategy, we can learn to add understanding or forgiveness to it before putting it away again.
    Emotional hijacking expresses itself most obviously in arguments, where it shows a lack of perspective and a move towards absolutes. If you listen to two people in the midst of a heated argument, you will hear lots of “you Always do this” and “you Never do that.” Emotions can’t be trusted to incorporate a reasonable sense of time. Sometimes and seldom just seem to lack the desired heat or dramatic force. It’s all or nothing, do or die, all in, and going for broke. A shortfall or shortcoming is seen as total failure. This is sometimes called catastrophizing. The old standbys for short-term management remain taking deep breaths and counting to ten. Telling yourself to grow up sometimes works, if you can hear it.
    Our defensiveness in protecting adopted cognitive self-schemas and scripts plays by far the largest role in cognitive error, and produces some of our strongest emotional reactions, even when the stimuli are just hot buttons, triggers and buzzwords. Defensiveness particularly applies to our identity, the ideas we’ve constructed about who we are; our beliefs, the collection of ideas that we’ve painstakingly affirmed as rock solid and which stand as testimonials to our good judgment; and our belongingness, with the things we’ve convinced ourselves are necessary to remain in the groups that give us security. This is the subject of an upcoming chapter.

Denial, Relativism, and Limits to Tolerance
    Denial, as it’s used here, is cognitive intolerance, a refusal to look at disconfirming evidence, or holding a prior judgment that such evidence is false. It’s more relevant to this particular meaning that a proposition be uncomfortable than that it be untrue. Denial is not the same as detachment or distancing. The ostrich with buried head is only a myth (like the frog in boiling water), but it still works as metaphor. Denialists may claim faith as a reason for not looking, and so they are also referred to as true believers. The classic examples of denial include the alcoholic, the addict, and the religious zealot. To these, the unacceptable proposition that they have a personal problem doesn’t require discrediting because it can’t even be seen. Examples of specific kinds of denial are found in both our coping strategies and our defense mechanisms. In a Buddhist context, you might say that denial represents a formidable combination of two of the three unwholesome roots (akusala mulas): aversion (dosa) and delusion (moha), which form the armament and defense of the third, craving or addictive behavior (lobha).
    Denying unpleasant truths and not facing our hypocrisies both reduce cognitive dissonance. It doesn’t matter that disconfirming evidence is or should be overwhelming. In normal operation, denial demands that we reduce the value of one of two conflicting or incompatible propositions to zero. These can also be two conflicting facets of our own character or personality, as in “I hate hatred” plus “I hate Jews.” If both of these are to continue to exist, they must be partitioned away from each other and not allowed to touch, although sometimes simple rationalization can distort or disguise one of these incompatibles enough so that the two can seem to coexist. Willful blindness (or contrived ignorance) is a legal term with a somewhat narrower meaning, holding a person accountable “if they could have known, and should have known, something that instead they strove not to see” in order to escape liability. This is a narrower thing than simply choosing ignorance because it feels better not to know.
    We do a lot to avoid changing the minds that we’ve spent our whole lives constructing. These cognitive structures, self-schemas, and scripts represent an enormous investment in time and effort, and our pain and suffering. The threat of further effort in disassembling these structures (in addition to the discomfort and embarrassment of admitting our errors) is more often than not sufficient to override any promise of improvement. It seems so much easier to stay in line and keep things simple with the devil you know. We have sunk costs we’ll never get back if we just walk away. And then we’re just losers.
    Clearly, the universe of available experience is too big for one mind to comprehend, and cognitive self-limitation, editing, and prejudgment are necessary. What we choose to let live in our heads becomes the stuff of what we think we are. And there is nothing necessarily wrong with choosing what allows us to feel our best. But we are on more questionable ground when filtering out threats and hints of threats to our egos and beliefs, especially to the ideologies with which we’ve identified. These are borrowed or adopted, not our own. But, no surprise, our blindness leaves us blind. It does nothing to diminish the thing we don’t want to see, just as our hatred doesn’t hurt the one we want to hurt. We merely shrink our own worlds down, becoming tiny masters of ever-tinier domains.
    Cultural relativism is the idea that one culture’s opinion is as good as the next, and that a culture’s thinking a claim is true makes it true within that culture. Relativism can either be a useful perspective or an erroneous shortcut to tolerance. Many versions of relativism have strayed a long way into the wrong since perspectivism (which is largely right) proposed that whatever truths we might see depend on our point of view. The error is in expanding this to say that any truth perceived from any perspective is a real truth in its own right. This has led to a democratization of knowledge that implies both that one truth is as good as any other, and that truth might be established by a vote. This belief might be called epistemic equalitarianism. Cultural relativity at its extreme might claim there are no human moral universals, so that an accepted tribal practice of cannibalism is still moral within that tribe, even if that tribe now lives on a ranch in Canada. Another extreme example might see little Johnny claiming two and two are five and being praised for being close, since it’s much more important in his culture that he have self-esteem, earned or not, than that he be right. In an environment where such relativism is accepted, it almost takes someone like the kid in the Emperor’s New Clothes to assert that there are in fact mistakes and errors: duh.
    There are good arguments against having the mind fully open. Too much tolerance is as problematic as having too little. The problems are both with the random stuff that gets stuck in there, and with all the silly stuff that’s just passing through. There’s your gullibility and credulity. One becomes little more than a pump for the circulation of nonsense and a servant of the Heat Death. The fads in philosophy like postmodernism and deconstructionism, like the fads in fashion and art, will come and go the way of hairstyling’s mullet. It may be true that most opinions deserve some form of respect, but this is not because they are right, or even close to right. It’s simply best that we try to give things a second look (re-spect) to understand how they arise and what they arise out of. Error can often be seen as a diagnostic opportunity. After the elephant has been described six times in six very different ways, there is still the elephant, and then somewhere within that puzzled elephant is the experience of being an elephant groped by six fools.
    There are also good arguments for having a very open mind, a large mind, or at least one not stuffed with clutter. This is especially important in finding our common ground as human and living beings. We are all Terrans together: microbes, plantae, fungi, and us animals. Most tribal delusions could use some relativity, and more moral concern for outsiders. The widespread ignorance of our interdependence and interconnectedness is fast becoming a real threat to our future. To absorb this is to outgrow our playpens of ideology, race, and nationality. Further, an open mind may be more willing to look a bit longer at a paradox, and that’s good because a contradiction itself isn’t an invalidation. Sometimes two equal-but-opposite points of view open the way to a higher level of understanding. We don’t argue whether light is a particle or a wave: we ask what’s wrong with how we are seeing things.
    What is there in reality, what is subject to modification according to point of view, what is entirely subjective, and what is patently not true? Sometimes the answers seem easy. Sometimes the easy answers are wrong. Sometimes we have to use the judgment that relativism helps us avoid. And sometimes that looks snarky to others, who judge us for being judgmental. The opposite of denial is acceptance. But, as will likely be said a few times here, acceptance is not the same as approval. We aren’t validating this as the best of all possible worlds, or proposing to let it be, or asserting that anything, especially a human, is already perfect, just as it is. We’re just looking for a bit of terra firma on which to take a stand and make a beginning. Then, if need be, we can start changing things with greater effect.
    Finding the proper limits to tolerance requires us to get over whatever squeamishness we might have about being judgmental or negative, and whatever embarrassment we might have about seeming a Pollyanna, or simply positive. As opposing character traits, negativity and positivity are another false dichotomy. The first reaction for many is to relate negative to unpleasantness and positive to happiness, instead of simply saying no and yes, or relinquishing a position versus taking a position and positing something. In evolution, selection is negative and mutation is positive. The two belong together and evolution wouldn’t work without them both. Meme theory, which is more extended analogy than science, points out that both bad and good ideas propagate themselves in part by their attractiveness. They passively convince hosts to accept them and pass them along to others. Is it the task of the cognitive hygienist to be part of evolution’s selective process, to be an immune response? To be a janitor for the culture? Don’t we need something to mess with the spread of our attractive but bad ideas? If not us, who?



1.4 - Science, and Some Other Ways to Be True
   
    Scientific Naturalism, Mathematics, and Formal Logic
    Informal Logic, Statistics, and Skepticism
    Taxonomies, Categories, Scales, and Matrices
    Analogies, Thought Experiments, and Parables
    Exposition, Narrative, Anecdote, and Anecdata
    Introspection, Phenomenology, and Vipassana Bhavana

Scientific Naturalism, Mathematics, and Formal Logic
    It may be that science proceeds out of a softer, more organic version of the scientific method that isn’t as clearly articulated, a native cognitive heuristic that underlies and suggests the more formalized structure. Infants begin in pre-science with inquiries about the world, and naturally construct theories to account for their observations. They start to take notice even in infancy when expectations are violated. They test their ideas rigorously, as anybody who’s had a two-year-old knows, and they develop a naive psychology that becomes a “theory of mind” to predict the actions and reactions of others. Not being adults yet, children may be somewhat more eager to test assumptions than to prove, validate, or defend them. Cognitive development theory calls this the theory-theory. This gradually becomes more causally savvy as questions of why get answered. Cognitive growth is a continual development and adjustment of theories, schemas, and scripts that tends to proceed through the stages of sophistication and depth described by Piaget and his successors. The point of it all is in learning to predict the future, learning what can be expected to follow from what, whether this be a sensory cue from a dynamic environment or one of our own purposeful actions. If we include getting fed, this may also be the primary point in having any brain at all.
    Acceptance of scientific insight is largely due to its inherent system of error correction, and the proofs expressed in technological success. However, good and useful ideas can be arrived at via a large number of alternate routes. The nature of some of the other routes might imply that an idea is more questionable, but questionable is not really the same thing as dismissible. Propositions don’t need to be scientific or even logical to be true. There is much more to the subject of truth, more to right dinkum thinkum, more to cognition and knowledge, than science. But there are plenty of science snobs who think otherwise. Presently, there is an excess of stress being placed on science, skepticism, and reason, in part to counteract a much larger and frightening anti-intellectual trend driven by fundamentalist religions and shortsighted economic interests. Many of the things that call themselves science today aren’t really science at all. The word has become a rallying point in opposition to inferior forms of thinking, but its use in this way is problematic, and sometimes borders on a religious fervor of its own. Science is only one method of inquiry and verification. Logic, and more broadly, reason, are other means of inquiry. Other measures for what’s true can be a little less certain, or a little more relative. Science is certainly the best way to arrive at any truths we must agree on, but it’s not the only confirmation of the true.
    Scientific naturalism, which regards the world’s events and processes as governed by discoverable natural laws, is considered the reigning heuristic for discovering the true. The Arabic science in southern Europe, a thousand years ago, kept the lights on through the Christian Dark Ages. It thrived because it was thought that nature itself was another form of divine scripture, parallel with the Quran, and worthy of reverent study. It fell apart when it lost this, but by then it had enabled the European Renaissance to arise. Scientific naturalism considers just about everything ultimately discoverable, including the workings of the human mind, and perhaps even an optimum set of human moral standards. There is, in this view, a human nature, even though this might be overwritten by culture. The search for a natural human aesthetic often escapes this dragnet, so some of us make fun of art and fashion, and we won’t watch music award shows anymore.
    The word science will be used in its narrower sense here, as the application of the scientific method and the aggregated body of knowledge that results from that inquiry. Most people don’t really understand what science is. First of all, it’s a method of inquiry, not a system of belief or a collection of beliefs. You know that an author or journalist just doesn’t get it wherever you read “science proves …” or “scientists believe…” The true scientist’s answer to the question “do you believe in evolution?” is “No. I can only accept the theory conditionally, until it can be improved or falsified.” Believing is what religion does, and science snobs. Science uses what’s known as the inductive method: it collects evidential data from the world according to rules of admissibility, and generalizes conclusions from these particulars. The method has a known series of steps, although the number of steps varies with the presenter: 1) ask a question of the world; 2) research what’s been done with this question in the past; 3) develop a hypothesis, one which can, at least in principle, be tested and falsified; 4) make a prediction based on the hypothesis; 5) design and perform an experiment to test the hypothesis, while controlling variables well enough to avoid ambiguous and ambivalent results; 6) record and analyze the data; 7) develop a conclusion as to the validity of the hypothesis, one which is unambiguous and (preferably) not inconsistent with reigning theory; 8) refine the hypothesis as necessary and retest if needed; 9) expose the conclusion to peers for review and critique; and 10) stay tuned, while others are repeating the experiment to check it for replicability and flaws. In science, disproof is as valuable as confirmation, even where it isn’t as welcome. Since the body of science is the aggregate of surviving hypotheses, failure is important in rejecting those that don’t belong. And as long as information supports multiple hypotheses, it’s data, not evidence, and it doesn’t give us a theory.
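    For readers who like to see a process as code, the loop in these steps can be caricatured in a few lines of Python. This is only a toy sketch under invented assumptions: the world() function stands in for nature’s hidden regularities, and every name in it is hypothetical.

        import random

        random.seed(1)

        def world(x):
            # Nature's hidden law, unknown to the inquirer
            return 3 * x + 2

        def experiment(hypothesis, trials=20):
            # Steps 4 through 7: predict, observe, and compare
            for _ in range(trials):
                x = random.randint(-10, 10)    # a controlled input
                if hypothesis(x) != world(x):  # observation disagrees
                    return False               # falsified
            return True                        # survives, conditionally

        candidates = [                         # step 3: falsifiable guesses
            lambda x: 3 * x,
            lambda x: x + 2,
            lambda x: 3 * x + 2,
        ]

        for i, guess in enumerate(candidates, 1):
            verdict = "survives (for now)" if experiment(guess) else "falsified"
            print("hypothesis", i, verdict)

    Note that the surviving guess is never proven: it has merely not yet been falsified, which is the point of step 10.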
    Results obtained by these several steps may be regarded as scientific law if a predicted phenomenon of nature occurs invariably wherever the specified conditions are met, even though laws do not require explanations of why this occurs. Gravity can be a law without knowing what the heck it is. Laws are often expressed mathematically. Scientific theories are sets of propositions which have been repeatedly confirmed inductively through either structured observations or controlled experiments. Observational studies often take the place of controlled experiment in science. While experimental studies create models with limited extraneous factors and variables, observational studies do not, and researchers are more likely to be unaware of hidden or unknown variables. Observation forgoes the advantages of the double-blind experiment. It does have the advantage of less limited data sampling, and more research can be done retrospectively. But even by this definition, such ideas as dark matter and dark energy should not become theories until they can be confirmed by experiment. At present, these merely measure the discrepancy between our models and our observations, and should perhaps still be regarded more as placeholders than hypotheses.
    Both scientific laws and theories are subject to correction, revision, and replacement. Science as a whole adopts a conservative (and even defensive) stance, once ideas have been generally accepted. This is embodied in the process of peer review, which makes it difficult for original ideas to gain entry. The method will also tend to constrain new ideas to admission and acceptance one idea or notion at a time, further gating any more complex theories. This resistance to both change and improvement is a little more problematic in the soft and social sciences, where the popular vote can carry as much or more weight than evidence, and this can take on more ridiculous proportions in academia in general, where some analog of tenure can even trump the popular vote (“when you’re not the lead dog, the view’s always the same”). In any case, peer review tends to be especially unfriendly to any paper with more than one original thought. Max Planck suggested, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This quote has been simplified to “Science progresses one funeral at a time.” Standing up for the more rigorous scientists, Richard Feynman offered, “Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation... As a matter of fact I can also define science another way: Science is the belief in the ignorance of experts.”
    Science is also subject to inappropriate socioeconomic pressures. Besides the innate conservatism of adherents to particular theories (which at least serves a purpose), science has begun to galvanize itself into shrillness in its running battle with religion, nowhere more than in America, becoming more like a religion every day. It has its credentialed celebrity spokespersons sermonizing on television. There remain the ancient problems with the ivory towers in academia, the fads and the cliques. There is the folie à plusieurs of peer pressure in review, but we should never think that the true is determined by popular vote. There are also the social and political pressures of political correctness, to alter lexicons and limit the discourse to be palatable to the masses and their leaders (notably research in drugs, sexuality, racial features, dimorphism, etc.). There is also corporate funding for research, with all of the proprietary strings attached thereto. And there is the corporate funding for education in those fields in which the corporations are invested. The forest conservationists are trained indirectly by the timber companies.
    One of the most serious problems that scientists have in forming their conclusions stems from an inadequate attention to, or education in, informal logic, discussed at some length throughout this work. The numerous informal fallacies are gradually being articulated to address this problem, but it does less good if scientists don’t take time to learn them. To be fair, a lot of the most egregious examples occur in the reporting of scientific papers to lay audiences by imperfectly equipped journalists, overeagerly spreading news of the latest sensational findings. But even at the experimenter’s level, we find plenty of examples of causal fallacies (cum hoc and post hoc ergo propter hoc), false dichotomies, and others.
    Scientific theory can occasionally fail in spectacular ways, as it did with phlogiston and geocentric astronomy. It isn’t all steady accumulation and progress. Thomas S. Kuhn, in his 1962 The Structure of Scientific Revolutions, describes a typical cycle of progress, now known as the Kuhn Cycle. Any scientific discipline starts with Prescience (you are here). This becomes Normal Science eventually, with proponents, champions, and maybe some smugness. Questions get answered in pleasing ways. Eventually, though, expectations may start to get violated and little cracks and flaws start to show. Maybe new entities have to be invented, in violation of Occam’s Razor, or fudge factors and new corrective constants need to be inserted, Spackle for the cracks and Bondo for the dings. This is the part where Isaac Asimov steps in: “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’” This step is Model Drift. These drifts can start to add up, as too many exceptions threaten to disprove rather than delimit the rule. The model is no longer regarded as trustworthy. This step is Model Crisis. The hunt for the dark, missing pieces becomes more desperate, and new experiments run the danger of getting designed to find things that just aren’t there. You don’t want to run experiments while wishing the hypothesis were true, even if you do want a prestigious prize. Now some of the scientists decide to step out of the old box and develop alternative schemes and hypotheses. New candidate explanations start to emerge, often with new lingo that’s mutually unintelligible with the old plan and those who are trying to save it. A Model Revolution occurs once the new ideas account for the data at least as well as the old. Once the new lingo becomes the new lingua franca, a Paradigm Shift has occurred. This completes the cycle as the new paradigm becomes the new Normal Science.
    Around the edges of our narrow definition of science, there are two more fields that are even more trustworthy than science when it comes to reliable conclusions: mathematics and formal logic. Even these two, however, rely on an assumption that any representation they make of a real world is a reliable stand-in for reality. Nikola Tesla had issues with the use of math in physics: “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.” And Alfred Korzybski had issues with formal and Aristotelian logic, particularly with the use of the verb “to be” and the confusion of levels of abstraction. As he put it, “The only usefulness of a map depends on similarity of structure between the empirical world and the map…” and “the map is not the territory.” A little more will be said of both of these later, but not much. It’s important to understand that both mathematics and formal logic are deductive, rather than inductive, and can operate in complete isolation from the external world, developing whole universes of their own, and tautological realities that merely define themselves as true. These can produce hugely improbable and unsupported “theories” like time travel and parallel or alternate universes that bifurcate with every “possible” choice. Both math and formal logic are intricate systems that stand alone and they embody their own means of self-correction that are beyond the scope of this work.

Informal Logic, Statistics, and Skepticism
    Formal logic is deductive in nature: particulars are inferred from general laws, and an argument’s conclusions follow with necessity if it’s structured properly and its premises are true. It’s harder and surer than informal logic. For example, formal logic may use a common propositional form known as modus ponens: If A, then B. A. Therefore B. Or modus tollens: If A, then B. Not B. Therefore, not A. In categorical logic, premises are statements about an item’s inclusion in or exclusion from categories (all S are P, the universal affirmative; no S are P, the universal negative; some S are P, the particular affirmative; and some S are not P, the particular negative). Here you will also find syllogisms, or two-premise deductive arguments. A valid categorical syllogism, found in every logic class ever given, reads: All men are mortal. Socrates is a man. Therefore, Socrates is mortal. Both the categories and the syllogisms can be expressed in Venn diagrams (the intersecting circles). The system gets a lot more complicated than this, and only a few of the common formal logical fallacies will be discussed in Chapter 3.7. Statistical theory, probabilistic inference, Bayesian inference, decision theory, and game theory together comprise another major domain of logic that’s more closely related to formal logic for its reliance on mathematics, even though this domain is inductive in nature.
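    These deductive forms can be checked mechanically, which is part of what makes formal logic “harder and surer.” A minimal illustrative sketch in Python brute-forces the truth table: a form is valid just in case the conclusion is true in every row where all of its premises are true.

        from itertools import product

        def implies(a, b):
            return (not a) or b

        def valid(premises, conclusion):
            # Valid iff the conclusion holds in every row where all premises hold
            return all(conclusion(a, b)
                       for a, b in product([True, False], repeat=2)
                       if all(p(a, b) for p in premises))

        # Modus ponens: If A, then B. A. Therefore B.
        print(valid([lambda a, b: implies(a, b), lambda a, b: a],
                    lambda a, b: b))        # True: valid

        # Modus tollens: If A, then B. Not B. Therefore, not A.
        print(valid([lambda a, b: implies(a, b), lambda a, b: not b],
                    lambda a, b: not a))    # True: valid

        # Affirming the consequent: If A, then B. B. Therefore A.
        print(valid([lambda a, b: implies(a, b), lambda a, b: b],
                    lambda a, b: a))        # False: a formal fallacy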
    The focus here will be on informal logic, applied to the everyday language used in conversation, explanation, and exposition. Informal logic is inductive in nature: general conclusions are inferred from an accumulation of particular instances, and an argument’s conclusions are only supported, or made more probably true, by the structure of the argument and by the soundness of its premises. Sloppy and unsound premises fail to support the conclusion, but they cannot be said to invalidate it. It’s through the vehicle or instrument of ordinary language that we are most commonly deceived. The Stanford Encyclopedia of Philosophy (SEP) offers “We may take a fallacy to be an argument that seems to be better than it really is.” These arguments show a degree of plausibility to the naive, particularly to those unaware of the shaky underlying structure. They’re often found in persuasive rhetoric, debate, journalism, advertising, and propaganda, especially where they exploit affect and cognitive biases. “But what convinces us isn’t necessarily true,” Nietzsche wrote, “It is merely convincing. A note for asses.”
    The informal fallacies are specious inductive logical arguments. Inductive logic extends or extrapolates specific propositions about semblances and patterns into generalized conclusions. In informal argument, true premises or assertions strongly support the conclusion and false premises fail to do so. False premises do not negate the conclusion, however: the “fallacy fallacy” wrongly assumes that if an argument for some conclusion is fallacious, then the conclusion is false. A proposition can still be true even if the logic used to support it is faulty. Validity in informal argument may be described in terms of acceptability, relevance, and sufficiency (abbr. ARS). Generally, an informal argument will begin with a premise or proposition that seeks a pattern in evidence, observation, or testimony, and then jump one step ahead of this supporting information to a generalization that hypothesizes or suggests an explanation, or an accounting for the premise or proposition.
    Argument itself doesn’t mean hostility any more than a problem means frustration. It’s simply a style of discourse that’s distinguished from others like narrative, description, and exposition. Informal logic is included in a more interdisciplinary study of reasoning called argumentation theory, which has developed approaches to argument applicable beyond straightforward logical propositions to include any debate, dialog, dialectic, negotiation, legal argument, deliberation, scientific argument, or rhetorical persuasion, where a conclusion is asserted to follow from premises. In court, informal logical sufficiency becomes “beyond a reasonable doubt.” There is still no perfect proof, unless a perp is caught in flagrante delicto on camera. Informal logic being regarded as distinct from the formal is a recent development, and there is as yet no accepted taxonomy for its structures and types. Fallacies are still being added to the toolkit, and a few are even being added here. So what good is it to know what these fallacies are? To know that an argument is specious doesn’t make its conclusion false. To know that an argument is valid does not make its premises true. This knowledge will be most useful in picking apart specious arguments used to convince the unwary of packaged ideologies, particularly political and religious, or in alerting us to deceptive rhetoric and advertising, or to the ignorance of their proponents.
    Statistics is an inductive branch of mathematics that’s suited to collecting, analyzing, interpreting, summarizing, and presenting large amounts of data. Being inductive, it works with probabilities more than with certainties, and accuracy improves with sample size and representativeness. Statistics are a common tool for refining conclusions from large amounts of mined data, and they have an important role in the scientific method in both experiment design and data analysis. But this is still something other than science, and has broader applications as well. Descriptive statistics uses such averages as medians and means, along with the famous bell curve of probability density, with lesser probabilities measured in standard deviations. Inferential statistics makes inferences about relations between the samples and larger potential datasets by way of extrapolation. These inferences are often causal.
    When statistics are derived from surveys, it’s of crucial importance to also investigate the metrics being used (e.g. why is GNP and not NDP used as a metric in assessing National Happiness? Why include the costs of crime in measures of happiness? Or: why does this analysis of corruption by country not include white collar crime or the legal bribery of legislators by corporate lobbyists?). It’s important to examine the questions being asked and compare them to any alternatives that might be less leading. And it’s important to look carefully at the way the data are presented by way of conclusion. There are a lot of graphic tricks to skew the appearance of conclusions. And there is a lot of trickiness to be found simply in the way statistics are phrased. President Dwight Eisenhower expressed his astonishment and alarm on discovering that fully half of all Americans have below-average intelligence. And just imagine that that half, with the addition of one perfectly mediocre mind, constitutes a democratic voting majority. There are also a number of closely related logical fallacies which may or may not be deliberately employed in statistics. Among the hazards discussed in Chapter 3.7, Logical Fallacies, are anecdotal fallacy, biased sample, cherry-picking data, ecological fallacy, hasty generalization, overprecision, questionable classification, and suppressed evidence.
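    The Eisenhower quip trades on just this kind of phrasing: by definition, half of a population falls below the median, but below the mean only when the distribution is symmetric enough. A few lines of Python, with made-up numbers, show how far the two can drift apart on skewed data:

        import statistics

        incomes = [25_000, 30_000, 32_000, 38_000, 45_000, 1_000_000]

        print(statistics.mean(incomes))    # 195000: dragged up by one outlier
        print(statistics.median(incomes))  # 35000: half the sample sits below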
    Skepticism, like science, is a methodical inquiry and not a system of belief. Its primary tools now are informal logic and a general mistrust of statistical data presentation. And it’s also an attitude, one that might favor cognitive hygiene, discriminating mindfulness, and critical thinking (and perhaps even affective self-management, if it thought about it). Nothing made of words has any inherent or natural rights. These constructs are there to serve someone’s purpose, and this purpose might be misguided, deluded, or nefarious. The more words exist to influence our thoughts or behavior, the greater is our right, and even our duty, to question them, to see whether they have logical or ethical holes, or secret compartments. Skeptical thinking constructs reasoned, logical arguments, and watches out for specious or fraudulent ones. Carl Sagan suggests employing such tools as independent confirmation of facts, debate, the development of different hypotheses, quantification, the use of Occam’s razor, and the possibility of falsification. Although his “Baloney Detection Kit” provides the beginner with a place to begin, the available skill sets and toolkits are a lot more extensive than this, as Part Three will show. Skepticism is distinct from the modern understanding of cynicism. And it differs from denialism, which will tend to deploy fallacies such as moving the goalposts, in order to sow doubt and uncertainty to substitute for undertaking burdens of proof for its own arguments. Reactive cynicism and denialism are every bit as prone to ignorance as gullibility or credulity.
    Originally, Cynicism was an old school of Greek philosophy developed by Antisthenes, Diogenes, and Crates. Cynics advocated a simple, virtuous life, free of the great trappings of civilization like wealth, power and fame. They had a great deal in common with the proto-Daoist Chinese sages Laozi and Zhuangzi. They sought to develop eudaimonia (human flourishing) as did many schools, as well as adiaphora (indifference to troubles), and parrhesia (outspokenness or candor). Because they dismissed so many of civilization’s perks, their not wanting anything came to be seen as not liking anything (to borrow from Castaneda). The modern idea of cynicism represents this later view. This scorched-earth approach to cynicism is a nihilistic negation of everything. The old school might be thought of as skepticism with feeling, a prejudice against the subjective or objective value of anything, unless it can prove or demonstrate its worth. It certainly has its place in confronting things that are already suggesting their worthlessness, such as the great trappings of civilization. The ignorant may be able to dismiss an idea simply because they are mentally impoverished, but those with enough mental skills for a choice can find a happier medium between detachment and saturation, or suspicion and engagement (to borrow Philip Zimbardo’s words).

Taxonomies, Categories, Scales, and Matrices
    Taxonomies, categories, scales, and matrices are all forms of conceptually dividing the world into components, but they can use cognitive processes that aren’t necessarily scientific in the sense used here. Even though, in places, systematic classification is regarded as a branch of science, here it won’t be considered as such, except to the extent it formulates a falsifiable scientific hypothesis or theory. Mental categories may simply be linguistic, semiotic, or semantic. And of course they have a place in formal categorical logic. These categories all presume some form of cognitive pattern or trait recognition.
    Taxonomy is the sorting of a larger pile of ideas into a number of smaller piles, according to specified shared principles or characteristics of the items. It presumes principles that underlie its order, and some value to the sorting criteria. A taxonomy is a scheme developed with such classification. A taxon is a population considered to be a unit. These systems can be multilevel, often taking the shape of either a pyramid or a tree (pyramidal or dendritic), even though a general taxonomy may be simply horizontal, with no multilevel structure. One of the best known is the system of Carolus Linnaeus (Systema Naturae, 1735). This system encompasses the description, identification, nomenclature, and classification of organisms. Organisms are sorted here into kingdom, phylum, class, order, family, genus, and species. The individual differences within a taxon, as between a chihuahua and a Great Dane in the species taxon, may be disregarded on their own level or above. An alternative to this system is cladistics, the sorting of life forms according to common characteristics that first appear in a most recent common ancestor (with convergent evolution discounted). Systems are often dynamic, and change with new discoveries, and sometimes new intermediate hierarchical levels need to be posited, like subspecies and superfamilies.
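    As a data structure, such a scheme is simply a tree, and a unit’s address in it is the path down from the root. The toy sketch below uses the real Linnaean ranks, but the code and the choice of organisms are merely illustrative:

        RANKS = ["kingdom", "phylum", "class", "order",
                 "family", "genus", "species"]

        gray_wolf = ["Animalia", "Chordata", "Mammalia", "Carnivora",
                     "Canidae", "Canis", "Canis lupus"]
        chihuahua = ["Animalia", "Chordata", "Mammalia", "Carnivora",
                     "Canidae", "Canis", "Canis familiaris"]

        # Two units share every taxon down to where their paths diverge
        for rank, a, b in zip(RANKS, gray_wolf, chihuahua):
            print(rank + ":", "shared" if a == b else "divergent")

    The chihuahua and the Great Dane would share even the last entry, which is why their differences can be disregarded at the species level and above.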
    It’s common in the softer sciences to make lists and be content with what sounds most plausible. If these lists are remembered by readers and students, they will get stuck in the discipline until they prove no longer useful and get themselves repudiated. Psychology is famous, or infamous, for its linguistic taxonomies and classifications. While these cannot all be said to draw lines in arbitrary places, and usually they refer to functions that two communicators of the language can agree upon, their function is still primarily linguistic. “Id, ego, and superego” can enable a real conversation that may or may not lead somewhere. During the height of the behaviorist fad, psychologists were fond of defining their field as the study of behavior. But they seemed blissfully unaware that psychology, too, was behavior, and specifically, a languaging behavior. Much of the discipline of psychology simply parses the human experience or psyche into psychological concepts, often taking continua and dividing them into arbitrary numbers of sections. Sometimes this affirms a sort of linguistic relativity that creates categories as it names them, especially when it comes to words for mental objects, and emotions in particular.
    Where articulation of differences within a taxon is going to be useful, we should add a lower taxonomic level. This is, of course, now disallowed by the politically correct in the case of humans, even without values attached. As our world grows more complex, we’re seeing more unhelpful disarticulations in disciplines and the media. There are articles about the “humans” living 1.5 million years ago that should be saying “hominins” if they want their readers to have higher quality information. Elsewhere, the recent fusion or conflation in the DSM-5 of a number of distinct conditions on the autism spectrum into a single “Autism Spectrum Disorder” (ASD) was a truly boneheaded move by the APA. Both geneticists and neuroscientists will be identifying sample populations as “having ASD” and losing all of the finer detail that would help them tease out separate genes and neural processes for separate Autism Spectrum Conditions (ASCs). But mental health isn’t the point of the DSM-5, which is all about checking the right boxes on insurance forms to expedite payment and comply with drug regulations. This has little to do with science.
    A horizontal taxonomy forms the structure of Part Two of this book, where the totality of our anticognitive processes is sorted into ten domains. This is explicitly not a scientific enterprise. These categories are not meant to be reified as separate parts of the human psyche. The introduction to that section sets forth a number of attempts to do this in the field of cognitive science and some of its early predecessors. An example is Howard Gardner’s Theory of Multiple Intelligences, presented in Frames of Mind, which proposes seven intelligences, later expanded to eight, and then nine. Critics whine, as critics will, that this has too much subjectivity and too little science, so I’m making it clear that my ten domains are simply chapter titles for organizing this material, and that these domain ideas may or may not have some uses elsewhere.
    Scales, in the sense of the word used in music, have been with us since we began to count. For each new number N, there were suddenly not only N things to count: there was also now a universe which could be divided into N kinds of things. This problem was usually taken up first by the local wizard or shaman. A continuous spectrum, such as that of visible light, sound vibration, or the human experience, can be divided by any whole integer, resulting in a scale. This doesn’t mean that this division will make enough sense to hold human attention. There needs also to be a resonance somewhere in the human psyche, some ring of truth or plausibility, as well as enough simplicity for the scale to be remembered. When there is, the scale survives in our lore. For example, in the light spectrum, certain divisions feel more natural. The scale of two divides light into warm and cool colors. Three scales of three may be used: the painter’s primaries (red, yellow, and blue), the subtractive primaries (magenta, cyan, and yellow), and the additive primaries matching the cones in our retina (red, green, and blue). Two scales of four are also apparent: the printer’s black, magenta, cyan, and yellow, and the human eyeball’s black (rods) plus red, green, and blue (cones). But six, not five, is the next most logical division, although some cultures live with fusing the colors of leaves and sky. With sound, the spectrum “divides itself” into specific ranges by laws of physics, at the doublings of vibration frequencies in physical objects such as taut strings. The further divisions within these ranges may seem more arbitrary. That these ranges are called octaves reflects only one of these: the pentatonic and the chromatic scales are two of many other options. But it’s a resonance within our own aesthetic sensitivities, and thus an accord with the neural substratum and physical structures of these senses, which gives a particular scale its longevity in our cultures and languages.
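    The octave arithmetic, at least, is easy to verify. Here is a minimal sketch in Python, assuming the modern equal-tempered convention of splitting the doubling into equal frequency ratios, which is only one of the many conventions noted above:

```python
# One octave is one doubling of frequency; any whole number of steps
# can then divide that doubling. Twelve equal steps give the modern
# chromatic scale; other step counts give other scales.
from math import isclose

def divide_octave(base_hz, steps):
    """Split one frequency doubling into `steps` equal ratios."""
    return [base_hz * 2 ** (i / steps) for i in range(steps + 1)]

chromatic = divide_octave(440.0, 12)        # A4 (440 Hz) up to A5
print([round(f, 1) for f in chromatic])
assert isclose(chromatic[-1], 880.0)        # the doubling comes from physics
```

    The doubling itself is given by the physics of taut strings; the choice of twelve steps, or five, or any other N, is where the cultural resonance comes in.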
    Scales which survive do so when they both cover a spectrum well enough to describe a full class or category of experience, and resonate well enough within our beings that we may use them to communicate these experiences and so create mutual understanding. Seeing scales in terms of their longevity in human culture may tend to prejudice us against the newer but ultimately viable ones, but the uphill struggle to acceptance may also be seen as a good thing, as it is in science. The human mind, particularly when it’s seeking the security of belief, can extract significance from nearly any white noise or set of random events. Many of these can survive for quite a while though, as with the belief that there is meaning in the random assignment of decimal calendar dates to our days (numerology), or in the random sequencing of the letters of the various alphabets (gematria). But number symbolism is different from numerology, and doesn't rely on random sets of data. A criterion to judge the practical worth of this significance, like its utility or effectiveness in communication, should be part of our mental apparatus here. The gods of ancient Greece, who each had their well-defined dominions over the various aspects of human existence, survived not because they were immortal, but because of the unusual clarity of this domain definition and its resonance with the mortals who kept them alive. The relevant spectrum here was the broader range of human experience. An astrologer who didn't believe in astrology, and even said so, could still use the planets and signs of the zodiac as symbol systems to tell him something useful about how humans categorize their experience. The discipline of psychology attempts to accomplish a similar scaling with its terminologies, to cover the ranges of human behaviors, emotions, defense mechanisms, intelligences, and so forth.
    A matrix, in its broadest definition, like the mother for which it’s named, is a source within or from which something else originates, forms, or develops. More narrowly, it’s a way to express a field of possibilities that are products of two or more variables, or the combinations of two scales as described above. These are usually displayed as a grid, with a set of rows and a set of columns for each variable or scale. When the numbers of rows and columns are finite, the possibilities are also a finite set. The earliest of these, which we master in childhood, is the multiplication table. One that we may wish were just a bit simpler and more elegant-looking is the periodic table of the elements. A great deal more material can be placed on a single page in this way, and a large system of concepts or categories can be understood without examining each member of the entire set. It’s also possible to generate meanings or properties for missing elements in empty cells, and then go looking for these in the reality the grid is supposed to represent. We’ve done precisely this with most of the higher atomic numbers on the periodic table.
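    The multiplication table makes the point concrete. A short Python sketch of a matrix as the crossing of two scales, with a nod to the empty-cell trick:

```python
# A matrix as the product of two scales: every cell is one combination.
rows = cols = range(1, 11)
table = [[r * c for c in cols] for r in rows]
for row in table:
    print(" ".join(f"{n:4d}" for n in row))

# The same structure predicts its own missing members: an empty cell's
# value is implied by its row and column, much as undiscovered elements
# were slotted into the periodic table by their group and period.
predicted = 7 * 8                  # the cell at row 7, column 8
assert predicted == table[6][7]    # zero-based indices
```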

Analogies, Thought Experiments, and Parables
    Among our other means of gathering useful and accurate insights, we have extended analogy, a species of correlative thought. Superimposed analogies are compared for how they might inform each other, with one analog perhaps suggesting content for another’s lacuna. Models and analogies are perfectly legitimate heuristics for generating testable hypotheses, but this isn’t the same thing as science, and in fact, this method of inquiry underpins a great deal of magical thinking. Further, it proves nothing, and even constitutes an “appeal to analogy,” or argumentum ad analogiam fallacy (distinct from false analogy). Analogies can be seen as useful, enlightening, accurate, or inspiring of new ideas, but they aren’t science. They can provide hypotheses to be tested by science. These “metaphorical models have conceptual or logical entailments” (Balkin, 2003), meaning they can help us to travel outside the ordinary scope of one half of a nested analogy pair when or where the other half goes further. But this is also where confidence may be weakest. Since extended analogies themselves are neither true nor false, they also aren’t logical arguments, which require premises that can be true or false.
    A good example of extended analogy is the meme theory used to describe cultural evolution, although culture itself is the best analogy for describing evolution; meme theory is but an analogy within that. Transmissible units of culture (whatever can be imitated, like words, ideas, or inventions) are called memes. These are likened to genes. Memes can be studied in their various behaviors in reproduction, propagation, mutation, parasitism, infectiousness, adaptive fitness, competition, defensive strategies, coalition, and vulnerability to selective pressures. The earliest memes built on evolved heuristics and skills. Memes are said to be different from simple facts that do nothing to propagate themselves, i.e. they need some appeal to needs or desires, and our own affect confers an illusion of intent. This is analogous to a con game that doesn’t work if it lacks a self-interested mark. We ask: what does this meme or idea promise? More effective cognition, happiness, or security? There are also metamemes, meme systems, or memeplexes, such as ideologies, with whole lexicons of their own, and their own sociopolitical missions, goals, and objectives.
    Memetic or cultural evolution is both Lamarckian and Darwinian. The mutations arise out of miscommunication, accidental adaptations, creative invention, and interdisciplinary cross-fertilization. There is more horizontal meme transfer, and there is more convergence in memetic evolution than in genetic. The truth of a meme might be a factor in selection, but it isn’t always as important as its attractiveness. The cultural system has its evolutionary analogs in invasive species, heterosis or hybrid vigor, resilient diversity, cultural boundaries, and speciation events as schisms. Some memes need to be challenged, or have enemies, in order to survive or stay vibrant: the meme of war propagates itself via retributive sentiment. In aggregate, these are the genomes of culture. The gene pool is the great library, mass communication, recorded databases, and other “pattern buffers.” There are, of course, books about the “science of memes,” but this really isn’t science, even if defined loosely, despite this being a cognitive tool that a scientist might use.
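    The machinery of this analogy can even be run as a toy. The following Python sketch is emphatically not a “science of memes,” and every name and number in it is invented; it only illustrates a copying rule in which appeal, not truth, drives selection:

```python
# Toy memetic selection: memes are copied in proportion to their appeal,
# while their truth value plays no role in the copying rule at all.
import random

random.seed(1)
memes = [
    {"name": "sober fact",      "true": True,  "appeal": 0.2},
    {"name": "flattering myth", "true": False, "appeal": 0.9},
    {"name": "useful recipe",   "true": True,  "appeal": 0.6},
]
population = [random.choice(memes) for _ in range(1000)]
for _ in range(20):                        # generations of imitation
    weights = [m["appeal"] for m in population]
    population = random.choices(population, weights=weights, k=1000)

counts = {m["name"]: 0 for m in memes}
for m in population:
    counts[m["name"]] += 1
print(counts)    # the flattering myth comes to dominate the final mix
```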
    Another commonly used extended analogy likens the human brain to a computer. Our sensorimotor processes are the peripherals; macro-level neurological structures, the wetware, form the hardware. Native cognitive processes act as an operating system, and our learned cognitive processes as software. Neural reuse handles patches and workarounds. Working memory is our RAM, and long-term memory is the hard drive. But there is also no clearer illustration than this of why analogy fails as science. The brain isn’t really binary, and affect, particularly when it’s self-aware, has no counterpart in computers. Running with this analogy as their primary guide has software designers believing that artificial intelligence is going to somehow wake up and smell the coffee, once it has a critical mass of patterned information and the right operating system. But many paths still cannot be a field. AI will certainly become intelligent enough to convince humans that it has awakened, but one has to suspect that such a soup will require more ingredients than water and heat.
    The thought experiment is a useful version of the extended analogy that appeals to our naive realism or common sense. It uses combinations of metaphors derived from our senses and perceptions, and concepts built from those (called sensory and conceptual metaphors, discussed at length later). Some of the more familiar examples come from Albert Einstein, with his trains traveling at the speed of light and shining headlights every which way. As implied, these are experiments carried out in the imagination. Other notable examples are Schrödinger’s Cat, Maxwell’s Demon, and the Chinese Room. We are necessarily limited here to mental objects derived from sensation and perception, or concepts constructed from these, and we are therefore subject to a great many limitations. It’s more than a little bit challenging, for instance, to conjure a perceptual metaphor for something that’s both a wave and a particle, or both space and time, or both electricity and magnetism. If you are trying to envision what it’s like at the very outer fringes of a flood, where the water is one centimeter deep, it will be easy to mentally calculate that that level of torrent won’t sweep buildings away. The Planning Commission won’t get that, though, because they aren’t the kind of planners with imagination, and will still require a costly engineering study. Galileo will still need to ascend the tower of Pisa. The term “hypothetical experiment” is synonymous, especially in law. Although such experiments aren’t science, they can still generate hypotheses for science to test, or suggest alternative theories, or envision solutions to technological design problems, or get others thinking “outside the box,” or simply be convincing enough for present purposes.
    Thought experiments can use our naive causal reasoning, together with all three kinds of explicit or declarative memory (semantic, episodic, and autobiographical). These things are experienced with greater intimacy and familiarity than mathematical formulae and scientific theory. Humans were storytellers long before we were scientists, and naive long before we got all sophisticated. In making use of naive realism, thought experiments and extended analogies can tell a story, and the stories themselves can access the more ancient parts of our being without having lots of educational or logical prerequisites. But there’s a warning to be inserted here. The silliest of myths and fables, when learned as children, can persist well into our adulthood as representations of the truth, and be next to impossible to uproot, regardless of a subject’s intelligence quotient in other domains of life. Obviously, religious scripture can be fraught with difficulties if followers aren’t explicitly trained in distinguishing children’s stories from documentaries.
    On occasion, a religious tradition will have the good sense to tell tales that are explicitly myths, fables, parables, or teaching stories, with no attempt to imply literalness. The Kabbalists, from the radical mystical wing of Judaism, are practiced at this method of teaching. Three other traditions take this a step further and include humor with their tales, with the getting of the punchline often coinciding with the getting of the wisdom embedded therein. These are the traditions of the Sufis (mystics of Islam), Daoists, and Zen Buddhists.
    Other teaching tales are found interspersed throughout our civilization’s database of myth and fable. Many are so “archetypal” in their character as to be applicable lessons in myriad real-life situations, and so they embody a different level of truth than the scientific. Aesop gave us a few of these. Joel Chandler Harris’s story of Brer Rabbit, the Tar Baby, and the briar patch is a good example. The East Indian story of “The Blind Men and the Elephant,” particularly as retold in a poem by John Godfrey Saxe, is another. It’s also impossible not to mention Hans Christian Andersen’s “The Emperor’s New Clothes,” and the anonymous fairy tale of “Goldilocks and the Three Bears.” Any skeptic (or post-modernist philosopher) who can deny that these stories contain a kind of truth and a lot of meaning needs to have their credentials pulled.

Exposition, Narrative, Anecdote, and Anecdata
    Exposition, or expository writing, is explanatory prose. It’s distinguished from rhetoric, which is intended to sway, persuade, or convince. This book is expository (maybe with a dash of inflammatory). Exposition might attempt a comprehensive description or explanation of an idea or theory. It might present hypotheses for others to assess or test. This is different from both science and logical argument, and both scientists and logicians will tend to look down their noses at it, although journalists are permitted to cite them when exposing their theories to the public. Readers are thrown back upon their own skill sets for critical thinking or cognitive hygiene. It’s a case of caveat emptor, let the buyer beware. Because of this, credentials are often demanded, or at least requested. Autodidacts and polymaths have a difficult time providing these, unless pointing to a larger oeuvre will do. There are places in this culture where you aren’t allowed to add two to two without an engineering degree and insurance.
    Even though expository or explanatory writing doesn’t need to put forth any logical arguments, anything remotely close to philosophical or scientific assertion is still subject to analysis using most of the templates we can set up to look for anticognitives. There’s no internal structure to exposition to bolster any claims of truth being made, and conclusions are not required to follow from premises. You do have to be plausible, whatever that means, but you don’t need to convince. Exposition will still be subject to analysis with informal logic and other forms of inquiry outlined at the end of Chapter 1.7, Conditioning, Persuasion, and Ideology. It’s also important to look deeply, when in doubt, for subtext and implication. We can always ask such things as why this speaker or author is saying this stuff, whether there are better explanations, and whether the explanation was motivated by some insecurity. Are they trying to con me, or win me over to some cause? What information are they leaving out, and is this deliberate? Such questioning gains in its importance when exposition and its explanations start to become rhetoric, language intended to have a striking, impressive, or persuasive effect on the recipient. Rhetoric is still shy of proper logical argument, where the premises, if acceptable, more strongly suggest a conclusion, but it does assert the truth of a conclusion. The most frequent marker of rhetoric is an appeal to the emotions in order to persuade. This is regarded as fallacious in logic, even though there may yet be something true in a passionate assertion.
    Narrative has been with us as long as grammar, which is to say, we still have no clue for how long. It’s very likely not much older than Homo erectus. It could be uniquely hominin, but we’re still awaiting much more intelligent studies of cetacean communication, and it may be that some elephants and birds can tell each other stories. Narrative requires no more than ordinary linguistic communication skills, although a more nuanced understanding of what’s being said wants both cultural literacy and some real-life experience to relate this to. But one need not have a well-organized theory of everything. Narrative, even with all the limitations of vocabulary and grammar, is by far the easiest way to represent our life experiences to others, and it’s the easiest way to reconstruct the experience of others in our own minds. Following a narrative allows us to simulate social scenarios, ethical choices, cooperative structures, and problem-solving strategies. Mimicry, along with the neural processes that facilitate it, parallels the functions of narrative, and is possibly ancestral to it. And, of course, we have filmmaking now, appealing more directly to more of the senses, although these are such a production as to require producers, and lots of time and money.
    Except for running accounts of actions in progress, such as those used for teaching, narrative does require an ability to abstract time away from the present, which we hominins have inherited. It also assumes there are situations other than the present one to which the narration might be relevant. Being necessarily linear and sequential, it summons mental representations in ways that parallel living experience, but this also includes narrative flashbacks that mimic our recollections from autobiographical memory. Naturally, the first questions to ask of a story are whether it’s fiction or not, and how much embellishment or confabulation is happening. Are the names merely changed to protect the innocent? We all know by now that there are processes at work in our minds that can completely disable our ability to tell fiction from fact, especially when our anticognitive engines are fired up while we are still very young.
    An anecdote is a little story, an account of an incident presumed or asserted to be factual, normally presented as containing a truth, a general life lesson, or a basis for inference. Anecdotes substitute for episodic memories. The word derives from “things unpublished.” Anecdotes are usually assumed to stand alone, not integrated into a larger sample, pattern, or theory. To the scientific mind, anecdotes are inadmissible as evidence. It’s a sampling error that comes from using a sample of one, which is representative only of itself. When these samples are collected together, as an accumulation of multiple anecdotes, and used in support of a proposition, the collection is derisively called anecdata. Anecdote proposed as evidence has a number of inherent difficulties, particularly related to subjective appraisals, sampling errors, cognitive biases, and hasty generalization. When considered at all, it’s best considered as information rather than evidence. In a lot of cases, however, derision or dismissal will overlook the fact that things which do in fact occur may still warrant an explanation or accounting. There is more to the world to be studied than the bulk that fits inside our boxes. There is more to the bell curve than the bulk that lies between the first two standard deviations. Where do the truly deviant data fit in? That a thing is an anecdote doesn’t make it untrue, it only makes it not science. Other than its being unscientific, there isn’t any justification for dismissing anecdata out of hand. There is only reason to regard it as data rather than evidence.
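    The arithmetic behind that bell curve claim is quick to check, using nothing more than the error function:

```python
# Fraction of a normal distribution lying within k standard deviations
# of the mean, and the remainder left out in the tails.
from math import erf, sqrt

def within(k):
    """P(|X - mu| <= k * sigma) for a normal distribution."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    inside = within(k)
    print(f"±{k} sd: {inside:.1%} inside, {1 - inside:.1%} in the tails")
# ±1 sd: 68.3% inside, 31.7% in the tails
# ±2 sd: 95.4% inside,  4.6% in the tails
# ±3 sd: 99.7% inside,  0.3% in the tails
```

    Nearly a third of the curve lies outside the first standard deviation; even the “95 percent” cutoff leaves one case in twenty out in the tails, where the deviant data live.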
    We run into our biggest problems here when studying the exceptional. What’s exceptional will almost necessarily, and by definition, be anecdotal, or standing more or less alone. It’s oxymoronic to study the exceptional as a group. This of course has the critics wagging their fingers and questioning whether any study of the exceptional can ever be objective science, capable of reliable measurements. Of course it cannot. But science isn’t everything. A dismissive attitude towards the exceptional, as being irrelevant to the norms we seek to study, has in many cases limited our studies of the human mind to the human norms, and all of the boring and disappointing things to be found there. Psychologists don’t seem all that motivated to study the exceptionally well-adjusted, Maslow’s “farther reaches of human nature,” preferring instead to build their database out of disappointing behavior and the malfunctioning of damaged brains. The normal functioning of the normal human brain tends to be mistaken for the very picture of mental health. There is no need for the healthy to get better. You can just get normal and then stop. This has had chilling effects on disciplines such as positive or Eupsychian psychology and research into elucidogens. There is a great mass of anecdata showing the usefulness of elucidogens in treating addictive disorders, PTSD, depression, and other psychological conditions. But the fact that anecdata can be suppressed as evidence is actually the reason that scientific research has been outlawed for so long. It has proved useful to the powers-that-be to keep further research from happening, despite the lives that stay ruined in the meantime. All those anecdotes of success can be dismissed. For another example, neuroscientists studying how the normal brain does numerical computation somehow just don’t see how it’s relevant to look at how an autistic savant multiplies six-digit numbers together in his head in seconds. These are dismissed from the sample, with the sampling error being made in the other direction.
    This obsession with the normal is so little studied that I’ve had to coin a new term here and add another cognitive bias. Normativity bias seeks the norm or center of the bell curve as the most valuable or useful information in a dataset, often ignoring what the exceptions have to tell or teach us. The exceptional is necessarily non-normative. Examples abound in psychology, where human norms are even used as the first measure of mental health, calling to mind the Krishnamurti quote, “It is no measure of health to be well-adjusted to a profoundly sick society.” Examples of this bias can be found in conclusions drawn about the Milgram and Stanford Prison experiments, and others like them, where a small percentage of subjects, let's just say one in ten, refuse to display the disappointing moral and behavioral characteristics exhibited in the norm. These can’t simply be dismissed: they still offer information that should be relevant to the hasty conclusions we draw about fundamental attribution, nurture-over-nature, inequalities of character, and agency. What is it about this ten percent, and can it be taught? Or is the question: can the learned behavior that’s so disappointing be unlearned? Philip Zimbardo, at least, suggests methods of unlearning. We need to stop dismissing the non-normative. Let's get that autistic savant into the fMRI machine and watch him multiply numbers. The normativity bias is consistent with a native heuristic that seeks to internalize observed and inferred norms, and will only be overridden with learning.

Introspection, Phenomenology, and Vipassana Bhavana
    For purposes here, and given the discussion in Chapter 1.2, Emergence, Qualia, and Consciousness, it will be conditionally assumed that the personal, private, subjective, mental world has some kind of reality that’s distinct from the physical, chemical, biological, and neurological realities that precede and support it. It isn’t necessary to assume that this reality extends beyond the skin of living organisms. Qualia emerge out of our organism’s experience, and when those get attended, or minded, consciousness arises, as the sum of the qualia attended. Consciousness will not be assumed to be independent of having contents, or to be independent of the other layers of realities it arises out of. It remains subject to effects from these, according to their laws. It can be affected by ionizing radiation, heat, toxic chemicals, bacterial and viral infections, and neurological disorders. When the body is in dreamless sleep, consciousness no longer exists. Jacob Boehme was asked, “Where does the soul go when the body dies?” He answered, “There is no necessity for it to go anywhere.” It always surprises people to hear that Buddha did not teach reincarnation, which etymologically means “going back into meat.” Instead, he taught rebirth, meaning that consciousness arises again in other beings. This is somewhat harder to deny.
    Introspection provides us with direct, privileged access to our private states of mind, which themselves are qualia. To the extent that the minds of others emerge out of similar neurological processes, we can make some inferences and assumptions in communicating with others about common ground and even potential human universals. We can also make inferences deriving from experiences with other animals, sometimes with less precise communication. As with any body of knowledge, we build on this gradually, and we let go too slowly of what doesn’t work. Our introspective experience is subject to an impressive array of anticognitive processes. Nobody is saying here that inner truth will somehow distinguish itself from inner delusion if we squint in just the right way. Like with everything else, we live and we learn, and then have to unlearn some of that.
    It should perhaps be noted somewhere that even the softer sciences, like experimental psychology, have this thing about being dismissive and looking down stuck-up noses at introspection. Certainly people can’t just diagnose themselves with mental disorders. In many places, you need to be told who you are and what’s really going on in your head by someone with a college degree. But we ought to point out that even in neuroscience, sampling is often done using self-assessment and self-report surveys. Treatment and control groups are often separated or sorted according to introspective judgments. These judgments rely on the comprehension of the words being used and the universality of their understanding, and these will in turn rely on someone's taxonomy of psychological states, often according to what the system's author felt was correct. It would therefore serve science well to take its vocabulary of internal states more seriously and self-consciously than it has in the past.
    The subjective or mental world is a reality of its own. It appears to have some natural laws and rules, just like the levels of emergent existence that precede it. Because the nature of this world is unpredictable using the laws of earlier levels, it requires and deserves its own methods of investigation. This world is most often explored by introspection, and there are tricky things about this. Since it has the same sense of reality to us as the physical world, particularly in naive realism, we can often confuse the two. To take an object of subjective experience and mistake it for a reality in the larger world is called a reification fallacy. With enough drugs, one might reach a state that absolutely confirms that the most powerful force in the universe is love. We can only say for certain that at that time, in that state, in that particular mental universe, that may be true. One may not want to share that in public, though, lest one be called upon to explain how supermassive black holes embrace everything they can out of love.
    Deep reflection is often just that, a reflection of the things we project, and as with mirrors, what we see can get perfectly turned around. Above looks just like below, without looks like within, effects look like causes. The occult philosophies have a dictum to keep the various planes or levels of existence separate. Aleister Crowley understood better than the uninformed that what he called magick was ultimately about altering one’s own state of mind, and then experimenting with whether or not that might have some effects in the larger world. He cautioned his students and readers thus, “In this book it is spoken of the Sephiroth, and the paths, of spirits and conjurations, and many other things which may or may not exist.  It is immaterial whether they exist or not. By doing certain things, certain things follow; students are most earnestly warned against attributing objective reality or philosophical validity to any of them” (Magick in Theory and Practice).
    We can often make use of our reification fallacies in a diagnostic way. We can learn things about ourselves from the things we subjectively project onto the universe at large. The whole universe is our Rorschach ink-blot test. Take the Olympian gods, for example. Ultimately, each of these deities ruled over one or more domains of the human experience, and so these became projections of our own activities within these domains. First seven, and then nine of them became astronomical bodies, which received projections from relatively distinct parts of the human psyche, making them representatives of those parts. Each of these would always be found passing through one of the twelve signs of the zodiac, which received distinct projections of various qualities of human experience. These became like the adverbs for the planetary subjects. The domains of human activity were divided into twelve ideas and projected onto the horizons. These became the prepositions, the where for the subjects and their adverbs. We can take a system like astrology, that has the benefit of thousands of years of development in counseling situations, strip all the fate and other nonsense out of it, and examine what remains to see if it can’t tell us anything about the human psyche.
    Even more useful than looking at our reifications as a diagnostic tool is examining what kinds of states of mind lead to predictable changes in our behavior, whether these are emotional states or conceptual inferences. This is a key to developing self-direction and agency. But not all mental states work in this way, and perhaps very few really do. Agency or will isn’t just a matter of barking verbal commands from the dorsolateral prefrontal cortex down into the older parts of the brain. To simply think we can do this as some god-given gift of free will is just delusion. Unconscious neurological structures have to be coaxed into compliance, and this seems to be a technology that requires a lot of study and practice. We can, for instance, train ourselves slowly out of maladaptive emotional overreactions to triggers and stressors by reexamining the memories that go with those and reassociating them with more exalted emotional states than fear or resentment. It’s a process that requires getting to know ourselves and our own inner truths better. We’ve developed several interpersonal techniques to help us with these processes: guided meditation, psychotherapy, group therapy, think-aloud protocols, and self-report surveys, for example. There is also a wide variety of meditation styles to choose from, probably one for every personal style or bent. And we ought not forget our more ancient shamanic techniques, along with their more progressive modern forms, with particular regard for elucidogens, and other mind-altering substances used to calm or excite ourselves. It’s important to remember that the great majority of our neurological activities and mental processes move along just fine without the spotlight of our attention and consciousness. Sometimes we have to use tricks to bring them closer to the surface, or set them on fire.
    Getting to know our own feelings and emotions may be the most useful aspect of introspection, especially considering that they drive so much of our perception and conceptualization, and move us to adopt ideas not our own. To this end, Chapter 3.3, Emotions and Affective States, attempts a reasonably comprehensive organization of their different forms. As an exercise, though, examining this array should encompass a great deal more than identifying states as they arise, or recalling the times we have felt them. We can play around with adopting emotions on purpose and letting them dissipate again, and with changing them on purpose, and with substituting one for the other, which the Buddha calls tadanga. Tadanga is particularly useful in setting our Samma Sankappa, or Right Intention, in managing craving, ill will, and rage. In the Western therapies, it may be found treating phobias and anxieties. It’s also likely that not many Olympic events have ever been won without some similar affective self-management.
    Phenomenology (the study of phenomena) is an introspective style of thought, or systematic reflection, that examines the properties and structures of experience as well as the consciousness that attends them. It’s sort of an objectification of the subjective. Observing that consciousness is invariably about something, Edmund Husserl referred to the attended as intentional objects. Naming and classifying these objects, which might be any form of capta (qualia), sensations, perceptions, emotions, significations, ideas, etc., is necessarily reductive for purposes of description and analysis, but our many experiences are explored in their uniqueness as well, with our judgments (in theory) suspended.
    Martin Heidegger redirected these practices by restoring the body, being (dasein) and its unconscious and subconscious processes, to an ontological primacy, regarding the consciousness that’s doing the investigating as an effect (i.e., as emergent). Husserl had undergone some mission creep as his version moved on to a “transcendental and eidetic science of consciousness.” Heidegger brought it back into closer alignment with what psychology would become. And this isn’t a whole lot different from what psychology does, at least when it’s not being carefully supervised by experimental research and neuroscience. Psychology, unsupervised, however, is more enthusiastic about slicing and dicing the psyche into its constituent parts and naming them. This is where we get the six basic emotions, the five senses, the hard lines drawn between emotion and cognition, the eight kinds of intelligence, the id-ego-superego triptych, and so on. We need to do stuff like this, of course, if we are going to articulate anything about our subjective states, but aside from the processes arising in distinct functional regions of the nervous system, or using distinct neural networking patterns, these are often lines that can arbitrarily divide continua.
    The Stanford Encyclopedia of Philosophy (SEP) delineates the dimensions of Phenomenology thus: “The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or “horizonal” awareness), awareness of one’s own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one’s movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).” On the plus side, the better understanding we have of the mechanics and natural laws by which our consciousness operates, the better we can make inferences about what it is we’re really seeing. On the down side, let’s see if you can’t guess what it is this list is missing. And that’s in addition to all of those categories that lie entirely outside Phenomenology’s culture and vocabulary.
    Vipassana means insight, and vipassana bhavana is the Buddhist practice for its cultivation. It shouldn’t be surprising that Buddhism takes a central part of the stage here, or that it’s frequently discussed by neuroscientists, even though few people in the West know much of these teachings beyond what the new age, self-help writers reinterpret so poorly. The most original form still thriving is found in the Theravada branch, where Buddhism is known by its real name, Dhamma-Vinaya, doctrine and discipline. In this form, as well as in Chan or Zen, it’s only been mistakenly called a religion. There’s no god, no spirit, and not even reincarnation. It is, rather, a first-person science, a careful and systematic examination of the facts of our existence. Buddha either dismissed or shied away from issues of metaphysics, or pretensions to truth about how the levels of existence are constituted. His premise was that as long as we are insecure about continuing beyond death, as long as we’re dissatisfied, suffering, thirsting, and hungering, we are not going to get a clear picture of reality anyway. We will only get more ignorance and delusion. His was a first-things-first proposition: cure the mental discomfort and understand the mind. It’s a first-person science, like introspection and phenomenology want to be, only developed at length in many thousands of pages of doctrine and practiced for 25 centuries. Not surprisingly, it isn't billed as a quick and easy path to salvation. Nor is it surprising that this is a field of inquiry that attracts skeptics, atheist philosophers, and neuroscientists.
    The final two steps of the Buddha’s Eightfold Way most concern us here. The Way means the way out of the mental mess we’re in, not the way to glory or divinity. These steps are Samma Sati, Right Mindfulness, and Samma Samadhi, Right Concentration. The first is a systematic observational tour through our physical being, our feelings and sensations, our mental processes and activities, and the objects of our thoughts. Such an examination can well include all of the anticognitive processes outlined in their gist in Part Three. These are given in detail to better recognize them as they arise on their own and to seek them out in our memories for a second look. It helps to name our demons if we want to make them run errands instead of letting them terrorize our dreams and thoughts.
    Samma Samadhi, Right Concentration, takes two major forms, one to develop comprehension, and the other to develop discernment. The first is Samatha Bhavana, the development of tranquility and fixed concentration. This exercises and stretches the mind to accommodate greater experience. It’s the attainment of the unitive experience, and letting go of habits of mind that like and dislike, that get us worked up and work us over, that maintain our many illusions about who we are. We try to accept what is, reality unfiltered by our anxieties. Meditations are prescribed with names like the four Formless Absorptions (arupa jhanas): the dimensions of boundless space, of boundless consciousness, of nothingness, and of neither perception nor non-perception. These are emphatically not to be reified into metaphysical truths, as so many do with their projections of expanded consciousness onto the universe as a whole. They are merely to enlarge our mental capacity, gaining some room in there to move around.
    The second form is Vipassana Bhavana, the development of insight by introspection, being unblinkingly watchful, seeing or knowing phenomena for what they are as they arise and disappear, the vision of every specific thing formed as being impermanent, imperfect, and having no independent existence. The insights of vipassana are to be regarded as tools for living skillfully, not states of attainment. Insight isn’t as passive as serenity: it may require a dynamic reorganization of our perceptions, feelings, and cognitions. Critical analysis is permitted, but after something is seen for what it is and not before. An insight that doesn’t get down and dirty and start shifting things around just doesn’t get the job done. This wisdom isn’t attained, it’s lived and practiced. Discernment must be learned first hand, and not given by a teacher, as this passage from the Kalama Sutta clarifies:
    “It is proper for you, Kalamas, to doubt, to be uncertain; uncertainty has arisen in you about what is doubtful. Come, Kalamas. Do not go upon what has been acquired by repeated hearing; nor upon tradition; nor upon rumor; nor upon what is in a scripture; nor upon surmise; nor upon an axiom; nor upon specious reasoning; nor upon a bias towards a notion that has been pondered over; nor upon another’s seeming ability; nor upon the consideration, ‘The monk is our teacher.’ Kalamas, when you yourselves know: ‘These things are bad; these things are blamable; these things are censured by the wise; undertaken and observed, these things lead to harm and ill,’ abandon them. [And] Kalamas, when you yourselves know: ‘These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness,’ enter on and abide in them.”



1.5 - Framing Issues and Far Horizons

    Framing and Perspective

    Narrow-mindedness, Points of View and Perspective
    Nearsightedness, Spatial Framing and Orders of Magnitude
    Small-mindedness, Contextual and Conceptual Framing
    Shortsightedness, Temporal Framing and Time Horizons

Framing and Perspective
“Great wisdom, in observing the far and the near, recognizes the small without thinking it insignificant, recognizes the great without thinking it unwieldy.”  Zhuangzi #17
    The concept of reframing in cognitive psychology has been somewhat underdeveloped relative to its potential. Neither does it seem to make the best use of its own primary metaphor. A common definition from Wikipedia calls it “a psychological technique that consists of identifying and then disputing irrational or maladaptive thoughts. Reframing is a way of viewing and experiencing events, ideas, concepts and emotions to find more positive alternatives.” What this means, and what it should have said, is that reframing is a way of making more conscious shifts in our mental perspectives and frames of reference. Within this lies the idea of cognitive restructuring, which is specifically to rearrange schemas and scripts in order to produce more positive, or at least more productive, emotions, thoughts, and behaviors. This is Pollyanna’s skill for finding reasons to be glad. Working with cognitive distortions constitutes another aspect. These are maladaptive manic or depressive exaggerations that lack rational support. This would be a good place to call them affective-cognitive distortions instead. Paul Watzlawick, in Change: Principles of Problem Formation and Problem Resolution (2011), offers, “To reframe, then, means to change the conceptual and/or emotional setting or viewpoint in relation to which a situation is experienced and to place it in another frame which fits the ‘facts’ of the same concrete situation equally well or even better, and thereby changing its entire meaning.” In other words, altering an interpretive scheme can change either the meaning or the value of facts, without changing the facts themselves.
    To reframe is to place something into a new frame of reference, another context in which perceptions, emotions, thoughts, or actions are discovered or situated, and then interpreted. A frame can be changed without changing the facts it surrounds. It can be seen as a lens or interpretive scheme through which facts are understood. Reframing leaves the facts alone but may well challenge the assumptions we make about them. And when we get a different perspective on facts, their meanings and values can seem to change. Facts may take on greater or lesser importance or value as the frame around them alters their relative scale. A particular obsession might be all-consuming relative to this day, and relatively silly in relation to the course of life as a whole. Reframing, then, is often nothing more complicated than getting some perspective, or seeing the bigger picture, or taking a closer look, or seeing both forest and trees. In a small frame of reference, we may find ourselves in an unfortunate or untenable position, and feeling appropriately miserable. In a larger frame, we may be thinking about what this experience can teach us, and what we can pass on to others afterward.
    While frames provide at least some context for an object, they also isolate it from other contexts at the same time. They act in much the same way as attention, protecting the object from irrelevant distractions as well as threats of disconfirmation. When frames are set in advance of an experience, as we see in expectation, they can anchor us to certain prejudicial assumptions and prime us for specific responses that may have little to do with the facts being brought under consideration. The frame may set or establish a universe of discourse, a set of talking points, or a limited set of allowable references, which sets limits on an entire analysis. Getting stuck in a particular frame can interfere with how the brain processes magnitudes, altering the notion of what things are worth. Paul Thagard (2011) speaks of frame blindness as a “tendency to solve the wrong problem because your mental framework prevents you from seeing the best options and important objectives.” The deliberate manipulation of frames is particularly important in the techniques of cultural persuasion, and is a lot better understood by those who use it than by the subjects it operates on.
    As a skill set, reframing is an ability to examine a perception, emotion, conception, or memory in multiple contexts. This includes frame versatility and perspective shifting. It’s particularly valuable as a metacognitive set of skills. It can give us either a larger mind or relatively less clutter in the one we have. It’s characterized by mental elasticity, and it’s often seen in the company of humor. It’s at ease with paradox and ambiguity, even though it may take these on with resolution in mind. It’s ready to challenge even some widely-accepted premises and other assumptions. The skilled use of frames, scales, horizons, and perspectives is a useful key to appreciating, devaluing, and revaluing with some degree of personal control.
    The Buddha shouldn’t be left out of this discussion either, as he celebrated several of these skills as “beautiful mental functions” (sobhana cetasikas), including a balance of mind (tatramajjhattata), a buoyant agility of mind (kayalahuta), a pliant adaptability of mind (kayamuduta), and a readiness or efficiency of mind (kayakammannata). These depict a mind able to move at will among frames, perspectives, and other mental options.

Narrow-mindedness, Points of View and Perspective
    Narrow-mindedness, as understood here, has a few dimensions to it: the single point of view, the limited angle of view, and one-dimensional thinking. The separate issues related to specialization vs interdisciplinarity and vertical vs horizontal thinking will be taken up elsewhere in this chapter.
    We begin with the first. Limitation to a single point of view or perspective can be a serious defect in our theory of mind, an inability to get outside of ourselves and see things from another’s position. Emotionally, it can signal a lack of sympathy or empathy. The usefulness of multiple perspectives can be analogized in the way our own two eyes work. The fact that each eyeball receives light from a different angle gives us two different pictures of the world. This is called stereopsis or retinal disparity. The brain isn’t troubled by this at all: it uses the discrepancy to generate a perception of depth in the field of vision. Two points of view can also give different meanings to the same object. The hawk and the rabbit have entirely different opinions of that hole in the ground. The same thing happens with our ears. The different signal strengths and sound-wave phases in our binaural hearing make the sense directional, a perceptual dimension we would lack without that. This is even more sophisticated in cetaceans, who hear through their teeth and jaws. The row of teeth on one jaw is set half a space differently from the teeth on the other, further fine-tuning their phase differentiation. On a larger scale, the limitation to one point of view can stand as a good metaphor for a lack of biodiversity, for monoculture, and the genetic implications of incest. The variety doesn’t only add texture and dimension: it also adds resilience and adaptive capability to novel and unexpected situations and environmental insults. Tolerance, or its opposite, is the well-known issue with narrow-mindedness. Unfortunately, it’s still seen as something we ought to have, show, or practice, as some sort of social or cultural duty. We might try role playing to practice getting out of ourselves. But we continue to stereotype and dehumanize those other people. We still learn grudgingly that reducing in-group biases and discrimination improves our situation. But it’s not yet something we know to revel in, something to have an appetite for, something to seek out and seize upon because it makes us better people, or because it furthers our adaptive intelligence.
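    The geometry behind stereopsis, incidentally, is plain triangulation, and a few invented numbers standing in for the eyes will show how disagreement becomes depth:

```python
# Depth from binocular disparity: Z = f * B / d, where f is the focal
# length, B the baseline between viewpoints, and d the disparity (the
# discrepancy between the two images). All numbers here are invented.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# The same scene point, seen by two "eyes" 6.5 cm apart:
for d in (40.0, 20.0, 10.0):       # shrinking disparity, in pixels
    z = depth_from_disparity(800.0, 0.065, d)
    print(f"disparity {d:5.1f} px -> depth {z:.2f} m")
# The smaller the discrepancy between the views, the farther the point:
# the brain turns disagreement between perspectives into a new dimension.
```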
    The limited angle of view constrains our perception to a narrower band of interests. It keeps us from seeing other sides of the issues. A good example is found in what nationalism does to a global or Terran view. To Americans, for instance, the 9-11 attack on the WTC towers was among the worst things that ever happened in all of history. 3000 people died senseless deaths. But the attack on the symbolism took an even greater precedence as the real affront to the narrow-minded American point of view. It was the symbolism that drove the nation insane. America lost the last of its perspective here, and let itself be terrified, thereby losing its “war on terror” before it had even begun. 3000 people died? Globally, 3000 children starve to death every three hours, eight times a day. That number is also nothing compared to the number of civilians that the American military has killed in the years since that event, raging and thrashing blindly about. Meanwhile, the surviving world trade centers want humankind to continue overpopulating the world, starving children be damned.
    The Book of Changes has a chapter titled Inner Truth, or the Truth Within, Zhōngfú. Readers almost invariably seize on their first response to this title and conclude too soon that Within is where the Truth lies. This certainly rings true in the Western narcissistic new age. But, as this book often does, it plays a trick here. Yes, perceiving from deep within ourselves is good, but it also limits us. There is much more to knowing than knowing ourselves. Piglets will have piglet points of view, and fishes will have fishy points of view. And you don’t learn to hunt or fish by thinking like a human. And if the piglets and fishes are euphemisms for sons and daughters, you’ll want to figure out who they are first, before you can parent them properly. The subject of our narrow perception is also taken up in Chapter 20, Perspective, Guān, and this title appears in the name of the goddess of compassion, Guānyīn, which translates as “attending the cries.” She pays attention to beings outside of herself.
    One-dimensional thinking is becoming ever more common as culture gets more complex and people are able to concentrate on only one problem at a time. Yet until now, nobody has identified this as an informal logical fallacy. One-dimensional thought is an argument that concentrates solely or primarily on a single dimension of a multidimensional problem. It can be related to the fallacy of the single cause or causal oversimplification, but it’s trained on solutions or outcomes instead of causes. Complex systems want holistic systems analysis, not reductive or narrow perspectives. The most frightening example is the typical human approach to the overpopulation issue, which will likely look at only a single dimension of the problem, like agricultural production, overconsumption, Ponzi economics, standard of living, wealth distribution, food distribution, women’s education, family planning, etc., leaving the others ignored. Even lay greens or environmentalists will largely neglect the big picture and longer time horizons, to concentrate on a single cause or effect. Whack-a-mole, an arcade game in which players strike toy popup moles with mallets, provides a good analogy to non-comprehensive, piecemeal solutions that result only in temporary gains. The struggle never ends, and the r-strategist moles like it that way.
    There are frequently different hierarchical levels to problems, which might be either defined or distinguished by leverage, efficacy, agency, or levels of abstraction. A 2015 cartoon by Hilary B. Price pictures a happy rat walking atop the walls of a maze and wondering “Why didn’t I think of this earlier?” The human predicament is often like this: we crawl on our bellies through mazes with knee-high walls, believing somehow that it’s forbidden to walk erect. On the same topic, Einstein wrote, “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.”

Nearsightedness, Spatial Framing and Orders of Magnitude
“We are so small between the stars, so large against the sky. And lost among the subway crowd, I try to catch your eye.” Leonard Cohen
    We live in worlds of varying sizes. The size of our own world changes as we move through our days and our years. And it changes from person to person. Some people live in very small worlds and do no harm to anyone, and others live in very small worlds and do massive damage to the larger world they can’t see. Most worlds don’t extend very far beyond personal concerns, either in space or in time. People tend to stay enveloped in their personal lives, meeting needs that never seem to get met, or in their family lives, or their smaller societies and congregations, and only venture out occasionally, relying more on media for news about what else is out there. Or they go exploring on packaged travel tours that drag a big, expensive shell of the familiar along with them to keep from going native, or having their heads hunted by natives. Yet our available worlds have never been larger, and our ability to move around never greater.
    Our first world is a little bag of muscle that gradually becomes unbearably tight, and so we have to move out and onward. We graduate from there to playpens. At some point we walk all the way around the block, or all the way to town or school, all by ourselves, maybe crossing some roads and streets on the way. For most of us, though, it seems like we just move into larger and larger playpens. These can be metaphorical, as with ideologies and religions, or physical, as with nations and patriotic duties thereto. Their boundaries are where we stop growing, and often start fearing whatever lies beyond.
    We ought to do more exercises to keep stretching our minds, reaching for larger scales, for higher orders of magnitude, and larger units of unity and reunification. We’re more than black or white: we’re human. We’re more than human: we’re primates. We’re more than primates: we’re animals. We’re more than animals: we’re life. And we’re more than Americans or Europeans: we’re Terrans. There are a number of well-done videos available free online that answer to the search term “powers of ten.” We should see them all, and see them again. We appear to need reminding of how damn small we are, and how big and clumsy relative to the details, where, apparently, both god and the devil live. Neither would it hurt us to practice the eight jhanas of the Buddha’s Right Concentration, to learn how to flex our infinitudes. These meditations differ from Zazen: they are done with the intent of opening up and enlarging the mind, rather than simply letting go of fixations and clutter.
    We owe much to astronomer Edwin Hubble for getting our minds out past the edges of our own galaxy, and making our infinity so much bigger. Those weren’t nebulae at all: many were galaxies even larger than our own. The photograph “Earthrise,” taken from lunar orbit by astronaut Bill Anders in 1968, marked another important step in the evolution of our spatial sense. Astronaut Edgar D. Mitchell had this to say about the view from up there: “You develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch.’” And then there was plucky Voyager 1, who sent us a snapshot of our home in 1990, from 6 billion kilometers out, an image that came to be known as the “Pale Blue Dot.” Carl Sagan, a Big Picture guy if there ever was one, had this to say about that, clarifying perfectly what we mean here by nearsightedness:
    “We succeeded in taking that picture, and, if you look at it, you see a dot. That’s here. That’s home. That’s us. On it, everyone you ever heard of, every human being who ever lived, lived out their lives. The aggregate of all our joys and sufferings, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilizations, every king and peasant, every young couple in love, every hopeful child, every mother and father, every inventor and explorer, every teacher of morals, every corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of our species, lived there on a mote of dust, suspended in a sunbeam.
    “The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that in glory and in triumph they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of the dot on scarcely distinguishable inhabitants of some other corner of the dot. How frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Our posturings, our imagined self-importance, the delusion that we have some privileged position in the universe, are challenged by this point of pale light.
    “Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity – in all this vastness – there is no hint that help will come from elsewhere to save us from ourselves. It is up to us. It’s been said that astronomy is a humbling, and I might add, a character-building experience. To my mind, there is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly and compassionately with one another and to preserve and cherish that pale blue dot, the only home we’ve ever known.” We miss you, sir.
    Along similar lines, the Buddha, as reported in the Dhammapada, said, “The world does not know that we must all come to an end here. But those who know it, their quarrels cease at once.”
    Distancing, or getting perspective from a larger frame of reference, isn’t escape. In fact, it almost always leads to greater involvement in the near. The Sufis, who live in larger worlds than most, say, “Local activity is the keynote of the dervish path.” Think globally, act locally, someone once said. Think galactically, act terrestrially, someone replied. The French say, “reculer pour mieux sauter: draw back in order to leap better.” But it doesn’t seem to come naturally. We have to train for it. We have to stretch and expand our minds. Sometimes we need to break open our heads. And the media isn’t helping us much at all these days, with its sound-bite appeal to the simpler and stupider minds. I recently read an article in a lay journal, supposedly written for the scientifically literate, informing us that the ancient impact that created our moon threw “tons of material” into space. Tons would fit in the back of a truck. The moon’s roughly 73,000,000,000,000,000,000 metric tons would not. Yet another article claimed that another Yellowstone supervolcanic event like those we’ve had, or like Toba, would likely kill thousands of people. Make that thousands a million times over, or simply billions. It doesn’t help us to think any better when we receive scalar data that’s off by six to sixteen orders of magnitude, six to sixteen powers of ten. We have to start thinking bigger than this; we need bigger numbers. This might also help more of us to grasp the scope of our economic debt and the fraud that got us there.
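    Checking such claims takes only a line of arithmetic: divide the real quantity by the reported one and count the powers of ten. Here is a minimal sketch in Python; the “reported” figure is an assumption, reading the article’s “tons of material” generously as a thousand tons:

    import math

    # Rough figures: the moon's mass is about 7.3e22 kg, and a metric
    # ton is 1,000 kg. "Reported" reads the article's "tons" very
    # generously, as a thousand tons of ejecta.
    moon_mass_tons = 7.3e22 / 1000       # ~7.3e19 metric tons
    reported_tons = 1.0e3

    # Order-of-magnitude error: the difference in powers of ten.
    error = math.log10(moon_mass_tons / reported_tons)
    print(f"off by about {error:.0f} powers of ten")   # ~17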

Small-mindedness, Contextual and Conceptual Framing
“The only people who see the whole picture … are the ones who step out of the frame.” Salman Rushdie, The Ground Beneath Her Feet
    Things appear differently in smaller and larger contexts. Contextual and conceptual framing, as used here, takes the spatial framing, just discussed, as a metaphor for the context of a conceptual schema, or a behavioral script, or an emotional reaction. As said earlier, a frame may set or establish a universe of discourse, a set of talking points, or a limited set of allowable references, which sets limits on the entire analysis. With the framing effect, the same problem will receive different responses depending on how it’s described. Different conclusions are drawn from the same information, or different emotional reactions are felt, depending on how information is presented. You have your context of the forest, and you have your context of the trees, and it’s also OK to have both at once.
    Philip Zimbardo cautions us to maintain our frame vigilance. “Who makes the frame becomes the artist, or the con artist. The way issues are framed is often more influential than the persuasive arguments within their boundaries. Moreover, effective frames can seem not to be frames at all, just sound bites, visual images, slogans, and logos. They influence us without our being conscious of them, and they shape our orientation toward the ideas or issues they promote… . We desire things that are framed as being ‘scarce,’ even when they are plentiful. We are averse to things that are framed as potential losses, and prefer what is presented to us as a gain, even when the ratio of positive to negative prognoses is the same.” Zimbardo’s pet concern is that the framing of an out-group in disparaging terms and dehumanizing ways permits moral disengagement, and even the sociopathic behavior that comes from habitually regarding others as sub-human.
    Specialization within a professional or academic field, or other cognitive pursuit, is a good example of a conceptual frame. These usually develop their own sets of hypotheses, theories, postulates, axioms, and laws. They often have their own lingo, lexicon, or language. Many come with strange sigils and glyphs, and some with secret teachings and initiatory rites, like the Hippocratic Oath. Specialists are proud to say “This is my field. Within this field I’m an expert. I go all the way from this fence to that one over there. They say there’s greener grass on the other side, but that isn’t my concern, and besides, there’s a fence.” But there are reasons for pride in being really good at something specific, and there is a lot of valuable detail denied to the polymath or the jack of all trades: ars longa, vita brevis (art is long, life is short). The term squint is unflattering slang for a person, usually a professional, who can see only what’s directly in front of his nose, or who is limited by a highly constraining frame of reference. Such a specialist suffers from déformation professionnelle, seeing things only through his own professional lens. This is the opposite of the big-picture, outside-the-box figure. Interdisciplinarity becomes a necessary complement to specialization. Higher and more general levels of organization stand over multiple frames, see how they fit together, and have a sense of what belongs or goes where, and when to make the boundaries between those frames more permeable. The grunt who does nothing but grease the chariot’s axles needs someone to connect him with the one who supplies the grease. This would be the General, and that’s why he’s called that.
    Thinking outside the box is perhaps the best known expression for stepping out of bounds to find other solutions to problems. Edward de Bono termed it lateral thinking. While a metaphor of sideways mobility makes enough sense, it’s harder to understand why he chose the contrasting term vertical thinking for conventional in-the-box thought, since vertical can imply jumping to other levels, like metalevels above the box. But there it sits. Thinking outside the box applies just as well to transcending the boundaries of conceptual frames in general as it does to solving problems. As such, it calls the creative process itself to mind. Arthur Koestler referred to reference frames as matrices, and then went on to define creativity in terms of the juxtaposition of two matrices, which he called bisociation. This is putting frames together that were separate before. Extended and nested analogies are two examples, the analog and the thing studied each representing a matrix. The juxtaposition was also important to his understanding of humor, where matrices collide in surprising or incongruous ways. The ability to see a thing simultaneously as lying within multiple frames is illuminating, and new associations are tasty things for our brains to munch on. Perhaps history’s (or legend’s) greatest example of lateral thinking is found in Alexander the Great’s arrival at Gordium in Phrygia, where he was informed that whoever could undo an impossibly complicated knot, tied there long ago, was destined to rule all of Asia. He drew his sword and undid the knot the metalevel way.
    When the box to be thought outside of is one of our parochial traditions, a local culture, a church, a nation, or another of our expanded playpens, it can be a wonderful thing to get out of, especially when those who would keep us in have carefully cultivated some insecurity or anxiety about outreach, a fear of the other, or those other people, or suspicions about their dark designs and animal impulses. We really ought to see for ourselves just how inhuman they are, or whether they might not be a lot more like us than we’re told. Mark Twain wrote: “Travel is fatal to prejudice, and narrow-mindedness, and many of our people need it sorely on these accounts. Broad, wholesome, charitable views of men and things cannot be acquired by vegetating in one little corner of the earth all one’s lifetime.” And we really ought to go native whenever we can, if we really want to see the world for ourselves. Everything just said is also a metaphor for mental, emotional, and cognitive exploration. And both apply to anthropologists who fancy themselves objective and look down their noses at verstehen, the empathetic approach to the study of others. The strictly objective approach does little but drag an alien interpretive scheme into the foreign village. Why bother crossing the great stream at all?
    In a chapter entitled Adornment, Bì, the Book of Changes examines the small-mindedness, or metaphorical nearsightedness, of human culture. This is the dazzling effect of the very nearby, symbolized there by flame (and the eye) down at the foot of the mountain, illuminating the nearby in glory and splendor, but preventing the distant, the long-term consequence, from being seen at all. The book celebrates this closeness, but only in its place, and states quite clearly, “The young noble … clarifies numerous policies, but does not presume to execute justice.” Richard Wilhelm wrongly translates the title as Grace, missing the point and misleading his readers. Fine: dress up in the latest fashion, follow the latest fads in art, dance only the current dances, spend all your money on foolishness, and honestly, enjoy your life this way. There are things that are only true locally, but are true nonetheless. But don’t think you’re living in any real world, and don’t forget about any long-term damage your excesses are doing to that real world, the one our grandkids are set to inherit. We will still need to remember the longer view, beyond all the baubles and bling.
    In polemical political discourse, a particular frame may be seized upon to make a point while another, still larger frame, with some ironic or opposite implications, gets left out of the discussion. A politician may receive a lot of criticism for avoiding service to his country during the Vietnam war. But in a larger frame, the Vietnam war was an unimaginably idiotic thing to get involved in, and avoiding service there might have shown both admirable intelligence and good conscience, provided, of course, that he avoided it for reasons of conscience. Elsewhere, a sitting Senator who stands for Christian family values, opposes all vice, and sponsors legislation to outlaw homosexuality gets busted soliciting a homosexual prostitute. The public is outraged. But there’s a larger frame here. If you’re reading this, you might think that both prostitution and homosexuality should be legalized, at the very least for public health reasons. From this frame, there should still be some outrage, but it should be directed more appropriately at hypocrisy, and most especially at allowing this hypocrisy into government.

Shortsightedness, Temporal Framing and Time Horizons
“Live unknowing of that which your age deems most important. Lay between yourself and today at least the skin of three centuries.” Nietzsche, The Joyful Wisdom

    Our time horizons are the distances we see into the past and the future. In many cases, the opposite of having time horizons is being here now, living in the moment, a popular form of new age cognitive bondage. However, there are cases where living in the moment and going with the flow don’t mean that the moment is limited to now, or that the stream can’t still connect the glacier or cloud with the sea. And time, regarded as spacetime, can still have several dimensions: the moment can be a point of view rather than simply a limit or anchor. We can live our years broadly, widely, expansively, exploring lots of options and choices, and we can live our years both deeply and loftily. So we aren’t just limited to a handful of decades of linear years. We can also live our lives in cubic years.
    We sit in the middle of Time, in a universe that’s only half done, so things could go twice this far. But only a few keep this in mind, and only a few try to see just how big it all is. Over millions of years, small changes accumulate, but this isn’t intuitive. The mountain is still our symbol of steadfastness, not of eroding to plains. Horizons this vast need to be taught and learned, but we can do that. Loren Eiseley wrote, “It gives one a feeling of confidence to see nature still busy with experiments, still dynamic, and not through nor satisfied because a Devonian fish managed to end as a two-legged character with a straw hat. There are other things brewing and growing in the oceanic vat. It pays to know this. It pays to know that there is just as much future as there is past. The only thing that doesn’t pay is to be sure of man’s own part in it. There are things down there still coming ashore. Never make the mistake of thinking life is now adjusted for eternity.” There are things down there still coming ashore. At least remember that.
    Our ability to look back into the prehistoric past has evolved slowly since Lucretius, in De Rerum Natura, first proposed natural selection:
“And other prodigies and monsters earth
Was then begetting of this sort - in vain
Since Nature banned with horror their increase
And powerless were they to reach unto
The coveted flower of fair maturity
Or to find aliment, or to intertwine
In works of Venus. For we see there must
Concur in life conditions manifold
If life is ever by begetting life
To forge the generations one by one.”
    Old English at least had the word dustsceawung, contemplation of the dust, of former civilizations and peoples. Much credit is still due to the Hindus, whose epochs or Kalpas run surprisingly close in numbers of years to those of modern cosmology. But generally speaking, the backward-looking time horizons, in excess of the few piddly millennia that Abrahamic religious thought allows, would have to wait for geology, paleontology, Darwin, and Wallace. Even thinking historically, we are weak, as witnessed by all those things we seem doomed to repeat. And the lessons of history remain more of an art form for the conqueror or survivor. Ancestor worship is seen in several of our ancient cultures, and this has given us ties to a modestly distant past. Unfortunately, the main point of ancestor worship seems to have largely been missed: an encouragement to become worthier ancestors ourselves. Perhaps our own descendants will take up a new pursuit, like ancestor vilification, with curses for our current myopic generations, instead of heartfelt sacrifices and prayers.
    In looking forward, we have a mass of cognitive biases working against a rational view of our own futures, such as problems of duration neglect and hyperbolic discounting, to be discussed later. The corresponding affective or emotional components hardly ever help us, with all their right-nows and nevers. We exhibit foreshadowings of these difficulties as young children, already wrestling with problems of deferred gratification. We will have that one marshmallow now, not the two we’ve been promised for waiting another five minutes. Grownups don’t look much further ahead. Part of the problem seems to lie in an underdeveloped prefrontal cortex (PFC), which is ill-prepared to see remote consequences. But we still have to believe we can train ourselves out of this. Dame Rebecca West wrote, “If the whole human race lay in one grave, the epitaph on its headstone might well be: ‘It seemed a good idea at the time.’” Those who do look ahead are like Cassandras to their fellows. We see things you just wouldn’t believe, and won’t ever.
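    Hyperbolic discounting can be made concrete. In the standard model, a reward of size A delayed by D units of time feels worth only V = A / (1 + kD), where k is an impatience constant. A minimal sketch in Python, with k set steep purely for illustration:

    # Hyperbolic discounting: present value V = A / (1 + k * D).
    # k = 0.5 per minute is an assumed, illustratively steep value.
    def present_value(reward, delay_minutes, k=0.5):
        return reward / (1 + k * delay_minutes)

    now_one = present_value(1, 0)     # one marshmallow now  -> 1.00
    later_two = present_value(2, 5)   # two in five minutes  -> 0.57
    print(now_one > later_two)        # True: we grab the one

Under exponential discounting our preferences would at least stay consistent over time; the hyperbola is what lets the near term loom so large that choices reverse as the moment approaches.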
    The hungry live for their next meal, the recovering addict lives one day at a time, the mortgaged breadwinner lives from paycheck to paycheck, the board members live for the quarterly profit reports. The politician lives for the next election and, big surprise, rarely has any vision longer than that, though he’s busy designing the future, and entrusted with 100,000-year decisions about the disposal of nuclear waste, and the still longer-term extinction of species. In the nearer term, the failure to educate children and provide adequately for the maintenance of civilization’s infrastructure can promise big catastrophes within a generation or two. Most things economic and political are myopic. A Greek proverb has it that “A society grows great when old men plant trees whose shade they know they will never sit in.” But how many of us can think beyond our own little lives and their measly handful of decades? Even those who profess to believe in reincarnation seem as careless as the rest about the quality of life in the world they’re leaving to themselves.
    In the big picture, things pass. The constitutions of nations, their bedrock foundations, are little more than ephemera, passing globally at a rate of about one a year. The average lifespan of a nation or dynasty runs about 200 years. Yet we can ruin something wonderful for the next ten thousand years out of misplaced devotion to these. The seven generations social and environmental ethic is often said to come from the founding law of the Iroquois federation, although it isn’t written there. Oren Lyons, Chief of the Onondaga Nation, writes in American Indian Environments: “We are looking ahead, as is one of the first mandates given us as chiefs, to make sure and to make every decision that we make relate to the welfare and well-being of the seventh generation to come. ... What about the seventh generation? Where are you taking them? What will they have?” This is certainly a far better approach to the question of long-term sustainability than the modern abuse of the term sustainable would indicate. At least it has a 140-year reach. That ad about sustainable petrochemistry is not to be trusted. But really, unsustainable behavior leads, by definition, to the extinction of that behavior, and extinction is forever. The word is actually as serious as a heart attack. To sustain means to uphold from below, to maintain the ground we stand on, the same system that permits our emergence. This is going to require some longer time horizons, or enough of a population crash to get this lesson across.
    The prophets in the holy books have generally been a lot less useful to us than the authors of our science fiction, thanks of course to the science part. Francis Bacon’s New Atlantis (1626) showed us engines, movie theaters, radios, amplifiers, and electrical wiring. Jules Verne did some good prophetic inventing as well. Yet it may be that their most useful contributions will lie in social and cultural engineering, provided the ground of their thought tracks our growing discoveries regarding the nature of human nature, and that utopias can be envisioned around something other than pure fantasy. The future can be designed, but only when we have better ideas of what really works and what’s too alien to our nature to work.



1.6 - Identity, Belief, and Belonging
   
Conviction and Commitment
Identity and Identification
Belief and Credulity
Belonging and Confidence
Secular and Sacred Values
Opening Up the System

Conviction and Commitment
    “Certainty and similar states of ‘knowing what we know’ arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. Feeling correct or certain isn’t a deliberate conclusion or conscious choice. It is a mental sensation that happens to us…. To reward learning, we need feelings of being on the right track, or of being correct… . To be an effective, powerful reward, the feeling of conviction must feel like a conscious and deliberate conclusion.”  Robert Burton
    Having the courage of our convictions is celebrated a great deal more in our cultures than having the good sense to adapt our thinking to improved information. The latter makes you inconsistent, wishy-washy, a flip-flopper, or a vacillator for exercising your adaptive intelligence. If at first you don’t succeed, you try, try again. You don’t take a moment to reflect on your failure to see if something was wrong with your original thinking. You just put your head down again and butt. Revising one’s mind is even taboo in places, and apostates are sent straight to hell. This doesn’t mean that there’s anything fundamentally wrong with conviction and commitment themselves, or that firmness of purpose isn’t an admirable trait. Without clarity of conscience, we don’t get right livelihood. But all this is dangerous stuff when it’s blind to its own errors, and remaining open to feedback is how we discover these errors. The Nazis, Jihadists, and new Crusaders of all faiths all have conviction and commitment. As Blaise Pascal said, “Men never do evil so completely and cheerfully as when they do it from a religious conviction.” Firmness wants help. Firmness and error are a bad combination. Firmness and flexibility are the pair to beat, other than Time paired with anything else (el tiempo y yo contra cualquier dos: time and I against any other two).
    Most humans seem to quit their investigations at the earliest opportunity. Self-satisfaction sets in and hardens to psychosclerosis, Ashley Montagu’s coinage for cognitive inflexibility. Faith in our convictions, even if premature, even if in the supernatural, may to some extent be an evolved mental state that keeps us from wasting time on difficult or unanswerable questions, or steers us clear of diminishing returns on investigative efforts. This could be an overthinking cutoff heuristic called “enough is enough,” but there already is one called satisficing. Mount Olympus in Greece is an easy mountain to climb, but who actually went up there to verify the presence of gods? Closer to home, Aristotle insisted that women had fewer teeth than men. He was married, but never bothered to count his wife’s teeth to verify that. We aren’t always convinced because something is true. Nietzsche noted that we are convinced merely because something is convincing. And its convincingness suggests a willing cooperation from something inside us.
    It isn’t all that surprising that we tend to jump to conclusions, and cling to those, once we feel we’ve learned enough. Learning is an investment. We put time and energy into it, or at least it’s the result of time and energy spent. And we like to regard ourselves as savvy investors. Yet truly savvy investors will know to cut their losses when they start to see their investments going to hell. Staying in that game and betting on a change of luck is called an escalation of commitment, or sometimes simply stubbornness, or throwing good money after bad. We will rely on the choices, decisions, evaluations, or conclusions already made, even when these have been less than optimum, because those costs are already paid. Making a change imperils our profitable return on a prior investment, along with the extra costs of having to acquire something new and then dispose of something old. New investments not only require additional effort: they also threaten to require unlearning the older ones. Where this occurs in specious reasoning, it’s called the sunk cost fallacy, and it’s related to the choice-supportive cognitive bias. It’s the push to keep on pushing because you’ve gone this far, and time or energy might otherwise be lost. Choice-supportive bias is the tendency to reaffirm prior choices even in the presence of reason to doubt. It may also entail an excessive devaluation of the choices forgone. It’s also called a post-purchase rationalization, and sometimes buyers’ Stockholm syndrome. We’re the captives of our previous choices. We will rationalize the value of our new money pit, or try to find justifications for our newly-acquired black hole, even though it just sucks and gives nothing back. This all reflects a desire to remain consistent, as though this were an important marker of character. Consistency is something that others want to see in us, and we get social rewards for that, because others like being able to predict our behavior.
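    The escape from the sunk cost fallacy is mechanical, if we can stomach it: compare only the payoffs still ahead, since the sunk costs are identical on every branch of the decision. A minimal sketch, with invented figures:

    # Sunk-cost check: only future costs and benefits belong in the
    # decision. All figures are invented for illustration.
    sunk_cost = 10_000        # already spent; irrecoverable either way
    stay_future_net = -500    # expected net payoff of pushing on
    quit_future_net = 0       # expected net payoff of walking away

    # The fallacy adds sunk_cost back into the comparison; reason doesn't.
    best = "stay" if stay_future_net > quit_future_net else "quit"
    print(best)               # "quit": the 10,000 is gone on both branches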
    The larger problems here arise when we become emotionally defensive of our convictions and commitments. Attitude polarization is a sort of refutation bias, the opposite of confirmation bias, but much hotter. It’s also known as belief polarization or the polarization effect. Beliefs become more hardened and extreme as the discussion, argument, or investigation continues. Our own side of the argument is unsurprisingly persuasive to us, while the disconfirming side is just rubbish, unworthy of notice, or maybe a sign of mental illness, or something that only idiots think. This is related to the backfire effect, a defense mechanism used when our personal beliefs are challenged. Things escalate quickly from questions to be addressed to trenches worth dying in. The latter is a lot more work, and it’s nearly always a further investment of questionable value. A challenge of any strength is met with an entrenchment deeper into the thought to be defended. It’s most often found in overreactions in polemical speech, and it shows a natural affinity for false dilemma. You’re either with us or against us, either friend or foe. When the “security” held in a conviction is threatened, the true believers will simply dig in their heels, or double down on an error. It seems that once the deluded, the ignorant, and the stupid manage to make delusion, ignorance, and stupidity into points of pride, or matters of faith, there can be no turning back or around. They will have to self-destruct. It’s unlikely that anyone has ever converted a Jehovah’s Witless who’s knocked on their door.
    Unpleasant emotional responses, from anger to anxiety, are fairly reliable signs that something else is going on here, something perhaps unrelated or irrelevant to the actual issue at hand. But the sense of certainty in the reaction is also just another emotion. Where is the emotion that tells us that we now have a chance to learn more, or learn something new? Do we even have one that excites us to housekeeping chores and tidying up our own minds? One clue to the unpleasant reactions might be found in Mark Twain’s note that “In religion and politics, people’s beliefs and convictions are in almost every case gotten at second hand, and without examination.” These convictions have been adopted because they have promised us things, perhaps to shore up our sense of identity, to give us a more secure sense of who we are. And these commitments are either promises we’ve made to others, or have made for the purpose of getting accepted by others. We will seldom have anywhere near as much confidence in our own original ideas. What happens is that our adopted ideas get deeply entangled with other things that meet important personal and social needs, such that threats to these ideas get perceived as threats to getting our needs met. Another problem is that in many cases, these ideas themselves are extraneous and unnecessary, and defending them costs a great deal more than they’re worth. Ideas may be useful in guiding our choices, but we make better choices when we have a range of ideas to choose from. What matters is how we behave, not what we believe.

Identity and Identification
    “[We have] a sense of the mystery of life, the mystery of the universe that surrounds us, and the mystery that is within us. It is within these vast unknowns that we try to establish our identities. We strive to carve out a place that is known, a place that we can manage, a place that is safe, a place that allows us to grow our unique Selves. This is nothing less than our struggle for psychic survival, a need for identity: tribal identity, national identity, group identity, a family identity, and finally, an individual identity.” Annemarie Roeper
    A bit was said earlier about being true, and this purposely left open the question of being true to what. How do we identify ourselves, that we might be true to ourselves? Just who do we think we are? In the larger scheme, of course, self is just a place where matter, energy, and information get knotted together for a while, before getting conserved elsewhere, all over the place. The organism persists while the organs come and go, and then the organism submits. But even to the more rigorous Buddhists, and most neuroscientists, our identities still have a conditioned and temporary sort of existence that’s somehow and somewhat more real than mere illusion and delusion. They just aren’t what we like to think they are, such as a way to protect ourselves. Generally speaking, our identities are more trouble than they’re worth. But specifically speaking, we really do have needs to come up with something to arrange our subjective parts within, and some arrangements actually work for whole lifetimes. Regardless of questions of efficacy, we wake up each day and start running that little reel or tape all over again, our autobiographical narrative. Oh yeah, I’m back again, coffee, pee, and onward into the day.
    What parts of ourselves do we identify with? We aren’t our sensations, perceptions, feelings, emotions, thoughts, beliefs, names, careers, family roles, memories, or plans. When asked, we offer as little as we can get away with: I’m a: [insert job title, role, hobby, or sexual orientation here]. Within, we seem to want to embrace as much as we can in a schema of self and its handy set of operating scripts. Even if this is just a dynamic cluster of things that’s continuously changing, that cluster still shares our name. We resist becoming “not the same person anymore,” partly in order to hang onto our friends. It feels more secure to feel consistent and continuous. What seems a need for self-consistency drives our decision-making like a kind of inertia or momentum. We like being somewhat predictable to ourselves as well as our friends. Where is our basic existential anchor? What can we think about ourselves that isn’t in constant danger of getting falsified? Fear for the boundary will come with the boundary, and counterintuitively to many, better security is had from making these boundaries permeable than from making them ever more impregnable. An opposite direction is to identify with less, in order to have less to lose, to refine ourselves down to some essence or philosopher’s stone, something that will make gold wherever we go. Our character can be something like a Swiss army knife, lightweight, portable, and versatile. Although identity may give us a sense of security, it can be more of a problem in what it denies us than a boon in what it secures. And it can commit us to frequent, emotionally and cognitively expensive defensive reactions. Our greatest anxieties and insecurities arise over who or what we think we are.
    The old theologians set up a mighty lofty identity for their deity: he was omnipresent, so therefore he couldn’t move; he was eternal, so therefore he couldn’t change; he was already perfect, so therefore he couldn’t grow; he was already omniscient, so therefore he couldn’t learn. Some of this bigness and timelessness was supposed to rub off on his followers. But all he really was was stuck, and it shows in his writing and the character of his followers. Psychosclerosis is a fun new word for getting ourselves stuck like this in rigid identities. Philosophy often emphasizes the importance of consistency as well, of being the same person, despite change. In an alternate version of Laozi’s Daodejing, héng (continuity) is used for cháng (constancy). As an ideal, this is a much better choice for identity than cháng (and it could well be the original word). Héng is also a chapter in the Book of Changes, with the meaning of enduring. It implies a process of adaptive adjustments in a being, instead of staying the same, and thus it’s what allows continuing. For a self, or personal identity, it’s a better choice in a non-dual or non-Cartesian world. Resilience, as an ability to adapt, can take cues from ecology, and embrace diversification, some exploration of alternate selves and points of view.
    There’s a difference in vulnerability between someone who says “I am a butterfly collector” and another who says “I like butterfly collecting” and yet another who says “sometimes I will collect butterflies.” In the first case, any criticism of the profession, hobby, or sport of butterfly collecting becomes a personal attack on the person who does this. In the second and third cases, the criticism is reduced to feedback on a behavior that’s separate from the sicko who goes out slaughtering beautiful things for pointless entertainment. We put ourselves in defensive positions less often when we leave a little distance between ourselves and what we do or believe. Does this make us less? Does this diminish our involvement in life? One could in fact argue that greater permeability in our identity increases our involvement while spending less time and energy on self-defense.
    How much of our notion of self is all about getting up sufficient esteem, confidence, and courage to keep going? And might not this be all that we need to fight for and defend? Even an attachment as a partial identification can become something we can’t let go of, lest we feel less whole and secure. But is that not less wholesome and more insecure? It seems a little peculiar that we get as invested as we do in ideas that can be attacked, such that we have to take the attacks on these ideas personally, as attacks on our own identities. Perhaps this is why we favor ideas of who we are that are shared by others, or given to us by others (with all due allowances made for our specialness), that we might have more safety in numbers. But there is still no safety in numbers when the errors are really big. Maybe identifying as a Nazi wasn’t such a good idea.
    Jung wrote, “The world will ask you who you are, and if you don’t know, the world will tell you.” The most rigid and defensive identities seem to come from without, from being told who we are, and what we need, and usually from a very young and formative age. Many will find it the most important thing to declare what nation they were born in, or what deity their parents got them to worship, or what party they were taught to support, and will declare these identities to be truly exceptional for obvious reasons. They will identify with certain ideas and institutions which they must then fight and perhaps even die for, to keep themselves free from both criticism and change. We also borrow much of our identity from our heroes and role models. This adds to our sense of prestige and legitimacy. When we do this collectively, let’s say as a species, identification with our best and brightest might inspire us, but it largely ignores what we really are and can easily turn delusional. At bottom, human is as human does, no matter what the poets and the philosophers say. Finally, there are the identities that derive from the things that we ourselves have felt, thought, and done. Becoming who we are is also in part becoming who we want to become, an artifice or art form that reveals more of who we already are in the process of creating. In higher gears, this involves a sense of purpose, and our purpose itself is also at least half a chosen thing, not always something discovered. It may be a summation of previous vectors in life. Creating like this bodes well for letting go of the outdated, and even for maintaining a need to do this regularly or continuously. And letting go of the old stuff is a bit like sending for the Salvation Army truck every time you move. It’s freeing in a way, with just a pinch of bittersweet.

Belief and Credulity
    “Now faith is the substance of things hoped for, the evidence of things not seen.” Heb 11:1. WTF? And this is what, good, or what? Explain yourself.
“Some things have to be believed to be seen.” Ralph Hodgson
“For every credibility gap, there is a gullibility fill.” (Richard) Clopton’s Law
    Belief is a state of mind, position, or attitude wherein we think something is the case, with or without supporting evidence. It nearly always has an affective component that may be related to the effort of its acquisition, to a personal sense of identity, or to social relationships. Affect also motivates belief. We wish things were true because we will feel better if they are, and then we employ our reason to rationalize this. The Stanford Encyclopedia of Philosophy (SEP) explains one theory about it: “According to connectionism, cognition proceeds by activation streaming through a series of ‘nodes’ connected by adjustable ‘connection weights’—somewhat as neural networks in the brain can have different levels of activation and different strengths of connection between each other.” Belief is thus represented here by high activation and strong connections.
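    For the curious, a toy version of that picture: one node receiving activation from three others through weighted connections, squashed into a 0-to-1 activation level. The numbers are invented; this sketches the connectionist idea, not any actual brain circuit:

    import math

    # A single connectionist node: activation streams in from other
    # nodes through adjustable connection weights, gets summed, and is
    # squashed into an activation level between 0 and 1.
    def activation(inputs, weights):
        net = sum(i * w for i, w in zip(inputs, weights))
        return 1 / (1 + math.exp(-net))    # logistic squashing

    upstream = [0.9, 0.8, 0.1]     # activations of connected nodes
    weights = [2.0, 1.5, -1.0]     # learned connection strengths
    print(f"{activation(upstream, weights):.2f}")   # 0.95: a firmly held belief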
    Belief is the confidence or readiness to investigate no further, or the go-to condition when we decide we’ve had enough of learning for now. At some point along most learning curves, the brain will signal “enough is enough, we’re done here.” When a belief is held, the necessary and sufficient conditions for investigation or testing are deemed to have been satisfied. Belief also seems to signal a readiness to react in support or defense of a particular representation of reality. Some beliefs are based on hearsay, some are lucky guesses, and some are well-grounded in evidence. Nothing is really wrong with taking vetted knowledge seriously, or even with depending on it, sometimes even with our lives. But our neocortex, particularly the prefrontal part, is set up differently, to run vicarious trial and error scenarios and then weigh options, mostly on how we imagine those will feel when completed. Ideas have no real right to be smug in the face of this. They ought to be perpetually ready for testing. Putting an idea to the test is the real meaning of the word prove. With belief, the idea is already thought proven.
    Most people might claim that some degree of unconditional belief will be necessary to get through our lives, to have the requisite faith or confidence to persevere through troubled times and our dark nights of the soul. Losing this becomes a crisis of faith. Our nature does seem to demand that we either believe in some things or do something else that serves the same purpose and supports confidence. While belief may be the default setting in our operating systems, substitutes are available as learned alternatives for people who are able to learn them, and these substitutes may bring additional cognitive and emotional benefits. I would argue that, rather than belief, some degree of conditional acceptance is necessary, or we would be paralyzed by doubt. We could call this skeptical vetting and verification, with full commitment held in abeyance pending some future disconfirmation, a probation period that could last a very long time. This acceptance is not the same thing as approval, but it still gives us our place to stand. And we have our way out when we’re told to die for our beliefs.
    We may still have to ask if there is a difference in neurological makeup or temperament between a believer and a skeptic. In other words, is skepticism even available to a born believer? Might some kind of innate predisposition or bias be involved? We do know that anyone who can believe in talking snakes can be made to believe anything, except that snakes can’t talk. But if we get to them before the snake does, can we get them to conditionally accept skepticism? We would still have to watch for those Trojan horses of credulity, though, which could make Santa as problematic as Satan. A child of four may already grasp that mental states can differ from reality, that the beliefs and desires of others will drive their behavior, that knowing these states enables prediction of behavior, and that these may differ from their own, opening a door to questioning the truth or falsity of their own beliefs and desires as well as those of others. This may be a good time to start working with this ability they have. In some cases, it may be the best time to have that talk about liars.
    A degree of conditional acceptance might yet be 99.99%, particularly for things that work, answer questions, or solve problems repeatedly and without fail. A belief is really only as good as its predictive value. We might score the theory of evolution at 95%, but this is primarily to leave it some more room to evolve, since we know it’s still doing so. Unconditional disbelief might be considered. What score could we give to the world being created in 6 days, 6,000 years ago, by a mid-Eastern, Bronze Age tribal deity who looks just like the prototypical image of man? Can we at least give it maybe 5%, if only for its metaphorical and diagnostic utility? It does say a lot about us, most of it kind of embarrassing. We can accept that this is a story, and that it appeals to people who can’t think very well, and we can look for why that is. But technically, and logically, you can’t prove a negative. Many will claim that there can be nothing certain either way, that all speech will speak some kind of truth, and this way of thinking has become quite the fad. An absolute truth would have to be true from many different perspectives. But there is quite likely such a thing as absolute error, and it seems it can be found even in the minds of human majorities. The best weavers of traditional tapestries and rugs would deliberately weave in an error that wove its way to the edge of the fabric, a little something to let the evil spirits out. Even the firmest of our conditional acceptances should have one of these, a way out, and a way for new, penetrating questions to enter. In theory at least, error will fail from its own weaknesses and consequences, but sometimes this requires geologic time.
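    One way to keep an acceptance conditional is to treat it as a credence that must move with the evidence, Bayes-style. A minimal sketch, with an invented prior and likelihood ratio:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood
    # ratio. Here a 95% conditional acceptance meets one piece of
    # disconfirming evidence that is four times likelier if the claim
    # is false. All figures are invented for illustration.
    prior = 0.95
    likelihood_ratio = 1 / 4   # P(evidence | true) / P(evidence | false)

    odds = prior / (1 - prior) * likelihood_ratio
    posterior = odds / (1 + odds)
    print(f"{posterior:.2f}")  # 0.83: still held, but less firmly

A credence that never updates this way, no matter what comes in, has stopped being an acceptance and become a fixture.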
    Folk beliefs, superstitions, and myths have been with us for ages. While non-believers and agnostics have been present in our societies all the while, however incognito and fearful of exposure, anthropologists will tend to homogenize these cultures and assume the beliefs to have been culture-wide. This does us a disservice, but it says some things about anthropologists, and this will allow us to question them. A philosopher situated in a culture and history may have one set of ideas, the state religion another, and the folk religions practiced out at the edge of the wildlands yet another. The philosopher’s account will say one thing, the stone inscriptions at the palace another, and the primitive rural artifacts yet another. They do not tell the same story. Looking for the beliefs of a culture, therefore, can often get a little misguided where monoculture is assumed.
    Today, pseudoscience and conspiracy theories are attracting ever-larger followings in uncensored social media and its echo chambers. Some studies are suggesting that this is something quite other than the bandwagon effect, but still a matter of personal identity. The credulous folk are being attracted specifically to minority points of view, at least in part because it lets them feel special, or especially woke and in the know. Lingo borrowed from quantum physics seems to get used a lot by people who can’t do high school algebra. It isn’t a new phenomenon that consciousness is given an honored place at the very start of creation, but this blossoms now at a time when we’re learning a lot more about how limited consciousness is. Gullibility and sloppy learning add up quickly to sloppy thinking, and that stuff soon shows. So far we don’t seem to have much of an ethic regarding the circulation of bad information. It’s not widely regarded as a social or cultural wrong, even to spread it to our children. This may have to change, but it’s hard to say at this point whether the motivation will be a matter of shame and embarrassment or conscience and integrity. Shame doesn’t seem to be working at all in the very ignorant: they’re just doubling down and defending their faith, even claiming that any hard evidence must be the work of the Devil.
    Marjaana Lindeman (2016) has found strong correlations between poor understanding of the physical world and religious and paranormal beliefs. Questionable cognition is particularly correlated with “low systemizing, poor intuitive physics skills, poor mechanical ability, poor mental rotation, low school grades in mathematics and physics, poor common knowledge about physical and biological phenomena, intuitive and analytical thinking styles, and in particular, with assigning mentality to non-mental phenomena [like physical processes, lifeless matter, artificial objects, and living but inanimate phenomena].”
    To many, even a challenge to the very idea of belief, without reference to a specific tenet, will evoke a defensive posture, a closing of the mind. Belief systems and structures can get so interconnected, complex, and fragile, with unpredictable consequences from the collapse of any portion, that anxiety over potential crises of faith overtakes all reason. This is especially so if one has identified with a faith, claiming “I am such and such” rather than “I like such and such.” Then the crisis of faith becomes a crisis of identity, or an existential threat. Where a conditional acceptance is substituted for belief, the crisis is a little more of a critique to be examined and not something to be taken personally, although we may still get excited when arguing the case. Delusionally rigid belief assumes that all evidence to the contrary is there to be ignored or resisted, or placed there by the Adversary to test, torment, or seduce us.
    Belief, certainty, and conviction can be real mind killers. Like foot-binding for the brain, these are most effective when started or applied early in life. And beyond some critical point, it’s mighty difficult to go back to not being crippled. There seems to be a common belief that belief itself (or faith) is prerequisite to mystical, unitive, spiritual, or religious experience. But this only loads us up with more baggage to drag along on our journey beyond and into the One True Thing. It’s the worst kind of cosmic tourism, and the opposite of going native, or naked. And all we see and learn there is what we expected to see and reaffirm. This is just second-hand mysticism. We ought not settle for anything less than the good stuff, the real deal.
    Neither is any belief superior to conditional acceptance when it comes to good behavior. Atheists and agnostics are notably underrepresented in prison populations, particularly in relation to conviction for crimes against others. Moral dogmatism will correlate somewhat with moral concern, but much less strongly with empathy or actual moral behavior, except towards in-groups, thanks in part to our gift for hypocrisy. Behavioral beliefs can come to us via social or cultural channels, or they can evolve through personal reflection and reasoning. This is perhaps a good place to draw a distinction between morals and ethics. Morals tend to derive and get adopted, without a lot of reflection, from ambient social mores, and from the evolved behavioral underpinnings that primatologists are studying. Ethics is a branch of philosophy and implies examination, including study of the primatologist’s underpinnings. Many of the social aspects of belief can be thought a part of belonging. Their adoption is traded for social favors, security in numbers, access, and privilege. There is plenty more on this in the next section.
    Belief persistence in the face of contradiction or disconfirming evidence is even more of a problem than belief itself, and it’s usually both cognitively and emotionally expensive to maintain. Belief modification can be a sudden process, as with epiphany, samvega, or elucidogenic inspirations, or a gradual process, as with recovery, deprogramming, or simply growing up. We might never get fully rid of our old beliefs. A belief can be defused of affect without losing its associated representations, which may then be accessed ever less often, or disappear from memory altogether as alternatives are used instead. Extinction will take some time, even with the more sudden processes. And even with repudiation or apostasy, our chances of learning to either manage or forget our longer-held or entrenched beliefs will diminish with our age and with their interdependence or interconnectedness with our personal bodies of knowledge and experience. Beliefs can come with fail-safes to secure and protect themselves, such as the threat of burning forever in hell. It seems only fair that we find ways to integrate them in ways that will let us disintegrate them.

Belonging and Confidence
    “Man is a social animal; only in the herd is he happy. It is all one to him whether it is the profoundest nonsense or the greatest villainy - he feels completely at ease with it - so long as it is the view of the herd, and he is able to join the herd.” Søren Kierkegaard
    Like most primates, we’ve evolved in highly social contexts and generally have a difficult time living in isolation, let alone thriving. If the evolutionary psychologists are correct, it’s really no wonder that the loss of a home tribe is terrifying. In ancient times, this usually meant death. With the silliness of the war on sociobiology having calmed down a bit, we can start looking at innate social skills as traits evolving through group selection, with tribes who got it out-adapting those who didn’t. And we can see some obvious roots for these skills in social primate societies. The benefits of group living must on average exceed the costs, but these are statistics, and individual lives are specifics. It doesn’t always work out for individuals. We’ve evolved to read each other’s minds without the use of words, although this is less than perfected. We can also learn much by mimicking each other on a number of levels, thanks in part to specialized multifunctional neurons in “mirror neuron” or monkey-see-monkey-do networks (not precisely the same thing as mirror neurons). With these, seeing an action or imagining it prepares us to perform it. We have also evolved a large array of emotions to tune us into getting along, or to end what prevents us from doing so. We tend to suffer when others suffer, and support them when they show skills. We’ve learned in our genes to feel insulted, betrayed, used, and angry, that we might better punish the betrayers and cheaters.
    Our need for connection varies, but with the exception of hermits, shut-ins, and some aspies or other autistics, most adults will go to extreme lengths to maintain a sense of belonging to some tribe or group. Maslow identified a number of needs that are usually best met by thinking and doing whatever it takes to maintain belonging in some group, including finding and starting a family, the safety and security needs, belonging and love needs, and esteem needs. Aside from our common ground, needs for belonging differ somewhat between sexes, with F-types tending to favor the close, intimate, catty, and chatty, and M-types, the shallow, numerous, boastful, and competitive. We’ll talk about your reaction to that somewhat later. Both may take the form of some declaration of membership in a society, along with the acceptance of that society’s credo or mission. Unfortunately, such membership nearly always entails a non-membership or anti-membership in at least one other group with a different mission or set of values. Apathy or antipathy is a usual result, and the roots of these go deep into neurological processes, affecting the ways we perceive or discount those others at unconscious levels. Our fellowship is here in this clan, not with the out-groups, those people, who represent things we are not, and do things we would never do. And we would have to go live with them if we ever got banished from us, or else have to wander alone in the wilderness. Both of these could kill us to death, so we become who we need to be to prevent this. We frequently have to accept incredibly stupid myths and theories, and eventually come to believe them, and defend them with lethal force if need be. We check ourselves so often for compliance with our adopted norms, that these norms become the reference for who we must be, or who we are when we’re worthy, and therefore what we become.
    A tenet central to our sense of belongingness doesn’t have to be a majority opinion. Even fringe beliefs (consensus be damned), like pseudoscience and conspiracy theories, may be matters of belonging to a small, esoteric, and elite group that’s ahead of its time, ahead of the learning curve. Challengers become, in Scientology’s terms, suppressive persons, and so they’re part of the conspiracy. Attempts to shame are suppressive. The absence of evidence is evidence of a coverup. The intelligentsia, including scientists and guild monopolies, have a bit too much of this exclusive minority attitude as well. The independent scholar is treated pretty badly by academia and its journals. You are nothing if you aren’t affiliated with an institution. You can’t be out toeing your own lines in the sand - that desert you’re in is a banishment, not a canvas. If you don’t belong, you’re suspicious, a ronin, or a rogue.
    Access to culture usually comes next after membership itself in importance. If you think about it, before we had inter-generationally transmissible culture, humans were lucky to come up with a new stone tool every thousand years. We don’t know when articulate language got started, but before that, we had neither verbal instructions nor stories. We could just gesture, point at stuff, and make noises. Culture is super-organic, a hive mind, and a genius without consciousness. With this, we stand on the shoulders of giants, or tall piles of littler people. Given this importance, it’s a big deal for us to disagree with our fellows, particularly outwardly and against a consensus. Solomon Asch ran some well-known experiments in the 1950s that demonstrated just how far people were willing to go to conform to a majority opinion, against the clear evidence of their senses. And Philip Zimbardo, with his infamous Stanford Prison Experiment, provided us with a truly frightening demonstration of how quickly this could degenerate into unacceptable behavior. Kant found a place in his ethical theory for our servility, the way we socialize ourselves into conformity, at the expense of our own self-respect and moral inclinations, in order to curry the favor of others and of groups as a whole. It’s done out of a kind of ambition that’s partitioned some distance away from our moral sense.
    The rewards of belonging are plentiful enough, especially for those who get started in a healthy, functional family, and grow up among siblings and friends. The biggest downsides are found in what we give up to belong, the freedoms that we surrender, the thoughts that we now must never think, and the feelings that we now must never feel. Never mind that it’s in our nature to exercise those prohibited freedoms, think those forbidden thoughts, and feel those forbidden feelings. And when they poke their little heads up out of the unconscious, we are terrified, and see them as monsters from the Id, and fight them back with our guilt and our shame. Big parts of who we could be are pruned or trimmed away, and often for reasons that make little rational sense. We conform, pledge loyalty or allegiance, and submit to peer pressure that’s strong enough to kill us, or ruin our mental health. We have authorities and experts to answer all of our questions. We just haven’t been all that great at congregating on the basis of our diversity, or celebrating our individual differences, congregating face-to-face with each other. For the most part our congregations have to focus on something else: an objective, a third party, someone or something elevated up on the dais, something that we all have in common, like a central tenet or leader, to unify and homogenize us. Some diversity creeps back in as we develop sub-cultures within a culture, as coalitions form and dissolve to explore what freedoms and diversities are allowed to remain. But the permissible range of these subcultures is normally well constrained.
    The exit from a state of belonging can be terrifying, but it’s become far less so with globalization. It’s easier now to reach out to others of like mind, instead of forcing our minds to like others. In some cases, we can now join physically with these others to create new lifestyles according to shared ideas or principles, or move to another province or country where we would feel more welcome. Small-scale lifestyle experiments like intentional communities or ecovillages are becoming more acceptable in a few places, but they’re still largely perceived as threatening to the status quo they want to replace. An exit from faith-based groups comes with its own set of difficulties, especially if it’s been part of believing that apostasy or infidelity means damnation. A loss of religion or crisis of faith threatens identity, belief, and belonging all at once. Exiting isn’t just the reverse of converting and joining. In such cases, the leap may land somewhere only slightly better, forgoing any real promise of freedom.
   
Secular and Sacred Values
    Axiology studies both aesthetic and ethical values, the prognostic tests we have for what experiences we deem worth having, and for what behavioral approaches are best suited to getting to them. We treat our values, identities, beliefs, and memberships like belongings, some more precious than others. They seem to add to our substance, our weight, our girth, and of course, our own value or worth. Values are normally thought of as normative qualities, shared more often than not, and generally consensual ideas about goodness, worth, and truth. We try to see them as more objective than they really are. Values invented by the manipulative as a means to their own ends, or blindly assigned to individuals by their culture, can be counterproductive and toxic to both the individual and to culture. This applies to both aesthetic and ethical values. Morals are related to mores, social norms which are often adopted implicitly, without contract or question. Ethics, on the other hand, is a branch of philosophy, and implies that the subject has been reflected upon and conclusions drawn more explicitly.
    Values can be sorted into intrinsic, reasoning about ends, vs instrumental, reasoning about means. Thinkers think that, but values are much more than reasoning about anything. Only aspects of them are objective assessments, or declarations of personal and cultural standards. Much more than this, they are the affective components of decision-making processes in the PFC. Values are the emotional weights of the choices under consideration. Intrinsic values are the instruments in our decisions or choices; they’re the twin goads of carrot and stick. Instrumental values are those that get us to the goods, to our optimal thriving and adaptive success. These are behavioral or ethical values and can be highly individualized. But individuality is a position which has received some questionable justification from numerous sources, including objectivism, postmodernism, and moral relativism. Thankfully, we seem to be consistently driven back to two ancient maxims: Should it harm none, do what you will, and the Confucian law of reciprocity: What you do not wish for yourself, do not do to others (Analects 15:24).
    We should begin any search for an objective assessment of value, if there is any, in our shared biology, and include evolutionary zoology, primatology, and evolutionary neuroscience. This gets us embroiled right away in the is-ought problem: if our nature has adapted to a certain way of valuing (and hence, behaving), is being true to this nature usually the best choice? Going against our evolved nature will set up internal conflicts, but might keep us from questionable behaviors, like murder. Being true to our evolved nature might forego some of the useful behavioral options our dlPFC affords us. We do at least know now that disrespecting and violating our nature is a really bad idea. The clearest place to begin a study is with the values that help us to address and meet real needs. In societies constructed largely on the creation of artificial needs, which tend to arrive in our heads via propaganda and advertising, determining what real needs are becomes problematic. After this, we can be concerned with assessments of behavior leading to the well-being of individuals and the societies they inhabit. This is the Utilitarian approach. It wants an assessment of what happiness means (or better, eudaemonia) and also what we ought to consider regarding the long-term success of our future generations. Our consuming more than we need for contrived and irrational reasons might be examined both morally and ethically here.
    It’s interesting, in a scary way, how the worth of things in life is assigned, and usually by others. Worth is too often thought of as given or found, not made or taken. It’s interesting, too, how the terms used in economics derive from things that should be more important than material gain. It’s easy to find better uses for the following words: appraisal, appreciation, appropriate, assessment, balance, charity, contribution, credit, economy, endowment, enrichment, enterprise, equity, fortune, indebtedness, inheritance, interest, legacy, leverage, liquidity, pledge, precious, premium, proceeds, purchase, realize, redeem, reserve, resource, reward, richness, right, security, solvency, speculate, treasure, trust, value, venture, and wealth. Sometimes the parts of speech can change: the words prize, trust, treasure, and value, for instance, work better as verbs, and more to our benefit. Every one of these words from economics refers to something we can get or do for free: they hold the keys to being rich, satisfied, and grateful without spending much money, or working hard at anything besides our own attitudes.
    Using valuation in deliberate ways means working out our own values independently of public opinion, relative scarcity, supply and demand, and what others are paying. We would have to quit reasoning so automatically about which thing is better, or worth more. We would need some affective self-management, some ability to defer gratification, and some reframing abilities. The first great words on our ability to adjust, revalue, or reevaluate our own values, to take the bold step away from conformity and decide for ourselves what has worth, belong to Nietzsche, whose Zarathustra spake thus: “From the Sun did I learn this, when it goeth down, the exuberant one: gold doth it then pour into the sea, out of inexhaustible riches, so that even the poorest fisherman roweth even with golden oars! For this did I once see, and did not tire of weeping in beholding it.” A capacity for revaluation of values is central to his philosophy. This often conflicts directly with pressures to conformity, where values are consensual units of measure, and rendered more or less objective by spoken and unspoken social contract, and even inscribed on tablets of law.
    A sacred value is regarded as a different sort of creature, “defined as any value that a moral community implicitly or explicitly treats as possessing infinite or transcendental significance that precludes comparisons, trade-offs, or indeed any other mingling with bounded or secular values” (Tetlock). This isn’t always what we do with god on our side, or that dynamic duo of god and country, or for the revolution, but if it’s infinite and transcendental, it will likely be regarded as worth more than life itself. Sometimes it matters little if it’s my life at stake or yours: there’s glory, paradise, and immortality in there at the end somewhere, for me at least. And it’s hell for you and your kind. With a promise like that, the value is often immovable or unshakable until its holder has gone to glory, or just gone. Nietzsche, of course, would step in here, with a hammer, and declare a Twilight of the Idols. Sacred values, more than any others, warrant a thumping, to sound them out for hollowness. Truly sacred values should demand that we take charge of the power to value, and refuse to serve any unquestioned thing. Nothing is worth more than life itself, except more life itself.
    What would you be willing to die for? What would you be willing to kill for? Or kill your child for? And why in the hell doesn’t everybody just call Abraham batshit crazy? It’s just not thinking straight. Sacred values inspired Napoleon and Hitler to march on Moscow in the wintertime, and Boudica to battle the Romans on Roman terms. Most of the soldiers who have died in our wars have died for sacred values, for god, king, and country. And many more than that have been innocent bystanders who have died as collateral damage. On the truer side of sacredness lies the value of our loved ones. Is this infinite? An often-seen movie trope these days, in spy, adventure, and dystopian films, features someone close to the hero whose wife or child gets kidnapped by the terrorists or evil masterminds. This character is a decent person, in a position of power or authority, with the ability to make things happen on a large scale, but he’s “forced” to assist the evildoers to save his loved ones, even knowing that the deaths of thousands, or maybe millions, are at stake. This is what Tetlock means by infinite significance. The conflict between a rational approach and one that’s more subjective is more calmly illustrated in the strictly hypothetical trolley experiment, where subjects might flick a switch to divert a runaway trolley to kill one person instead of five, but would hardly ever personally push that one person onto the tracks to achieve the same result. Sacred values confuse our senses of scale.
    Secular values can be contrasted with sacred values in much the same way as personal purpose can be contrasted with higher purpose. Personal purpose can move a person along towards personal fulfillment or self-actualization. It’s a calling or vocation, and some personal rewards can usually be expected. Higher purpose is in service to something greater than ourselves, such that the agent doing the serving becomes less important than the cause, and even expendable as needed. Any rewards, or even happiness, are beside the point. Secular values operating in decision making will want a due proportion of cognition and affect, a balance in an appropriate and rational ratio, to arrive at a proper investment for a reward. But service to sacred values may not be rewarding at all to the individual. These are causes people may be prepared, even willing, to die for. Or maybe they will just merit a lifetime of effort with little or no extrinsic reward. They are beyond cost-benefit calculations. They normally hail from the culture at large, often from political and religious belief. But sometimes this is just from a general sense of indebtedness or gratitude. The condition isn’t always dire, particularly when the sacred value emerges from character, within the individual. A scientist might forego finding a mate and family life and stay buried in vital work for a lifetime, in service to the culture, or in search of a much needed medical cure. Albert Schweitzer, a fine example of this higher kind of higher purpose, wrote, “I don’t know what your destiny will be, but one thing I do know: the only ones among you who will be really happy are those who have sought and found how to serve.”
    Sacred values can be troublesome, demanding perfection and promoting intolerance, even intolerance of most of humanity. People might view them as defining who and what they are, providing a core to be held in common, on common ground, as common cause. It would be amazing if we could find a set of them that we could all or almost all agree on, perhaps a consensual secular ethic that we could make sacred, that could make us all an “us” and put a permanent end to war, ecocide, and human parasitism.

Opening Up the System
    It’s a part of bounded rationality to put boundaries around our own reason, to say that our learning stops here because now we’ve learned enough. Belief may be the default condition that our minds try to attain when taking in new information, the wish to claim “this much was necessary, but now, this much is sufficient. I can put the questions away now.” This is homeostatic, and avoids needless cognitive loads, as well as some needed ones. There is almost always more to be learned of the matter, although there is also a good chance of diminishing returns in pursuing the matter much further. How can we override this default condition with a newer program that leaves these boundaries of ours permeable to relevant new information, to disconfirming evidence, or even reasons to abandon the belief?
    Firmness of belief or conviction gets a great deal of public praise in this civilization. Changing the mind, conversely, tends to get publicly shamed. Doubt is a thing to be conquered, even if by faith alone. Most of our tyrants, fanatics, and other ideologues have firmness of belief and conviction. That an idea must continue to prove itself worthy just isn’t part of their programs. The Han Dynasty and earlier Chinese used a pair of concepts called gāng and róu, firmness and flexibility, around the time that yáng and yīn captured their imagination. These were considered equals, and complements, neither more valuable than the other, as long as they appeared where appropriate. If we could learn from a younger age that it was fine, even praiseworthy, to be able to change our minds, we might grow into more knowledgeable and intelligent adults. But we would need to learn to own our errors and even take some pride in standing corrected. Donald Foster writes, “No one who cannot rejoice in the discovery of his own mistakes deserves to be called a scholar.” And Emerson: “Let me never fall into the vulgar mistake of dreaming that I am persecuted whenever I am contradicted.”
    Is belief itself the problem, or is it identification with a belief? If I were to say “I am a Christian” or “I am a Republican” [and please shoot me if I ever do], then I’m setting myself up to take any criticism of those ideologies as a personal attack on myself, on this thing that I am, on what I have chosen to be, on what I’ve put my heart into. And my response will almost certainly be defensive, and probably smug, angry, or desperate. The worst cases of rigid boundaries involve claims of infallibility, where a teaching cannot be questioned, or of exclusivity, where a teaching is claimed to be the only truth. Belief alone isn’t really as dangerous to a life of personal growth as identification with belief. The combination brings the worst of both worlds. If instead, I were to say “I like Christianity” or “I like the Republicans” [just hose me down and fetch my meds], then I still have plenty of room to choose a more rational approach to this criticism. This renders the boundary around my belief a little more permeable, and it implies that there may be something beyond the boundary that I’m still willing to learn. Simply to claim to be free of belief isn’t enough: we could have a Trojan horse in this belief that we’re free, and give quarter to toxic ideologies that exploit the word freedom.
    Life without belief doesn’t mean that disbelief or cultural apostasy must be openly declared. The social consequences of being a maverick in this are well known and often rightly feared. Continuing on in the uniform, complicit on the outside, can be lifesaving as well as nerve-wracking. But the word won’t get spread this way, and the movement towards better ideas won’t grow. Once a “standard model” begins to show a more obvious inability to come to terms with new data, it becomes a lot more acceptable to don the loincloth, grab the spear, and join up with the rebel forces.
    The extended metaphor of thermodynamics in cognitive systems suggests that closed systems are doomed to decay. Systems require inputs of energy and information from outside the system in order to self-organize in healthy ways. And it’s out of this that our sentience emerges. A great two-sided tool for opening up these cognitive systems is a combination of suspension of belief with suspension of disbelief. Consider this something of an airlock and mudroom in the homes of our minds. We let a thing partway in, where we can greet it and check its ID, shake paws with the thing, and get to know it a bit. We’re hedging our bets here. We’re diversifying. The legend of Baucis and Philemon tells us that these strangers might just be Zeus and Hermes, going incognito through the neighborhood. It just doesn’t always pay to be rude to everybody, and wondrous things often hide in the ordinary and unexpected.
    The readiest alternative to the fixed belief is a provisional or conditional acceptance. Skepticism isn’t the enemy here. It’s just the Magic Rub eraser that every good author needs. We might suppose critical thinking would then be the pencil sharpener. It’s an easy substitute in theory, but it needs a lot of practice. Even some of our better scientists have been known to declare belief in this or that, but it’s alien to real science to claim a belief in a theory, or even a law. The word prove didn’t originally mean to establish beyond doubt. It meant to test or evaluate, to find the limits of a proposition. This is why “the exception proves the rule” used to make a lot more sense.
    Eclecticism is a learning approach that will search and research a number of sources, and not hesitate to pick them apart, and take away only the best bits. It was first developed during China’s Warring States Period (475-221 BCE), when it was called Zájiā, the Miscellaneous School. Packagers and defenders of fully assembled systems of thought and belief don’t much like eclecticism. Obviously the Catholics won’t want to have Buddha quoted in catechism. Christians will want to claim the Confucian Golden Rule as their own. 12-step programs warn about picking and choosing which steps to take: that’s the supermarket approach. A true eclectic might still want to go to 12-step meetings, but might instead look for a different, tailored set of twelve useful things there, that don’t require inauthenticity, a belief in a deity, or a disease mentality. He can still be in a room where nobody will believe his bullshit excuses, which may be the main point of having such meetings. But one who can separate the germinal from the chaff can still go to church and sing in the choir, dump the bits about faith and religious belief, and still keep, in the words of Sam Harris the Godless, the “spiritual experience, ethical behavior, and strong community.” The eclectics will at least be a little freer to unbuckle their seat belts and move about the cabin.
    The wisdom of placing and keeping ourselves in more optimal information environments is an important lesson to learn. When we’re simply comfortable behind our more-or-less impermeable membranes, we are said to be in an epistemic bubble. We don’t receive data from without, and we might not even know it’s there. The errors we make here are those of omission. We simply don’t investigate beyond our own fields of interest or disciplines. The more sinister condition is when we’re within a belief system, or a group organized around one, that actively rejects good information from the greater beyond. Disconfirming evidence is attacked before it gets presented. The system is rigged to preclude new data, a process called evidential preemption. This is sometimes called an echo chamber, and it’s becoming increasingly common with the rise of corporate news and social media. The first step here can only be removal from the echo chamber, followed by an extended period of exposure to a broader spectrum of information.
    We have to ask what people really get in return for certainty or conviction. Sometimes it will be important to find replacement sources for the security that abandoned systems used to provide. Or else we might try learning how to hold to uncertainty and still maintain some self-confidence. The “wisdom of insecurity” is held as a value in some non-theistic Asian traditions. It can be done. To maintain a distance between ourselves and our thoughts, including our thoughts of who and what we are and where we belong, gives us room to move and look around. It leaves us with windows and doors to the rest of the world. Our affect, our feelings and emotions, the processes that support our self-schemas and scripts, those aspects of ours that are based on the past and emerging from unconscious processes, could use a little distancing while the different parts of the prefrontal cortex sort things out, to be more sure before choosing a path. But we have to unlearn the urge to react and defend. Fear for the boundary comes with the boundary.



1.7 - Conditioning, Persuasion, and Ideology
  
    Being Told What to Think and Feel
    Classical and Operant Conditioning
    Persuasion, Public Relations, and Advertising
    Ideology, Indoctrination, and Propaganda
    Us-Them, Social Consensus, and Weltanschauung

“At least two-thirds of our miseries spring from human stupidity, human malice, and those great motivators and justifiers of malice and stupidity: idealism, dogmatism and proselytizing zeal on behalf of religious or political ideas.” Aldous Huxley

Being Told What to Think and Feel
    Over the first couple of decades of our lives, most of us are deliberately cultivated to become adapted members of our society and culture. Much to most of this cultivating is done with ulterior motive, a concern more with cultural functionalities than with authenticity and the individual’s well-being. In the more civilized world, a large percentage of this training is given to participation in the economy, and conformity with the ambient political and religious ideologies.
    On a more primitive level than culture, society hivemindedly controls an ambient system of rewards and punishments. Rewards are generally the same as in other primate societies: prosocial experiences, elevated status, and opportunities to fornicate. Punishments might be nothing more than induced anxiety over the threat of punishment, or powerlessness, insecurity, or fear of any number of need deprivations. But they can range upwards to loss of life and limb. Approval will always be conditional and probationary. Violations and breaches are nearly always some form of setback, if only for several hours. Distrust or hatred of out-groups is easy enough to manipulate, and the boundary between us and them can be used to tell someone what it means to belong to us. One of the most effective tools is to develop an anxiety that has no resolution, and then offer vague promises of its relief. Normalization of the norms is accomplished by baby steps.
    Entrainment to the desired norms is no longer just from subtle nudgings of social rewards and disapprovals. The arts employed to get us on track are getting more sophisticated every year. Since Gutenberg, and the subsequent development of the pamphlet, newspaper, and catalog, the proselytizing of political agendas, religious beliefs, and commercial enterprises has gone ahead full bore. But the potential became considerably more frightening after Edward Bernays turned public relations and propaganda into a more effective technology, and his contemporary, David Ogilvy, did the same thing for advertising. The effectiveness of these evolved with psychology, and the understanding that influence was not just a matter of writing wishes onto a blank slate. It required savvy with the target demographic’s egoic and social needs, emotional reactions, and cognitive biases, as well as the development of persuasive rhetoric that could capitalize, with or without evil intent, on the public’s poor grasp of informal logic. When specious logic is being used, either the persuader doesn’t know the fallacies he’s using, or else he does. Either case should call his credibility into question.
    A real need to develop some kind of social control arose when populations outgrew our adapted social abilities, beyond the hunter-gatherer tribe, the extended family, and the village, when we had to live with people we didn’t know, or didn’t know well. The ability to predict the behavior of others is vital to reliable decision making and the lessened stress or anxiety that goes with that. Our chief social currency is trust, reliance on that predictive ability, and we need some kind of backing for that. It doesn’t work as a fiat currency. Urbanization, specialization, personal property, and the logistics of large-scale cooperative projects intensified the problem. And most of our civilized cultures hit upon the wrong solution: an omniscient, omnipotent deity would appoint a king to make up the rules and enforce them. Violators could incur the wrath of the king, who ruled this life, and the wrath of the deity, who ruled the rest of eternity and our fate therein. This deity would often have an adversary, or at least some wicked competition, who would use enticement, beguilement, and seduction to lure us off the correct path, although we would still be to blame for leaving the path.
    Now we have conditioning and persuasion down to something of a science. Behaviorism gave us the tools for classical and operant conditioning, which still work pretty well, even if there is a subjective side to sentience after all. The exploitable cracks in our evolved cognitive heuristics are well mapped, especially in subliminal work. Both illogic and rhetoric, both convincingness and persuasiveness, can be applied formulaically. Loaded or emotionally charged words fill the sound bites that make up the news seen by the masses. Buzzwords, triggers, primers, and anchors are all standard tools in the kits we learn in college. Whoever is working up a science of countermeasures for purposes of inoculation, deprogramming, and self-reprogramming is almost certainly working outside the status-quo political, theological, and economic systems.

Classical and Operant Conditioning
    A discussion of unlearning can’t avoid summarizing a general theory of learning. In classical conditioning, a behavior becomes a reflexive response to an antecedent stimulus. In operant conditioning, a behavior emitted after an antecedent stimulus is followed by a consequence, given as a reward (reinforcement) or a punishment. In social learning, an observation of behavior is followed by modeling or replicating, and new behaviors are acquired by observing and imitating others. The home of all three of these conditioned behaviors is implicit memory. Our responses to stimuli, literally re-minding us of what we’ve learned, will usually occur prior to any declarative thought that rises into our awareness.
    Classical conditioning is learning to make an association between a salient biological (unconditioned) stimulus and a previously neutral (conditioned) one. The association finds its way into implicit memory and emerges into awareness when summoned by either of the pair. This is how Pavlov’s dog trained him to ring the bell. It might tie the sight of a national flag to an idea of freedom, even in nations where freedom is discouraged. It might tie the consumption of a special brand of beer to pictures of heaving bosoms. Some of the most effective unconditioned stimuli are cultivated or triggered fears and insecurities, which are then paired with promises, as of immortality, of salvation, of the vanquishing of real or imaginary enemies, of what a new car can contribute to lasting happiness. The associations are gradually extinguished when the conditioned stimulus is presented in the absence of the unconditioned one. The process of creating connections between separate phenomena occurs automatically in the brain, when the phenomena are paired together in experience. Propaganda, proselytizing, and advertising pair them deliberately. One way to find these is to look for two associated things that are unrelated in reality, like war and freedom, or money and spiritual salvation, or cigarettes and horsemanship.
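    For readers who like to see the mechanism spelled out, a standard textbook formalization of this strengthening and extinguishing of associations is the Rescorla-Wagner model. The sketch below is mine, not the author’s, and the parameter values are arbitrary assumptions; it only illustrates how pairing drives an association up toward a ceiling, and how unpaired presentations drive it back down:

        # A minimal sketch of association learning, using the textbook
        # Rescorla-Wagner update rule: dV = alpha * beta * (lam - V).
        # V is the associative strength of the conditioned stimulus; lam is
        # 1.0 when it's paired with the unconditioned stimulus, 0.0 when alone.
        # alpha and beta (salience/learning-rate parameters) are illustrative guesses.

        def rescorla_wagner(trials, alpha=0.3, beta=1.0):
            """trials: sequence of booleans, True = stimuli paired on that trial.
            Returns the associative strength after each trial."""
            v, history = 0.0, []
            for paired in trials:
                lam = 1.0 if paired else 0.0
                v += alpha * beta * (lam - v)  # strengthen toward lam, or decay
                history.append(round(v, 3))
            return history

        # Ten pairings (acquisition), then ten bell-alone trials (extinction):
        print(rescorla_wagner([True] * 10 + [False] * 10))

    The strength rises quickly, levels off, and then decays once the pairing stops, which is the gradual extinction described above.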
    Operant or instrumental conditioning modifies the strength of a behavior using rewards (reinforcement) and punishment. Positive and negative are used a little confusingly here to refer to the presence or absence of a stimulus, not to the relative pleasantness of an experience. Reinforcement, given to increase a behavior, may be positive, as when you are given a treat for being a good boy, or negative, as when you are spared an aversive outcome because you cooperated with the authorities. Punishment can be positive, as when you get your face slapped for saying that thoughtless thing, or negative, as when you are denied both dinner and the thing you thought you’d get by throwing that tantrum. Extinction of a response in operant conditioning is brought about by removing the reinforcement that’s been maintaining the behavior in a conditioned state. Systematic desensitization (or exposure therapy) is a form of counter-conditioning, yielding diminished emotional responsiveness to an aversive stimulus. This is used therapeutically in diminishing aversions, phobias, and anxieties. The unpleasant stimulus is repeatedly presented in gradually increasing degrees of intensity or salience, but always absent the threatened ill consequences. When positive stimuli lose their impact or luster due to a similar overexposure, it’s sometimes called acclimation, or the hedonic treadmill.
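    The positive/negative and reinforcement/punishment terms sort every example in the paragraph above into one of four quadrants. A toy sketch of that grid (mine, not the author’s), with the sign tracking the stimulus rather than the pleasantness:

        # A toy classifier for the four operant-conditioning quadrants:
        # "positive" = a stimulus is added, "negative" = a stimulus is removed;
        # reinforcement increases a behavior, punishment decreases it.

        def quadrant(stimulus_added: bool, behavior_increases: bool) -> str:
            sign = "positive" if stimulus_added else "negative"
            kind = "reinforcement" if behavior_increases else "punishment"
            return f"{sign} {kind}"

        print(quadrant(True, True))    # a treat for being a good boy
        print(quadrant(False, True))   # an aversive outcome withheld for cooperating
        print(quadrant(True, False))   # a slap for the thoughtless remark
        print(quadrant(False, False))  # dinner and the tantrum's prize both denied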
    Social learning acquires new cognitive schemas and behavioral scripts by observing and imitating others. This is most effective when we’re copying role models or high-status individuals. We have networks in the brain that simultaneously engage both sensory and motor representations of things we perceive. These may or may not involve specialized neurons called mirror neurons, but they do employ what may be called mirror neuronal networks or circuits. When we merely observe a particular action being performed, our motor circuits are also learning how it is done. This feature also draws us into media representations as more than distanced observers. We participate in the drama onstage. We can almost taste that beer, the drinking of which will win us a night with that woman and her heaving bosoms.

Persuasion, Public Relations, and Advertising
“You can sway a thousand men by appealing to their prejudices quicker than you can convince one man by logic.” Robert Heinlein
    Persuasion is the act or art of influencing another person. It might change what they feel, how they react, how they frame their perceptions, what they believe in, or how they behave. Contrary to many academic theories, this is seldom accomplished with words, ideas, and reason by themselves, and it’s always accomplished with some affective component, or some play to the feelings and emotions. We are easily played, especially through our personal and social insecurities. Politics, religion, and advertising are home to the more insidious forms, using fear mongering, the elicitation of anxieties, the irrational proportionality of our risk perception, the fact that we normally fear pain even more than we seek pleasure, our aversion to cognitive dissonance, and the fact that we are willing to deceive ourselves in order to seem better than we are, to ourselves or to others. Disingenuous persuasion can employ any type of anticognitive in any mental domain. It doesn’t require logically specious argument. We see everything from inflammatory rhetoric to flattery in use. Even while suggesting that a person can do better than they are doing in some way, persuasion will often make use of a subject’s self-esteem, our normally too-high opinion of ourselves, because that’s such a highly qualified information filter, in our humble opinion. The confidence game played by the con artist manipulates others using their own confidence, combining this high self-esteem with the mark’s wanting to pull off something clever or savvy and beat the system. But the persuasion game can also be as straightforward as implying someone will have more sunset sex on the beach if they would only drink a particular beer.
    There are positive and otherwise useful forms where we benefit from being persuaded, or we benefit others. With seduction, much depends: boy howdy, is that a crapshoot. Education has the potential to be persuasion’s highest form, if we can ever learn how to do it properly. Influencing others means recruiting their emotions into their perceptions, especially using self-schemas or social pressures. Some kind of personal relevance must be used. Relevance doesn’t even have to be immediate, if the one being educated can see value in deferred gratification.
    The development of the art of rhetoric by the Greeks took persuasion to a more explicit and artful level, where before it was largely accomplished through storytelling, particularly religious myths, or by threats of harm to life, limb, wealth, or liberty. Through the ages, the storytelling form would continue to vie quite impressively with that upstart reason. The power of pictures, each worth a thousand words, ought to be mentioned here as well. This goes clear back to the cave paintings and maps scratched in the dirt. Persuasion was taken to another new level with the advent of psychology. Freud’s nephew, Edward Bernays (1891-1995), is generally credited as the father of public relations, or the father of spin. With his methods, those in the know could manage the herd mentality, using crowd psychology and the fruits of psychoanalysis, to herd the people in the desired directions. While religious proselytization is considerably older, Bernays’ ideas still underpin much of today’s political propaganda. Appealing first to leaders, or simply hiring them to give testimony, was the quickest way to reach the most people. These were people of known success, competence, charisma, or prestige, which plays to a social instinct we have, to mimic our superior examples. Describing what he would later call “the engineering of consent,” he wrote, “If we understand the mechanism and motives of the group mind, is it not possible to control and regiment the masses according to our will without their knowing about it? The recent practice of propaganda has proved that it is possible, at least up to a certain point and within certain limits.” Tristan Harris, in pointing out the use of psychology in technical persuasion, called it “a race to the bottom of the brain stem.”
    The most frightening and discouraging thing about persuasion in today’s culture is the fact that it’s so pervasive because it’s so damned effective. And the human beings that make up society as a whole remain largely unaware of being manipulated, or at least largely under-offended by the fact. Developing an immunity to insidious forms of persuasion requires a great deal of both subtlety of perception and vigilance. Sometimes it can help us to feel a little offended, indignant, or insulted at being targeted (despite what Buddha said about equanimity being the higher state). This would at least hypersensitize us to these devious efforts to manage or control us, even though many of the efforts will remain, by design, too subliminal to see.
    The persuasive power of scientific culture and its journalism is especially problematic. Not only are researchers prone to error and logical fallacies in sampling, experiment design, and drawing their conclusions: those reporting them will usually introduce a new level of inexpertise, and seem especially ill-trained in basic logic, especially false dilemma and false cause. They will tend to think one-dimensionally, confuse hypothesis with confirmation, reify abstracts, and load the headlines with clickbait and hooks to get the reader reading. Few readers complete the articles about the papers, and fewer still will follow these to the papers themselves, especially when the papers are locked up by for-profit journals and their paywall scams.
    Trickery in the use of language is a primary tool and vehicle developed in public persuasion. Government is famous for it. Bafflegab, pretentious or incomprehensible or twisted language, pervades bureaucratic jargon. The US Forest Service term for massive clearcutting of a natural forest is “intensive even-age management.” Wild animals and plants are stocks and resources. Proof by verbosity (argumentum verbosium) is argument by intimidation, and includes obfuscation and pseudo-profundity, such as we see in pseudoscience. Most of the new age gurus who throw the word quantum around are likely not even versed in high school algebra. Euphemism is common in military reporting, to sanitize the picture that truer words might evoke. Hyperbole, or just simple exaggeration, appeals to a desire to be deeply impressed before we’re willing to learn something. Words can de-stress or dismiss as well as inflate or exaggerate. Implication and innuendo can play to an unwillingness to admit to imperfect understanding. Loaded words and leading questions are seen widely, in and out of court. Dysphemism will substitute a derogatory term for a neutral one, such as loony bin for mental hospital, and most racial or ethnic slurs. Distraction, misdirection, red herrings, and non sequiturs will derail already questionable trains of thought. Framing, as discussed in four forms earlier, is an all-important trick, making reframing skills a must-have for defense.
    Products aren’t sold with rational appeals. If they were, people would be buying the lower-priced generic products and not paying all that extra for advertising. Products are associated with desired emotional states, or else the avoidance of unwanted ones. New needs often have to be created out of hints and innuendos. There are implicit promises: you’ll be rich, attractive, sexy, powerful, happy, in the know, and ever so special. You do have to hit those buttons: breaking down on the freeway is a fearful thing, so you must buy a new car every three years, both to stay under warranty and to maximize your financial losses to depreciation. Repetition, priming, and the availability heuristic, all described later, play big parts. Nudge theory holds that mandates and other extremes aren’t necessary - all that’s needed is just a minimum push at the right time and place. That helps to keep persuasion invisible or subliminal. Zimbardo offers some things to look for with regard to persuasive procedures, like those beginning with a subtle foot in the door and ramping up in increments. One increment might be to get some sense of indebtedness going with an initial token offer or favor. Things like habit and brand loyalty might take it from there.
    Studies have been done that correlate increases in our incomes to self-rated happiness (how much happiness does more money bring). The curve looks like you might expect: having more money to spend does a lot at low income, and very little as we move from millionaire to billionaire. Unsurprisingly, at least to those who pay attention, the best bang for your buck is had around the poverty line, when real needs can be met and spending becomes truly discretionary. These facts have to be hidden. You need to make more to spend more, because that’s what makes the world turn.
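    The shape of that curve is commonly reported as roughly logarithmic: each doubling of income buys about the same bump in self-rated happiness, so the marginal value of a dollar collapses as income grows. A toy illustration of that diminishing return, assuming the logarithmic form (the units are arbitrary, and none of this is the studies’ actual data):

        # Diminishing returns under an assumed logarithmic income-happiness curve.
        import math

        for income in (10_000, 20_000, 40_000, 80_000, 1_000_000, 1_000_000_000):
            happiness = math.log10(income)                     # arbitrary units
            marginal = math.log10(income + 1_000) - happiness  # value of one more $1,000
            print(f"${income:>13,}: happiness {happiness:.3f}, next $1k adds {marginal:.6f}")

    An extra thousand dollars registers strongly near the poverty line and is nearly invisible to the billionaire.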
    Persuasion requires the capture of our attention, the engagement of our awareness, so that the desired associations can be wired or programmed into our minds. Attention is the currency. It’s expensive, but we pay it. To the advertiser, the propagandist, and the missionary, our attention is what they buy when they sponsor the entertainment, the bread and circuses, the grand revival. This is what terrorists get for their efforts, too, and for the lives of their martyrs: the rebroadcast of news reports, and the fear those reports spread, have precisely the effect that terrorists want. That’s why they’re called that, duh. Once again: bin Laden won his war against the Great Satan the moment the Patriot Act was signed and a terrified America lost the last of its perspective.

Ideology, Indoctrination, and Propaganda
    The greatest portion of our conceptual reality is socially and consensually constructed. Personal adoption of significant chunks of this isn’t voluntary for anyone who elects to belong to the society. Portions are given by fiat and others are never spoken. And portions are arbitrary, made up out of thin air. Money and property exist because people believe they exist, and support them with confidence and consequences. We perform acts of legislation by announcing that we’re performing them. We even end a felon’s life with a sentence. A government doesn’t exist until it’s constituted, nor a party until a platform is written, nor a religion until a sacred text is written or holy words recorded or memorized. These develop into integrated systems of thought, or ideologies. These are collective intentions, but not collective consciousness. They compete for dominance, for the hearts and minds of the people. In this competition, some degree of tolerance and diversity will have to be preserved to save a structure from its own over-rigidity, but beyond some kind of line, digressions and deviations must be put down or suppressed as a threat. They can say who has virtue and what has value, and they will often do this poorly. For reasons already mentioned, the personal identification with an ideology, where it forms the structure of one’s thoughts, is a lot more pernicious than an arm’s-length relationship. Where these are attached to sacred values that can’t be questioned, or when they call upon us to kill or be killed, we might start remembering that ideologies and their rules aren’t living things. They aren’t injured by analysis like those who attach themselves to them can be.
    The recent postmodernist movement is an overreaction to this phenomenon, taking a sometimes skeptical, more often cynical view towards any modern progression of tradition, any grand narrative, any celebration of objectivity, reason, or truth. Everything is now relative to perspective, interpretation, context, upbringing, or construction. In effect, nothing is true, everything is permitted in the cognitive world. Meaning is purely what we make of it. This often degenerates into an equalitarianism, where everyone’s truth is the equal of everyone else’s, a microcosm of cultural relativity in all its epistemic, doxastic, and moral forms. Had these thinkers rested about a third of the way into this way of thinking, with a reasonable and warranted skepticism towards developing cultural traditions, more public good or benefit would now be sprouting from their philosophy, and less pointless existential angst. And less bad art and architecture.
    Much of our cultural heritage, just like much of the individual’s stock of knowledge, is only an assemblage of experience presented fairly randomly over time, the data that happened to be available, collected with whatever tools were at hand. This is known to some as the adjacent possible. Inventors use materials from their environment. One classic example is Gutenberg’s adaptation of a wine press. Similarly, the mind gets cobbled together out of available experience. This can be called a bricolage. Urban legend or not, it was the distance between Roman chariot wheels, derived from measuring horses’ asses, that determined the gauge of railroad tracks. This could describe a nation’s reluctance to cross over to the metric system. Yet somehow, this haphazard aggregation pretends to be orderly, meaningful, and coherent. Ideologies become affixed to rather arbitrary symbols, icons, flags, or religious fetishes, and these arbitrary symbols acquire a deadly seriousness. Cultural software, like our languages, has not evolved with a purposeful intent, or with much knowledge of what would represent the best or optimized fit with our evolved neurological structures and functions. This may be the best reason to not hold any part of it as presumptively sacred, but to choose or select, like evolution does, whatever is fit to survive and reproduce. One key to building a better mind is to optimize what’s made available to it, with some special regard and concern for early foundations.
    An ideology is a comprehensive and coherent system of beliefs that can be formulated as a doctrine. Taken to stiffer degrees, doctrine becomes dogma. Even when complex, it will attempt to be internally consistent and will be made to provide ready answers to a large array of questions spanning several aspects of life. It’s sold as a package. The inculcation of an ideology can be called indoctrination when it’s narrowcast to a student or class of students, or propaganda when it’s published or broadcast through media to a people at large, or proselytizing when it’s taken door-to-door.
    An ideology can also be a model or a paradigm of how things ought to run. One of the most pernicious and pervasive (and illustrative) examples is the growth-for-its-own-sake paradigm, what Edward Abbey called “the ideology of the cancer cell.” This is embraced within the larger phenomenon of human parasitism, which also encompasses overpopulation, over-consumption, and species exceptionalism, where humans take and take and give nothing back. In economics, this is a Ponzi scheme, debt-based, and headed for runaway inflation and burst bubbles when the heirs to the debt have had enough. Costs will need to be hidden or externalized, books cooked, and debts renamed as unfunded liabilities. The contrasting model will be the old-growth or climax ecosystem, where growth and decay are in balance, and a relative long-term sustainability has been achieved.
    Politics is a heavily loaded subject. At bottom, it’s a bunch of competing theories about the rights and duties of a people, what powers they give to the governments they create to secure or enforce these, the delegation of other specified tasks to artificial or corporate entities, what provisions they make for an economic system, and how they will provide for posterity. The trouble arises when these theories are bundled into comprehensive and competing ideologies that are then taken up by parties and partisan politics. False dichotomies ensue. There are, nonetheless, some natural political dichotomies that arise out of basic human temperaments. There’s a distinct polar axis from conservatism to (classical) liberalism, and another between authoritarianism and (classical) libertarianism. The (classical) had to be inserted into these axes, because name calling and name play make names a frequent casualty of running partisan battles, and meanings can suffer much in the process. But the axes here are actually spectra. There are no hard lines or wide aisles that separate a rigid binary in either. There is a growing body of evidence that points to highly significant neuroanatomical contributions to which way a person will lean, often with particular reference to the amygdala, but it’s still identification with an ideology that cements these highly polarized chasms or schisms into place.
    There are reasons, places, and times to be conservative, as in maintaining what merits maintenance. But those who lean strongly conservative tend to have problems with change, diversity, and ambiguity, often amounting to anxieties or fears, until things can be simplified and some closure attained. Conservatives are more prone to the emotion of disgust, and to associating this with others not in their well-identified in-group. Liberals are more useful when things need to move forward, when risks ought to be taken, when the predictability of things predicts less pleasant outcomes than are necessary. Where attributional styles are concerned, conservatives may look to natural character for behavioral causes, and will find too little of it in those in need of correction or punishment, and perhaps find more character than deserved in their own. Liberals will tend to explain more of human behavior in terms of environmental influences and contexts, but as a consequence, may fail to hold many people accountable for a willful weakness of character. Jonathan Haidt (2003) finds moral behavior distributed more fairly between conservatives and liberals than most partisans care to admit. It will be fairly intuitive where the strengths of each lie across six axes: care vs harm, fairness vs cheating, loyalty vs betrayal, authority vs subversion, purity vs degradation, and liberty vs oppression. According to Gail Saltz, “liberals tend to have a larger anterior cingulate gyrus. That is an area that is responsible for taking in new information and the impact of new information on decision making or choices. Conservatives tended on the whole to have a larger right amygdala... a deeper brain structure that processes more emotional information - specifically fear-based information.” So it’s really responsible for the fight or flight or fright response. The correlation is high, with better than 2/3 predictability, but not absolute. What the studies don’t answer is how much of these differences are due to either epigenetics or early brain development. Conservatives will be easier to manipulate with fear or insecurity, and liberals with the seductive power of new or untried ideas. It’s often a question of loyalty, stability, and religious belief versus diversity, change, and scientific inquiry.
    Authoritarians have less trust in the native goodness of human nature, and a stronger sense of what our nature should be molded into. This encounters its greatest problems when the nature they want is so alien to the nature that is, that disobedience or rebellion runs amok. Classical libertarians, back when the word referred more to the sovereignty of individuals and the duties that liberty’s exercise teaches so well, are more sanguine about human nature, provided that natural consequences follow from the exercise of freedom, enabling firsthand learning to take place. There is no better teacher than the consequences of our choices, provided we cannot evade them. Authoritarians want to take fewer chances: the law will be the law, for everybody, and will be based on worst-case scenarios, period. Libertarians are more apt to follow a situational ethic. Of course, those in positions of authority are known to take some liberties with that, and the libertarians are known to adhere to unreasonably absolute and inflexible rules of conscience and will tend to demand this from others. Real authority, however, is for authors.
    Like politics, religion embraces a whole range of human concerns and attempts to satisfy a number of needs with one-stop shopping. The needs that it addresses are natural and innocent enough, but as it becomes doctrine, ideology, and then dogma, it tends to get ambitious and opportunistic, and wrap increasingly higher percentages of human existence into its domain, bundling parts together that really don’t need to be taken together. It really isn’t necessary to have all of the things religion purports to do all bundled up in a single package. The problem isn’t with the more innocent, original nature of religion, but with who has done this bundling and why. Most people seem to be comfortable with excusing religion its failings, but the seriousness of the problem is pointed out by Steven Weinberg: “With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion.” And by Voltaire: “Those who can make you believe absurdities can make you commit atrocities.”
    On the explanatory side of things, people use religion to satisfy a natural need to feel less small, helpless, confused, insecure, lonely, misunderstood, and mortal. They want lots of answers, but at a minimum, they need enough roughly plausible explanations to permit them to move forward with their lives. Why do bad things happen to good people? Why do our wishes only sometimes come true? Why do sweet, nearly-perfect children have to die? What’s the point of being good if I myself am just going to die? Why should anyone be good? Where is justice and where are the rewards? Why do people who do evil escape punishment? Where is the moral template we all should follow? Ritualized behaviors are formulated to alleviate the anxiety of any unanswered questions and then attached to systems of belief. Because the superstitions, fears, and insecurities involved in these unknowns also arise from dark and mysterious places in the mind/soul, they really aren’t all that amenable to reason. The rational mind is too new in our evolution to deal with such problems. We need something at least as ancient as our current species is: storytelling, myths and legends, told around the campfire, at night, and maybe with some drumming, dancing, and chanting. Human religious beliefs should never, ever pretend to be anything more than primitive and irrational. Once they begin to do this, we’re in trouble, especially once they take hold of lethal weaponry and take their God's Love to the enemy.
    Some people are still poking around in the brain looking for a religion module, something akin to Chomsky's universal grammar module, but with even fewer millennia of time to evolve. For an alternative, Stephen Jay Gould asserted that religion was an exaptation or a spandrel, that religion evolved as a byproduct of psychological mechanisms that evolved with quite different functions. There was a handy set of available places in the brain to put together some software exploits. Religion is an ideology that exploits or recruits a number of native brain functions well enough to have tricked some of us into believing it’s an instinct. The Perennial Philosophy, which asserts in essence that all religions are saying the same thing, at least in their source or core ideas, is a great disservice to human culture. Instead of a common spiritual substratum, we have a handful of evolved cognitive processes, heuristics, modules, rewarding behaviors (drumming, dance, song), narrative, pattern and agency seeking, phobias, conformity needs, transcendence needs, etc., that lend themselves to emotionally rewarding combinations, and these combinations can be readily exploited with cultural software (like spandrels). There are neural substrates to what is termed religious experience, and these are connected to reward systems in the brain. The number of combinations is finite and the cultural exploits would show considerable convergence, giving a false impression of a larger, universal, evolved system. But we haven’t had time for that kind of complexity to get locked in genetically. Language might be a little farther along than this, with the spandrels conferring a lot of survival value, so that some greater connectivity across the brain would be reinforced over many millennia. But the spandrel model still holds for the language instinct as well. Evolutionary explanations for the persistence and near-universality of what is commonly termed religion need to better account for its diversity, for the variety of its individual components, as well as its common threads.
    On the experiential side, religion is supposed to inhere in experiences that have no antipathy: wonder, kindness, fellowship, compassion, forgiveness, reverence, love, gratitude, patience, equanimity, and peace of mind. Further, many of us have native higher-order needs for unitive or oceanic experience, altered states, ecstasy, infinitude, and transcendence. Buddha never saw a necessity for an underpinning of metaphysical belief to enter into any of these states. Not only were such beliefs extraneous, they were absolutely not to be trusted, because unsettled minds just make shit like that up, in desperation to settle themselves down. This has nothing to do with truth or trustworthiness. His charge was simple, and free of doctrine other than method: “You should train thus: We shall be wise men, we shall be inquirers” (MN 114).
    The signs that a religion has deviated from its more innocent intentions are pretty obvious, except to those on the inside. Problems are invisible from the inside, in part by design. The same happens with hypocrisy, which is also one of the warning signs. It’s a short step for religion to become a means of social control, demanding submission to authority, perhaps implying that all church authority is merit-based. An exclusivity of the in-group is another sign, requiring some form of social isolation from those outside the group and a shunning of infidels and apostates. Claims of infallibility and sole access to the Truth are fairly quick to appear, accompanied by internal censorship and control of the universe of discourse. These claims might even be accompanied by advice to keep questioning until the truth of the church’s assertions is proven, but often offered with smug-to-subtle condescension, suggesting that acceptance is only a matter of ripening experience or growing up. This is not the same as advice to pursue authentic inquiry. And sometimes we see reason, evidence, and education positioned as the enemies of true Faith, perhaps furnished by the Lord of Darkness and Lies. We are also likely to encounter proselytizing promotion and aggressive recruitment, with the fundraising efforts following closely behind. One would think that practicing a religious discipline that truly offered a better life, a stronger ethic, and a more authentic happiness would be enough to attract new members who were simply following good examples of living well or skillfully.
    There are developmental benefits to religious experiences, at least insofar as they satisfy our original needs. There are advantages to forming in-groups of like-minded people, congregations, communities of shared feeling meeting on a regular basis to share a sense of belonging. This part only becomes toxic when the non-members are vilified, dismissed, discounted, or disrespected just for not being members. A curiosity to explore alternative states is usually tempered by a social urge to not do this alone (and not go to scary extremes). There is little evidence that religious affiliation contributes much of anything to moral or ethical behavior. It’s been known for some time that atheists are generally better behaved, and they land in prison less frequently, at least on charges other than atheism and witchcraft. The religious can be downright morally repugnant and nasty to infidels. The need for a consensual secular ethic is in no way diminished by any religious alternative. We can always append an understanding that having some stricter in-group religious moral standards only confers rights to smugness and expedited entrance to paradise, not a right to legislate morality for others. There are solutions to the problems attendant on religiosity that still allow a seeker to attend church in good faith. We can take a cognitive step back from metaphysical belief, particularly from conflating our self-schema or identity with this belief, while still retaining the full depth of emotion and feeling. The two, in this case, are separable, but this assertion will never, ever be found in the Catechism. This may, of course, require keeping your damn mouth shut after the services, or speaking only those words you find true.
    In broader cultural views, the main point of public education is to increase the survival prospects or fitness of the group, transforming young adults into useful socioeconomic participants and resources. Some group or subset of the culture gets to decide what it’s necessary to agree upon, and what diversity might be acceptable. Indoctrination is most widely presented in a classroom situation, in Sunday school, and in some youth organizations. It’s part of the training to become a functioning member of a society. In Ivan Illich’s words, “School is the advertising agency which makes you believe that you need the society as it is.” In some cases, a choice of schools, churches, or clubs offers some variety of generally acceptable versions of the larger consensual reality, but few of these will teach open rebellion. Textbooks are often adopted by local and parochial entities, mirroring those worldviews and suppressing unofficial histories and rival points of view. Power over content shifts from teachers to school boards, and those who elect them. Political and religious slant is touted as objective. Kids learn about an idealized constitution, not what precedent, or stare decisis, has diminished it into. The chapter on how a bill becomes law says nothing about how the legislators are bribed. Right answers to quiz and test questions are reinforced with rewards; wrong ones, or even those just outside the box, meet with disapproval or shame. If parents had more time, money, and energy, or truly grasped how important these formative years really are, the system would change.
    Propaganda may be thought of as social engineering, using media, playing to the human capacity and motives for believing. The masses are aroused and moved around like cattle with the simplest movements and signals, and they seldom seem wise enough to see through it. Political power is the harnessing of their energy to serve the leaders’ goals. What must be harnessed is public opinion. This won’t necessarily require moving the majority, other than the majority of those who vote, and otherwise the most vocal and active of the crowd. Jacques Ellul writes in Propaganda: “The great force of propaganda lies in giving modern man embracing, simple explanations and massive, doctrinal causes, without which he could not live with the news. Man is doubly reassured by propaganda: first, because it tells him the reasons behind the developments which unfold, and second, because it promises a solution for all the problems that arise, which would otherwise seem insoluble.” Key to moving the masses around at will is the simple explanation (unless choices themselves are simple). This includes buzzwords, euphemisms, stereotypes, bogeymen, triggers, and scapegoats. In Manufacturing Consent (1988), Edward S. Herman and Noam Chomsky proposed five editorially distorting filters that make mass media serve power: owning the medium, managing the funding, managing the news sources, creating flak or opposition to certain kinds of news, and using terms of fear and enmity.
    No discussion of propaganda would be complete without noting the father of public relations, Edward Bernays, whose ideas may be summed up in just a few of his own statements: “The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society.” And also: “In almost every act of our lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons [...] who understand the mental processes and social patterns of the masses. It is they who pull the wires that control the public mind, who harness old social forces and contrive new ways to bind and guide the world.” However, Bernays had little problem with perceiving this as an ought and a social good, and encouraging the puppeteers to keep pulling those wires.
    Bernays begins a transition from found ideology to persuasive ideology, and around his observations have developed many of today’s methods for brainwashing, indoctrination, programming, and advertising. Ideological division or exclusion (an aspect of divide and conquer) is another key. Jeremy Frimer writes that “ideologically committed people are similarly motivated to avoid ideologically crosscutting information…,” and not merely to avoid the unpleasantness of hearing an opinion they dislike: “rather, people on both sides indicated that they anticipated that hearing from the other side would induce cognitive dissonance (e.g., require effort, cause frustration) and undermine a sense of shared reality with the person expressing disparate views.” When we identify with an ideology (I am this) rather than simply favoring it (I like this), then any challenge to the ideology has to be taken as a personal threat and thus defended as if your identity depended on it. You then have no choice but to double down on your stupidity, your amathia, your refusal to learn, or willful ignorance.
    The language suffers when propaganda is propagated through the media. We’re familiar with Orwell's Newspeak: War is Peace; Freedom is Slavery; Ignorance is Strength. Acts of war, partly to avoid constitutional regulation, become police actions, protective reaction strikes, or pacifications. Enemies get new names to arouse reactions of disgust, which actually form in the same parts of the brain as reactions to rotten food and excrement. Weasel words are those whose meanings are sucked out by abuse and overuse. We lose the true meanings of words like sustainability and might even speak of sustainable petrochemistry. Most have no idea how serious this word is, or the damage that we do to culture when we use it to greenwash modest improvements in unsustainable products.

Us-Them, Social Consensus, and Weltanschauung
    We’ve clearly evolved as social animals, with hierarchical, territorial, and tribalist components. We have comparatively fewer and weaker incentives to cross territorial and tribal boundaries unarmed or unprotected. Prehistorically, we’ve long had an imperative to import and export mates of one sex or the other, in order to maintain our genetic diversity. This need likely set the stage for an intertribal trade in goods, tools, and information as well. We still don’t know a lot about when either of these began to turn so violent, as it can be with chimps, but there’s a good chance that this increased with successful adaptation and the subsequent pressures of population growth on territories. We’re now learning more about the neuroscience of in-group vs out-group perception and behavior, and that there are some biological realities to our problems with which we ought to come to terms, and for which we really, urgently, desperately need to learn cognitive software workarounds.
     In short, we too often tend to cognize those who aren’t our allies into sub-humans, or at least something less than us. We use metaphors, triggers, and associations of unpleasant experience, conflating the literal and metaphorical, and recruiting the brain’s insula, with its sensations of loathing and disgust. Those neighbors over there are morally and hygienically filthy. They are sub-human and smell bad. Both Robert Sapolsky and Philip Zimbardo explore these themes at length, and offer ways to start making our implicit biases better understood and managed. We also need better perspective taking. We need to train ourselves to view outsiders as members of some of the other groups that we all belong to, like hard-working people, or mortals, or fellow hominins, or Terrans. We can find things we have in common from different fields of interest. This helps us to individualize each other, put more specific, less stereotypical faces on each other. It’s not really in our nature to avoid Us-Them issues and problems entirely, and that might even backfire, giving us less of our needed in-group cohesion, but we can certainly strip them of much of the insecurity, anxiety, hatred, and fear. We still want to develop the well-being of Us, but without tearing Them down.
    It’s becoming popular now, and ever so politically correct, to parrot that there’s no such thing as race, or that nothing more than our skin pigment is involved, and that genetic variation within the so-called races is greater than between races (which is true). This is an example of an ill-conceived Us-Them workaround that won’t face the facts. We’re still trying to think of ourselves as extra special and ever so different from animals. We wouldn’t shop at a pet store that told us there was no difference whatsoever between a Doberman and a border collie, or a Persian and a Siamese cat. We do know that if we backed off of our artificial selective pressures and let these animals interbreed, as they would in the wild, they would, within a few generations, return to something much closer to the original dogs and house cats. A more useful approach to subspecific or pre-speciation genetic differences is to look more closely at race, and the wide, very-fuzzy lines that set them apart, as successful adaptive strategies, largely to ecological niches, each of them with characteristics almost certainly worth keeping and integrating into what we are to become as a more highly evolved species. We can also assume that globalization and intermarriage will eventually erase these adaptations, unless this civilization collapses.
    One of the best Us-Them workarounds is in reframing our social identity, expanding our little patriotic and racial mental playpens. I try to regard myself as a Terran, more than as a Human, an American, or a Coloradan. The whole Earth is our ultimate common ground and terra firma, and people everywhere have more in common than they have things that divide them. This leaves any enemies I might have as potentially a part of my own group, where they can be looked at as individuals rather than as stereotypes: mostly just evil humans, with names, and their minions. Why not make dictators, murderers, and rapists the out-group, and not those damned Canadians? Some of my best friends are Them, and not just famous or celebrity Thems. Some are worthy opponents. This isn’t really a pseudo-kinship or just a conceptual reworking of boundaries, either. It’s the real, original deal, an expanded view or framing of us, with all of our relations, Mitakuye Oyasin, as the Sioux say. To learn better tolerance of outsiders, we might also look to those peoples in whom it’s growing, notably Northern Europe (and Spain’s doing better), though we have to acknowledge that much of this tolerance came about the long way, along a very bad historical road. Mark Twain had this advice: “Travel is fatal to prejudice, bigotry, and narrow-mindedness, and many of our people need it sorely on these accounts. Broad, wholesome, charitable views of men and things cannot be acquired by vegetating in one little corner of the earth all one's lifetime.” This did not seem to help him much with the French, however. The kind of travel that works best here, at least to some extent, is going native, not dragging along an expensive, protective shell of the familiar, the way the tourist industry wants you to do it. Tourism profits most from your being just a little afraid of the places you’ll go, so you’ll stay in hotels just like the ones you have at home.
    From clannishness to xenophobia, we seem to come biologically primed or ready to view those outside of our group with suspicion. It’s far too easy to forget that our treasured native memberships in nation, race, gender, religion, and socioeconomic caste are nothing more than accidents of birth. In many cases, particularly with ideologies and religions, our identity is a kind of Stockholm syndrome that begins with our native geography. There’s nothing inherent in any of those identifications to be proud of, and nothing has been gained there by character, merit, or achievement. It’s more than a cliche that we have trouble recognizing or distinguishing out-group individuals by their appearances, especially those of less familiar races. We also have trouble seeing individuality itself, but this phenomenon has a bit more of a cultural component. These can be overcome, and many of us are slowly getting better at it, but the skill is only learned where it can be taught. The betterness of us than them, our parochial exceptionalism, has to go if we are ever to unlearn war. And we especially have to quit relying on defining our own boundaries as the places where those of our enemies end. When you need enemies for your sense of identity, you will have to make enemies.
    In-groups also tend to use denial to distance themselves from malfeasance by their own members, to keep from embarrassment over their own group’s misbehavior and hypocrisies. My country, right or wrong, love it or leave it. Given the codes of silence and solidarity in law enforcement, it’s surprising that the gated Internal Affairs departments even exist. It’s not surprising that law enforcement so often fails to discipline those who murder unarmed civilians rather than protect and serve them, and will often stand united in support of these thugs. Taking a stand against your own nation’s militarist policies can be denounced as treason, and apostasy against a religion you've left, denounced as heresy or witchcraft. It has required many centuries of uphill struggle, often unsuccessful, to wrest both of these acts of conscience or autonomy from the death penalty, and many nations continue to backslide on these issues, always thanks to true believers in ideology and religion.
    One of our biggest problems will combine the individual’s assumption of relative helplessness or powerlessness with a largely unconscious assumption that the group as a whole knows where it’s going or what it’s doing. It will assume that things will work out for the best without concerted individual efforts. But there is no such thing as group mind. If individuals don’t speak up, or blow whistles, groups have no conscience, either. Corporations will concentrate on profit, and governments on their metastasis. The theologian Reinhold Niebuhr, of Serenity Prayer fame, wrote, “The group is more arrogant, hypocritical, self-centered, and more ruthless in the pursuit of ends than the individual.” Yet, in recovery groups, most members focus on the “serenity to accept the things I cannot change” part and ignore “the courage to change the things I can, and the wisdom to know the difference.” Because those two parts are harder?
    Social consensus constitutes a huge part of the human worldview. This is largely a cultural construct of how the world is or ought to be. It forms the context for most of our perceptions, emotional, cognitive, and social, out to the boundaries of the social universe. The constructs contain paradigms, postulates, and axioms, and any rules that might be socially prescribed for the individual within it. There are strong linguistic components as well, to some small extent in grammar, but greater in vocabulary. Worldview also tells the individuals where they belong within it, often to settle existential anxieties. This is broader than a philosophy, or an ideology, and less often questioned, except by young adults, who may still be deciding what software they want to install in their prefrontal cortices. Worldviews do remain soluble throughout life in elucidogens, accounting for both the attractiveness of these and their illegality. Several worldviews coexist in the broader culture, interpenetrating and overlapping, but there is more diversity to be found between cultures than within one, and this almost sets us up with a definition of a culture.
    The word kayfabe (pron. KAY-fayb) hails from carnival culture and the world of professional wrestling. It refers to the portrayal of all the events within the sphere of the industry as genuine. Characters stay in character off screen and outside the arena, carrying their pretended injuries and feuds around when in public and off the clock. This creates a suspension of disbelief in the public perception. Breaking this kayfabe is like an actor breaking character: it’s a disenchantment or disillusionment, with its consequences at the ticket office. As a metaphor for human society, with its goal of upholding a consensual reality, it’s a good stand-in for the folie à plusieurs, the madness of many, or the madness of crowds. Whole societies will go to great lengths to stay in character and on script, keeping their illusions maintained in great detail. Etiquette and protocols, particularly in the upper economic classes, provide the most obvious examples. But we all ought to know that the consensual world upheld as reality in the human kayfabe is lacking some things of dimension and substance, and of unrehearsed spontaneity. Nowhere is the phenomenon of kayfabe or folie à plusieurs better laid out before us than in Hans Christian Andersen’s “The Emperor’s New Clothes.” Charles Mackay, in Extraordinary Popular Delusions and the Madness of Crowds, offers: “Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.” Mackay looks at individual phenomena within cultures, rather than entire cultural worldviews, movements where cultures go mad temporarily, like economic bubbles, witch hunts, the occult, the Crusades, fashion, and religious relics. But entire cultures can get their bubbles popped, too.
    If culture is defined as that which isn’t purely native to our minds and our direct experience, then the whole of it is a construct. To some of us, this fact is a license and right to question it. Stepping outside has costs that might call on a strength of character that few of us possess. Since consensual reality almost completely defines the world of economics, and all of the needs that people have for things they don’t really need, the costs can be literal and expensive. Politically and socially, some will risk social acceptance to reach for what they suspect to be deeper, more authentic truths. To grasp and hold to the true, against the currents of current affairs, is the root of Gandhi’s term Satyagraha. The directive or imperative to be or hold true is also the meaning of the Chinese Zhouyi expression yǒufú, also discussed earlier.
    The worldview or weltanschauung of a culture may purport to answer most of life’s questions, but there is no special reason that all of its parts have to remain intact, even if it has evolved within the culture to function as a whole. Portions might merit biopsy and radical surgery, even if there are unforeseen consequences to the whole. The disassembly or radical excision of important models and structures might be a matter of our long-term survival, even the survival of civilization itself, if that’s what we really want. Economic growth for its own sake is one such model that has spun well out of control, and left human beings as little more than fungible replacement parts, and these must keep growing in number to satisfy the demands of the Ponzi metastructure.
    Of all the worldviews that we humans might have developed, our human exceptionalism has some of the worst long-term consequences, but this can be subsumed under the still larger umbrella of human parasitism, which also encompasses overpopulation and overconsumption, growth for its own sake, and paying no rent for our presumptive god-given right to be here. This is also intertwined with another dangerous metaphorical model, most articulately laid out by Descartes: we are ghosts in a machine, spirits just sojourning here, walking around in meat puppets. Another implies that there is a purpose or plan to it all, and this one is sometimes even put forth without reference to a divine agent or creator. The vapid platitude “everything happens for a reason” is an expression of this, but it even creeps into discussions of evolution when we hear something evolved to perform a function or evolved for a particular purpose. Rather than this, there are explanations for the persistence of certain traits initially elaborated by mutation. Sadly, resistance to correction of these worldviews invariably entails the frightening implication that we will be deprived of the comforts and security gained by their acceptance. Losing the idea that we are all spirits bound for glory, and coming to grips with being no more than animals, or even negentropically self-organizing star stuff, really deprives us of nothing but a delusion. A world of sentient animals who are made of just such stuff may still develop emergent properties, and agency. Such a universe can still be known as sacred, and still be more than worthy of reverence and gratitude.



1.8 - Infoglut, Enrichment, and Lifelong Learning
  
Critical Mindfulness and Cognitive Hygiene
Sapere Aude, Against the Great Dumbing Down
Infoglut, Selection, Enrichment, and Eclecticism
Objectivity, Perspective, Stereopsis, and Feedback
Unlearning and Overwriting Inferior Knowledge
Lifelong Learning, While Keeping Minds Nimble

Critical Mindfulness and Cognitive Hygiene
    The problem with most approaches referred to as critical thinking is that they’re too simplistic, and they rely far too much on reason or logic. Critical thinking alone, without dealing with the emotional processes involved in ignorance, delusion, cognitive bias, coping strategies, defense mechanisms, and logical fallacies, is a pretty pointless practice, and more so the older we get. By the time we are processed through childhood, in any culture, we are like exceedingly complicated cognitive edifices, built by amateur architects without much of a clue what they’re doing. Like buildings, we can be thought to be built from the ground preparation upwards, and the need for repairs and error correction often goes all the way down to the foundations laid in early childhood. By the time we have reached adulthood, and particularly by our mid-twenties, when our prefrontal cortex has generally matured, we usually have enough to contend with simply maintaining our edification. And we are already starting to neglect the need for structural and infrastructure repair.
    Real self-correction gets progressively harder with age, acquired wisdom notwithstanding. Even with supportive friends, therapy, vipassana bhavana, and elucidogens, we just aren’t very likely to ever perfect our minds. And of course, we don’t need to do that. We just want to get ourselves corrected and self-corrected enough to live the life we want to live. The Greek name for this target is sophrosyne, σωφροσύνη, a mind well-managed, moderate, but not overly so, and possessed of a deep and useful self-awareness. Sophrosyne alone seems plenty to ask of even the uncommon man. Some of us have a need to go still further, and look out for the health of the world and the well-being of posterity, having a sense of indebtedness and gratitude for whatever forces brought us here and whatever gifts we were given. This includes a desire to set both the world and the culture straight wherever error threatens the long-term well-being of humanity and the biosphere. This will necessarily involve us in the techniques of persuasion, which we want to combine with authenticity instead of guile.
    With respect to the persuasion of others in an authentic way, we don’t need to look much further or get much more elaborate than the method outlined by Blaise Pascal in the 17th century: “When we wish to correct with advantage, and to show another that he errs, we must notice from what side he views the matter, for on that side it is usually true, and admit that truth to him, but reveal to him the side on which it is false. He is satisfied with that, for he sees that he was not mistaken, and that he only failed to see all sides. Now, no one is offended at not seeing everything; but one does not like to be mistaken, and that perhaps arises from the fact that man naturally cannot see everything, and that naturally he cannot err in the side he looks at, since the perceptions of our senses are always true.” That last bit, of course, is true in only a very limited way. Elsewhere, he added, “People are generally better persuaded by the reasons which they have themselves discovered than by those which have come into the mind of others.” It also seems to be important to help replace what others have lost with something that works as well, or better serves their emotional needs with less cognitive error.
    Mindfulness is frequently touted as a way to heal or correct our cumulative cognitive woes, but the word is becoming so common and commercialized that it threatens to go the way of the word sustainable. The more successful mindfulness techniques date back at least to the meditation exercises that the Buddha learned from the Vedantin or Hindu Yogis. These, in turn, probably evolved from the much older “sitting and thinking long and hard about stuff” and “paying attention to whatever you’re doing,” the original consciousness-raising practices. Buddha’s system, combining Right Mindfulness and Right Concentration, is really a taproot for much of what comes later, including developments in our mental health therapies and cognition-based recovery programs, whether this is acknowledged by them or not.
    Do we need to understand what a mind is before practicing mindfulness? It at least seems to help to not look on mind as a spirit stuck in walking-around meat, a ghost in a machine, some little homunculus made of consciousness squatting behind our eyeballs. It helps to see the mind as integrated with a whole organic being, with all of its biological processes, ready with all kinds of interesting electrochemical stuff for our attention and awareness to play with. The Chinese word for mind, xīn, is closer to this more integrated sense. The character is variously translated as heart, mind, intention, center, and core, and is the integral graphic component in the largest portion of the Chinese characters related to both thought and feeling. In Indian and Theravada forms of Buddhism, the mind is not something independent of the material form, and does not emerge from an independent self. Newer forms of Buddhism, with core materials outside the Pali Canon, may have developed other ideas more consistent with other religious doctrines, and may even have adopted a belief in reincarnation. This is not what Buddha taught: it’s only what many Buddhists wish he had taught. Here, xīn might even be better understood as a verb, as in “mind your Ps and Qs” or “Do you mind?” This brings in a sense of caring or relevance, some kind of personally felt salience, which is what’s needed to awaken us from our more automatic mental functioning.
    It also helps to understand that our consciousness doesn’t really happen without noticing something, without attention trained on objects or content, including nebulous and cosmic objects and content like nothingness or light. The mind is merely the experience of this, both in our perception and in its operations on the world. Simple mindfulness is really little more than paying fuller attention, but particularly to our subtler mental operations, objects, and contents. The objects or contents of consciousness can be sense, perception, intellect, affect, cognition, any thought we can think, and even consciousness itself. Minding is an activity or process that isn’t really there when it isn’t being done, any more than your fist is there when your hand is open, or your lap is there when you’re standing up. But it’s always dependent and always arises out of prior conditions to attend to a stimulus. Obviously, this model of things runs contrary to a lot of deeply held religious, spiritual, and mystical beliefs. Believers here usually want to see a greater or parent consciousness that’s a fundamental property of the universe, perhaps the most fundamental, pre-existing everything else, and perhaps responsible for creation itself. This is sort of like looking into a mirror and seeing a you behind your being, and so it might be expected of such a narcissistic species as ours. Of course, that could all be true, and we can’t really prove it isn’t, what with consciousness remaining the hard problem and all. However, this possibility should not affect what we’re doing here in any way, as even the solipsist can work with these ideas and mindfulness practices.
    Repeating this from the Introduction, there are reasons to pull away from the term "critical thinking," and also reasons to use “critical mindfulness” or “cognitive hygiene” instead. What I mean by critical mindfulness is simply adding some more knowledge and analysis to the more purely attentive or contemplative process. In particular, this is knowledge of where, how, and why the human mind can effectively practice self-deception and preserve or create error. Then, of course, cognitive hygiene incorporates an intention to correct or clean up some of this deception and error, both in our own minds and in those we can influence. The term cognitive hygiene refers to any set of practices to clean the mind of clutter, crap, and unwanted parasites. While this cleaning still includes the skill sets of critical thinking, it also embraces tools of emotional or affective self-management, since the affective side of perception drives so much of our human error.
    The objective of critical mindfulness and cognitive hygiene is to examine what we do with the objects and contents of consciousness in a more critical manner, with an eye to doing less of what we ought not do: evaluating error as erroneous cognitive behavior and somehow getting rid of it or ceasing to do it. This more often than not involves working around error’s emotional defenses, over which we might have little or no conscious command. We want to learn to separate germ from chaff, both in our learning and in our sharing. The subject is huge when it’s looked at comprehensively. I’ll be closing Part Two with two chapters of recommendations, one for help in raising kids, in the hopes they can get started early and accumulate less that needs getting rid of. And one is for us dults, that we might learn to better regulate our inputs and come to understand at least some of the errors of our ways. That, too, is for the sake of the next round of kids, because it’s already too late for most of us to truly awaken.

Sapere Aude, Against the Great Dumbing Down
    The Greek maxim gnōthi seauton, γνῶθι σεαυτόν, or know thyself, was inscribed at the entrance to Apollo’s temple at Delphi as one of 147 Delphic Maxims, and it was generally regarded as the most important. Menander later added some good wisdom to this with “‘Know thyself’ is a good saying, but not in all situations. In many it is better to say ‘know others.’” This has yet to be adopted by the new age crowd. Not all of the Delphic maxims are now regarded as wise, but some of the others, related to our purposes here, have been translated: know what you have learned, think as a mortal, nothing to excess, be a seeker of wisdom, give back what you have received, observe what you have heard, accept due measure, and do not tire of learning. The Latin maxim sapere aude, “dare to know (or be wise),” has another source, attributed to the poet Horace in 20 BCE. It was later adopted by Kant as a motto for the Enlightenment. Originally it read: “Dimidium facti qui coepit habet: sapere aude. He who has begun has the work half done: dare to be wise.” The word dare here acknowledges that education and inquiry often require courage. But courage for or against what? We must acknowledge that there are pressures and forces arrayed against the correction of ignorance.
    While there is some social and economic reward for mental or intellectual achievement, at least when it’s done through proper channels and certified on paper by a certified institution, there is hardly much universal moral support, particularly from peers in school or church. Most people seem to prefer being popular to being right, and may fear being regarded as a smart aleck or know-it-all. Young people have labels like teacher’s pet, nerd, or geek to fear. There are certainly reasons to deride the pompous and arrogant among the learned, and the Dunning-Kruger club among the pretentious, but the great dumbing down is far wider in scope than that, especially among political conservatives and religious fundamentalists. Isaac Asimov commented on this trend in America, a nation now a world leader in it: “There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’” This hasn’t been helped at all by the relativism invigorated by the postmodernist movement. It’s also paradoxical that with so much support for the relativistic sense of truth, there is a simultaneous belief that there is a truth to be arrived at by majority vote. This is often the lowest common denominator, the thing that appeals best to the average or mediocre mind. Disagree and the majority will be more than willing to gaslight you into conformity.
    Groupthink, per Wikipedia, “is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Group members try to minimize conflict and reach a consensus decision without critical evaluation of alternative viewpoints by actively suppressing dissenting viewpoints, and by isolating themselves from outside influences.” Going with the groupthink flow lets people reap the benefits of belonging, enjoy anonymity, practice moral disengagement, and diffuse accountability into larger social entities. Sometimes a group will acknowledge a dissenting opinion, if it’s within a permissible range, but rarely are devil’s advocates or cogent critics truly welcome.
    Part of this problem derives from assimilation into the collective or hive mind, a contingent superorganism that behaves as if conscious but is not. This is a logical consequence of socioeconomic specialization that got underway with our first settlements and urbanization. The dystopian side is the Borgian “You will be assimilated. Resistance is futile.” It is, of course, to the hive mind that we owe so much of our cultural and technological progress. But the parts of such a whole can’t be spending much free time on things outside of their specialty, so extracurriculars tend to be well-circumscribed. These fungible parts of civilization’s system are encouraged to be intelligent and competent within their field or specialization, but outside of this, not so much. Thus do we have brain surgeons who believe that Earth is 6000 years old.
    Generalists within the system are relatively fewer in number and tend to have a little more leeway in thinking outside the rut. Here again is the general who connects the greaser of chariot axles with the suppliers of the grease. But it’s really only at the outer fringes of culture where a true interdisciplinarian, renaissance thinker, or polymath has a function in this machine. Still, it’s a vital function. The freedom to move between specialties and disciplines is too important a source of new inventions, ideas, and perspectives to lose. To take this too far is to move outside the system, to become the eccentric, or the shaman with his access to other worlds. Here, contributions to the hive mind become less predictable, and only occasionally useful or supported socially.
    The dumbing down of a society can conserve focus on the specializations needed for the hive mind to run smoothly. But the running smoothly itself is another requirement of the hive. No matter how competent the generals, the system is usually ill-equipped to handle more than predictable perturbations. Beyond a critical point, the complex system can undergo a cascade failure and collapse. This happens to nations and dynasties, around every 200 years on average, and sometimes whole civilizations fall. When they do, the urban folk will learn pretty quickly that eggs, milk, and potatoes don’t originate in stores. It’s therefore prudent for a society to maintain a tolerance of fringe communities who will not require re-schooling and re-skilling in more pre-civilized lifestyles. Analogously, society should also maintain a tolerance for those who let their minds roam free, with intelligence undiminished, outside the ruts and boxes. We need the rebel eccentrics at their best, not spending so much of their time fighting to dare to know or be wise. And this needs to begin in grade school. This is an argument for separating the kids who do well from the kids who would bully and torment them.

Infoglut, Selection, Enrichment, and Eclecticism
    The word infoglut speaks for itself. The amount of data available to culture is increasing exponentially, and with a growing exponent. A measure of this is Bucky Fuller’s “Knowledge Doubling Curve,” and the knowledge doubling time continues to shrink, suggesting a coming explosion that some call the Singularity. You see the idea in Moore’s Law as well. The amount of data that’s of any use or value is also growing in that direction, but not nearly as quickly. The signal-to-noise ratio is shrinking, meaning that, unless we are also growing increasingly selective, we wade through more information of lower quality every year. Additionally, and probably a metaphorical function of that pesky Second Law, wisdom isn’t nearly as contagious as foolishness. It’s unfortunate that negative entropy has to be a local phenomenon. We are being required, by that same law, to change the way we handle information, or else blow what remains of our cognitive fuses.
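    To make the shape of that claim concrete, here is a minimal toy model in code. All of the numbers are invented for illustration; nothing here is measured. Total information doubles on a schedule that itself keeps shrinking, useful information doubles at a slower, steady rate, and the signal-to-noise ratio falls accordingly:

```python
# Toy model of infoglut. Total information doubles on a schedule that
# itself keeps shrinking; useful information doubles at a slower, fixed
# rate; so the signal-to-noise ratio falls. All values are illustrative.

total, useful = 1.0, 0.5       # arbitrary starting quantities
doubling_time = 12.0           # years for total information to double
useful_doubling_time = 20.0    # years for useful information to double

year = 0.0
for _ in range(5):
    total *= 2.0                                  # one full doubling of the glut
    useful *= 2.0 ** (doubling_time / useful_doubling_time)
    year += doubling_time
    doubling_time *= 0.7                          # the doublings come faster
    print(f"year {year:5.1f}: signal/noise = {useful / total:.3f}")
```

Run it and the ratio drops every period, which is the whole argument for growing more selective rather than more absorbent.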
    A close cousin to infoglut is Alvin Toffler’s concept of Future Shock. We are facing a wave of “too much change in too short a period of time,” leaving a population stressed and disordered. Increasing rates of change combined with infoglut pose serious challenges to our adaptive resilience. But too often the maladaptive response is to shut off or shut down, or shrink our frames and narrow our views. At the very least, the adaptive strategy is to concentrate less on the quantities of what must be absorbed and more on the tools by which what is absorbed is selected for usefulness and quality. Studies of our mental capacities are still incomplete, and we probably won’t understand what the human brain is capable of until we grasp non-normative phenomena like savant syndrome and other expressions of genius. Each of us has our own limits to how much we can absorb, but these limits can be reached either by absorbing input at random, or by picking and choosing what we take in. This may also be thought of in terms of triage or prioritization. When the world around you is on fire and your evacuation limits you to one suitcase, what do you really need to keep? Do you fill it with socks because that’s the first drawer you come to? The quantity of stuff in different suitcases may be the same, but the quality saved will vary considerably. Toffler offered: “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” It’s now about tools and skills more than content. History, however, will still bear remembering, and facts like the multiplication tables won’t be changing any time soon.
    The bottom line may be that learning to think well doesn’t need to increase overall cognitive load, and in the long run, might even lighten it, particularly with the reduced costs involved in defending errors. The world is rich with information. If we can only absorb a limited amount, then why not absorb enriched information? Since we have to be selective anyway, why not select the good stuff? It really doesn’t hurt to know less than everything, but it still hurts less to know a higher percentage of good stuff. Interestingly, there may be a parallel here in how the brains of more intelligent people seem to use less juice and fewer connections in problem solving than the brains of more average performers (Genc, 2018).
    For those who favor more evolutionary explanations, that template fits the evolution of culture and its prodigious output of information. The side of the coin that’s all about mutation, experimentation, and wild creativity is the side that’s dearest to the human point of view. The other side, selection, natural and otherwise, will tend to make people squeamish, agitated, and fearful. Death and extinction are unhappy topics. Herbert Spencer’s phrase “survival of the fittest” is usually mistaken to mean “might makes right,” and this error puts simple people off. They would rather have equalitarianism [sic] and the democratization of life and knowledge. This might have some roots in the paradigm of all spirits being equal before the divine. But fitness has little to do with who can kill or conquer whom (or with gym memberships either). Fitness means adaptive fitness. It’s about how well an organism or population fits or adapts to its niche over time, and as the niche itself might evolve. It may have more in common with flexibility and learning than with force. Battles for dominance of the niche or mates are only one aspect of fitness. A statement that’s wrongly attributed to Charles Darwin, possibly from Leon C. Megginson, clarifies this: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.” What doesn’t work gets abandoned or devoured, and the traits that are useful tend to be preserved. Selection is fully half of evolution, like it or not, and it’s what got us here.
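    The point that fitness favors adaptability over strength can be shown with a toy simulation. Everything below is an invented illustration, not biology: a “strong” specialist is perfectly tuned to the niche as it first stands, while a weaker generalist merely tracks the niche as it drifts:

```python
import random

# Toy selection model. The specialist is perfectly tuned to the niche as
# it first stands; the generalist is less well tuned but tracks the niche
# as it drifts. All traits, rates, and the fitness function are invented.

random.seed(1)
env = 0.0                              # current state of the niche
specialist, generalist = 1000, 1000    # starting populations

def fitness(trait, niche, breadth):
    # reproductive success falls off as the trait misses the niche
    return max(0.0, 1.2 - abs(trait - niche) / breadth)

for generation in range(30):
    env += random.uniform(-0.1, 0.3)                         # the niche drifts
    specialist = int(specialist * fitness(0.0, env, 1.0))    # trait fixed at 0.0
    generalist = int(generalist * fitness(0.8 * env, env, 2.0))  # trait adapts

print("specialist:", specialist, "generalist:", generalist)
```

With the niche drifting, the specialist population collapses while the imperfect tracker persists, which is all that “most adaptable to change” needs to mean.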
    We have yet to take a much-needed, serious, pro-selection approach to our burgeoning cultural database. This has evolved rather spontaneously and naturally, like our languages. And it’s noisy in there. A lot of information still gets selected out as a matter of course: an artist isn’t good enough to find a gallery, or a poet a publisher. And sometimes the haphazardness of it all takes a toll, and a Vincent van Gogh will take his own life in despair. The genetic counterpart of conscious, intentional, or unnatural selection is eugenics. This now elicits a widespread horror, thanks to some who bungled it badly, but eugenics continues apace in good old-fashioned sexual selection. We almost all, by now, prefer that new kind of female, without the tail and all that fur.
    Nothing really requires us to regulate the inputs that we admit or allow. This can be quite random, like a parade that’s passing before us, and many are content enough simply to watch the parade. But our individual minds do have selective processes, first for matters of salience, then relevance, and then value. Our brains have a finite capacity for processing, remembering, and using information, so our minds make value judgments about what to keep. We also have other filters besides value that bear watching, and even value filters are biases and prejudgments. New inputs may go unperceived or be rejected for incompatibility with what’s already been learned. Sometimes the rule is to adopt the first thing that convinces, and then defend that against anything contrary. In short, it might be useful to examine our own filters, defenses, nets, and sieves as carefully as we examine our inputs.
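    In software terms, the selective process just described behaves like a chain of filters, each pass narrowing what gets through to storage. The items and the three tests below are invented stand-ins, not a model of real cognition, but the shape of the pipeline is the point:

```python
# A sketch of input selection as a chain of filters: salience, then
# relevance, then value. Items and predicates are invented stand-ins.

inbox = [
    {"item": "celebrity feud update",    "loud": True,  "on_topic": False, "quality": 0.1},
    {"item": "review article on memory", "loud": False, "on_topic": True,  "quality": 0.9},
    {"item": "clickbait health claim",   "loud": True,  "on_topic": True,  "quality": 0.2},
    {"item": "colleague's field notes",  "loud": False, "on_topic": True,  "quality": 0.8},
]

salient  = (x for x in inbox if x["loud"] or x["on_topic"])   # noticed at all?
relevant = (x for x in salient if x["on_topic"])              # bears on our concerns?
valuable = [x for x in relevant if x["quality"] > 0.5]        # worth the storage cost?

for kept in valuable:
    print("kept:", kept["item"])
```

The filters deserve the same scrutiny as the inputs: change the quality threshold or the notion of what counts as on topic, and a different mind gets built.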
    We regulate our inputs best by violating those new age platitudes against being judgmental, and then striving to use good judgment instead, even our very best judgment. Dan Ariely (2009) suggests that “What we need to do is to consciously start closing some of our doors. ... We ought to shut them because they draw energy and commitment from the doors that should be left open--and because they drive us crazy.” This can also mean a dismissal of cognitive relativity. In the moral sphere, Ayn Rand offers this: “‘Judge not, that ye be not judged’ is an abdication of moral responsibility: it is a moral blank check one gives to others in exchange for a moral blank check one expects for oneself. There is no escape from the fact that men have to make choices, there is no escape from moral values; so long as moral values are at stake, no moral neutrality is possible.” The reactions that some of you may have had in seeing Ayn Rand’s name here might suggest something too.
    Judgment requires negation, and confidently assuming a position. Wheat is separated from chaff before it’s ground into flour. Chaff bread is just awful. When you process a quantity of something and you get rid of the material you don’t need in the finished product, you will wind up with less material overall. But you don’t then refer to that new, lesser quantity as diminished. It’s a concentrate. You say it’s enriched or refined. In fact, the end product has more value than the raw because it’s no longer subject to the added cost of processing. It’s value added. This is the way to approach being negative, as a process of enrichment. In other forms, such as neti neti (not this, not this) in Vedanta, via negativa in philosophy, and apophatic mysticism, we might try to relieve ourselves of just about every thing we can think of, ultimately leaving ourselves alone with nothing but pure process, the end in Whitehead's process philosophy.
    Of course negativity isn’t always such a positive thing. We run Castaneda’s risk of turning “not wanting anything” into “not liking anything.” Skepticism can turn into the wrong kind of cynicism and the whole world gets under-esteemed. Humility can degrade into self-doubt, self-effacement, and self-criticism. We might also consider that cynicism (in the modern sense), or an over-reactive skepticism, can leave us just as ignorant as gullibility.
    Both politics and religion might try to package all of our problems into a single failed solution, where we could be much better off by tackling those problems one at a time with targeted solutions. Political parties still haven’t figured out how to blend fiscal conservatism with environmental conscience, personal liberty, and social liberalism. Out of the few religious groups that encourage picking and choosing, the Unitarian Universalist Association, or UUA, may be the most successful at this, although Hinduism and some orders of Sufism explicitly acknowledge different strokes for different folks. In simple terms, eclecticism shows a preference for quality over quantity. The idea may have first been expressed in Chapter 59.4 of the Zhouyi, 3000 years ago. In the Warring States period, the philosophical approach would be formalized as Zájiā, the Miscellaneous School.
    Here’s a metaphor to explain eclecticism: When you choose to buy into a comprehensive system of beliefs, like a religion or ideology, you’re issued a fancy box full of golden rocks and glittering stones. The box has a glass cover and is sealed. This, you’re told, is the complete, must-have collection. The treasure must remain as it’s been given unto you, as this is its proper order and arrangement, and it ought never be thought a plaything. Even when you suspect that much of the weight is fool’s gold, cut glass, and paste, the fact that nobody else is tampering with their own box gives you pause, and the thought threatens your much-wanted sense of belonging to the group that honors this exact same box. It’s certainly not the majority who will break the glass, test, assay, pick out, and pocket the real nuggets and gemstones, and leave the dead weight behind. The more gifted among us will crack open that box, toss out the pretty fool’s gold and paste, and the lovely box, and the ideological trojan horses, secure the goods, and move on with a lighter load. But eclectics don’t really travel much lighter in the end because they do this high-grading over and over, raiding more boxes, collecting the same weight in the end, just weight with a lot more value.
    Not surprisingly, this “supermarket shopping cart” approach isn’t approved by the systematizers. The political parties don’t want you walking off with random platform planks, religions with only the commandments you approve of, 12-step programs with only the steps you wish to take. And this caution or warning isn’t to be entirely dismissed. It may remain worthwhile to see how these pieces worked together in the context in which they were found. And a package that’s successful as a cultural meta-meme might also contain some information useful in grokking group psychology. A system may deserve at least some respect, or a second look, before it gets disassembled. Maybe a thumbnail sketch, or a gist, is still worth keeping, but with less dogma, less infallibility, fewer tentacles, tendrils, and strings attached. We ask it to do only what we want it to do. We want it to adapt to scientific discovery instead of backfiring against disconfirming input. We will want it evidence-based and revisable. An ideology should be broken down before it’s fully assimilated, just like any meal. Those who faithfully follow the silly dictum “don’t be judgmental” run the real risk of living lives of bad judgment and filling their own heads with shit.

Objectivity, Perspective, Stereopsis, and Feedback
    The notion of objectivity proposes that some things can be regarded as true regardless of our individual perceptual interpretations, emotional preferences, cognitive biases, and imaginative overlays. This carries with it a secondary prescription or assumption of neutrality, detachment, or impartiality in order to fully participate in objectivity. That objectivity is even thought possible has its roots in naive realism and a degree of consensus between multiple persons perceiving the same thing. Naive realism proposes that a close picture of the real world is furnished by our senses and perceptions, i.e. that smelly things are ontologically smelly, and red things ontologically red. And philosophical realism proposes that objects are ontologically independent of perceptions of them. Simply put, our senses and minds don’t add the ontological boundaries or borders to things. Scientific realism asserts that the world can be faithfully portrayed by established models, theories and equations. This, too, can be a bit of an embarrassment, as when things that are described mathematically are reified, turning quantum physics into metaphysics, or when the discrepancies between our models and observations demand the insertion of placeholder entities like cosmological constants or dark entities. This is often a classic case of confusing maps with terrain.
    Objectivity is used here in contrast to both relativism and perspectivism. Relativism proposes that objective, universal, or absolute truth is just a pack of lies, but everybody gets their own little version of truth if they want it. Without a common measure, or an analog of an inertial coordinate system, some assert that these are all of equal value. There are hard and soft versions of philosophical relativism. However relative things may be, one relativist might still give another a shopping list and expect it to be filled correctly, or roughly so; not all relative truths are separated by unbridgeable gulfs, and relativists do write mutually intelligible papers to share their ideas, Derrida excepted. There is also a moral relativism that makes such claims about good and bad behavior, which is often tied to a cultural rather than an individual context. This tends to ignore the fact that humans share very similar, inherited neurological structures with evolved human traits, many of which drive the acceptability or rejection of our social behavior.
    Perspectivism might be regarded as a better neighbor to relativism. This acknowledges that all perceptions, emotions, cognitions, and other things mental, derive from a perceiver constrained to a particular point of view. Perceptions are therefore usually incomplete or partial, with partial taken in both senses of that word. The quality of the apparatus of perception will also vary. In contrast to relativism, there is less of an implication here that different perspectives are of equal value, and certainly not their perceptions. The term was coined by Nietzsche. There are still no absolutes here, but the true can be more closely approximated with communication across the gulfs, and by sharing perspectives from different points of view.
    Combining perspectives in perspectivism can be taken as an analog of our binocular vision, which makes use of retinal disparity, stereopsis, or binocular parallax. Each of our eyes gets a different picture. When these pictures are put together in the brain, the additional information gives us an added dimension of depth. We don’t argue in our brains over which picture is correct, or even more correct. We just get an additional layer of information and meaning from the differences. Much as stereopsis will combine the differing views from the eyeballs into a perception of depth, inputting points of view that differ from our own provides a deeper view into the world. This is among the better arguments supporting diversity of opinion, although it isn’t an assertion that everyone has the right idea, even from their own point of view. Polemics are different points of view with a failure to communicate. They don’t often serve us: the whole of these kinds of two-alisms is somehow less than the sum of its parts. When we have perspective-limited views, we may need to do some work to get the rest of the data, and perhaps even prepare to change or fine-tune our minds.
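    For those who want the geometry behind the analogy spelled out: in an idealized, rectified pinhole stereo pair, depth is inversely proportional to the disparity between the two views, Z = fB/d. The sketch below is only that textbook relation, nothing more; the function name and the numbers are hypothetical, chosen for illustration.

```python
# Toy pinhole-stereo depth estimate: two views, one disparity,
# one depth. Illustrative only; real stereopsis is messier.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Z = f * B / d for an idealized rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("no disparity: point is effectively at infinity")
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 pixels between views, with "eyes" 6.5 cm
# apart (roughly human interpupillary distance):
print(depth_from_disparity(focal_px=800, baseline_m=0.065,
                           disparity_px=40))  # ~1.3 meters
```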
    Feedback is the process by which systems learn to self-regulate. System outputs are recycled back into the system as inputs, of either information or energy. There is some cost to the system, but normally the compensation is greater. This sets up a loop or circular chain of cause-and-effect. A system doing this is said to feed back on itself. Causality is a problematic idea here because of the circularity, analogous to begging the question. But it is this very circularity that can lead us to agency or freedom of choice.
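    A minimal sketch of such a loop, assuming the simplest thermostat-style case: the system’s output is read back in, compared to a set point, and a correction proportional to the error is applied. The set point, gain, and step count are made-up parameters; the only point is that the output settles because it keeps feeding back.

```python
# A self-regulating feedback loop in miniature: output recycled
# as input, correction proportional to error. A toy model only.

def run_loop(temp: float, set_point: float = 20.0,
             gain: float = 0.3, steps: int = 10) -> float:
    for _ in range(steps):
        error = set_point - temp  # output fed back and compared
        temp += gain * error      # correction shrinks as error shrinks
    return temp

print(run_loop(temp=10.0))  # settles toward 20.0
```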
    As we saw with reinforcement in operant conditioning, the terms positive and negative feedback refer to the presence or absence of informational or energetic input, and not to the subjective quality of the feedback. A child in mid-tantrum is getting negative feedback when he is ignored, not when he is swatted or put in time out. When applied deliberately, the feedback is called reinforcement, which can be supportive or aversive, and consists of either rewards or punishments, either given or withheld. Our capacity to accept constructive criticism is an important feedback processing skill, and it can be learned. Of course, we still need to learn to distinguish the clear, undistorted information from the erroneous or malicious.
    A better writer might have phrased Luke 4:24 as “No man is a prophet in his own village.” We need feedback to keep our minds from running away with grandiose self-estimates and ideas born out of limited perspectives. We get this locally, from friends, colleagues, and others who know us. In systems theory, the feedback loop allows systems to self-correct based on reactions of other systems, or conflicts with the world's harder edges. Multiple systems adjust to each other this way, and to the niches they share. As such, feedback loops are necessary components of adaptive fitness. As organisms, and as minds, we learn to adapt by paying attention to those we have effects on.
   
Unlearning and Overwriting Inferior Knowledge
“Our forward-charging culture sees regret as a sign of weakness and failure. But how else can we learn from our past?” Carina Chocano
    The brain is not a computer. We don’t simply erase or delete perceptions, schemas, and scripts when we find them to be in error, no matter how firmly we repudiate them. And the human brain doesn’t like to change its mind, a fact which makes not learning an error in the first place much preferable to unlearning, even if it’s more work up front. We are better off being sure we’re learning correctly before we’re invested in the results of our efforts to learn. If not, we’ll have sunk costs to protect. We’ll stick by our decisions and define ourselves by them. When we doubt our own decisions, we cast doubt on our own competence, and that’s a discomfort to avoid. When challenged on this, we escalate our commitment. These decisions become who we think we are, and where they inform our sense of self, they can arguably become part of our core beliefs, around which our core identity and worldview are constructed. Then their defense becomes a serious matter, even when our persistence is not in our own best interests.
    In psychology and behavioral science, unlearning is called extinction. A behavior may be gradually extinguished by eliminating the feedback that made it predictable or habitual. It isn’t always forgotten, though. It might just become less and less of a go-to response until it rarely if ever gets called on again, and languishes in irrelevancy. Unlearning is the loss of a conditioned response, whether classically, operantly, or socially conditioned. When we find a more optimal response to a stimulus or situation than one we have learned, the old one will get gradually supplanted. But the brain doesn’t give up on the old ones that easily. It will continue to try them with decreasing frequency, just in case, until their lesser utility has been better proven. Learning, together with content learned and whatever memories are associated in the process, should be considered here as behavior: it’s still dynamic. Even when a memory is just sitting in place, seemingly doing nothing, it should still be regarded as a process, on standby, ready to link up and associate with any new experience it might be related to.
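    As a toy model of that decreasing frequency, and emphatically not a claim about neural implementation: imagine a response probability that decays once reinforcement stops, but never quite reaches zero, “just in case.” The decay rate and floor below are invented parameters.

```python
import random

# Toy extinction curve: without reinforcement, an old response
# is tried less and less often, but is never fully abandoned.

def extinguish(p_response: float = 0.9, decay: float = 0.8,
               floor: float = 0.02, trials: int = 12) -> None:
    for t in range(trials):
        tried = random.random() < p_response
        print(f"trial {t:2d}: p={p_response:.3f} responded={tried}")
        p_response = max(floor, p_response * decay)  # no payoff, so it fades

extinguish()
```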
    While studies of memory have largely concentrated on persistence, we still have much to learn about transience or forgetting. Transience reinvigorates our flexibility in responding, disempowering the outdated and less effective programming. The adaptive function of memory will be more concerned with better decision-making than with collecting potentially useful information for long-term storage or time-binding. This suggests optimization of utility rather than accumulation of experience as the long-term imperative. The value of a memory, however, is relative, and people will hoard the darndest memories, perhaps to find some surprise use in unforeseen trivia games. Forgetting can lighten our cognitive load, but it seems easiest to do when what is forgotten is replaced with something better.
    The study of neuroplasticity is still in its early research stages. Contradicting earlier theories, stem cells in the brain do allow us to grow new neurons throughout adulthood. We know at least that this is much more prevalent than previously thought, and that abandoned circuits can be remodeled, reused, or retasked. This may be where full erasure can happen. As well as abandoning error, or supplanting old knowledge with better knowledge, neuroplasticity is especially useful in learning alternative affective or emotional associations and reactions to the stimuli that trigger episodic memories, particularly the unpleasant or traumatic ones.
    The simple forgetting of episodic memories correlates unsurprisingly with infrequency of related priming and conscious access, although recall under hypnosis of things believed long-forgotten should still keep us somewhat skeptical here. Unfortunately, early repetition and personal impact can keep such long-irrelevant things as earworms and dumb jingles stuck in our heads forever. Brains do seem to be working to forget what they might never use. We have little cells (osteoclasts) that dissolve our unused bone with hydrochloric acid. Maybe we also have something that cleans up unused neurons. Or maybe one day we could invent some, just to hard and secure-erase those goddamned jingles. Paul Frankland suggests mechanisms promoting memory loss that are distinct from those involved in retention, especially in the remodeling of hippocampal circuits, where memories could be completely overwritten. This would clearly have a function in cleaning our decision-making tools of obsolete clutter. But unlearning isn’t always forgetting. Often it’s just setting up a different program or model in a slightly different part of the brain and remembering to use that while the less useful thing languishes and perhaps gets overwritten by something that’s needed, or needed more often.
    Aside from the challenges of altering neurological configurations, we have to contend with our own sense of having invested time and energy in learning something that’s now of lesser utility. We have cognitive inertia and cognitive bias to deal with. Further, we’re likely to have felt something akin to pride in having learned the damned thing in the first place, and in having pronounced it correct, and worthy to come live in our heads. Do we now need to admit we were wrong in some way? So now we have our ego defense mechanisms to contend with, especially if we’re pressed to confess our error to others. We also have to ask ourselves what it’s worth to know stuff, and to carry around the extra weight of belief. Do we really need to know, or pretend to know this thing? If we adopt it, will it require cognitively expensive and emotionally draining efforts to defend it whenever it’s challenged? Would it not be better to carry a lighter load, maximizing tools and minimizing burdens?
    We don’t really know how much information a human mind is able to hold. Regarding the human brain as a beaker or a bucket that can only contain so much information is a useful metaphor in only one dimension of many, and ignores abilities to conduct and transmit information, or organize memories and methods of their access more effectively, or skillfully access additional information as needed while storing only those functions that enable this access (cognitive forensics). Memories may also be simplified or regularized instead of eliminated. Memories that are too specific to be useful may be stripped to their gist and combined with others to form such cognitive tools as higher-order categories, generalizations, and stereotypes. Or they may be stripped to the processes that produced them, as we might do when replacing learning stuff with learning how to learn stuff. Once again, memory isn’t really “for” collecting the sum of experience as much as developing a bag of tricks for making adaptive fitness decisions, or combing the environment for affordances to be used later. As to our capacity, we are once again hampered in our knowledge by our normativity bias, our eagerness to know the norm while dismissing the exceptional, the gifted, and the autistic savant as nothing more than anecdotal. In 2005, Lu Chao of China recited pi to a record 67,890 digits. But who knows how long that record will stand.
    As their brains develop, young children will drop a lot of innate neural connections that haven’t been used yet. It’s a true case of “use it or lose it.” This fact should suggest some experiments in stimulus enrichment in early childhood, though not done to overwhelming degrees. It may be that neural development and connections retained unnecessarily throughout childhood, perhaps with placeholder learning, can be retasked later with more useful programming. For example, young children who learn a language that will never be used in adulthood might better retain a capacity to rewrite that later with a more useful language. That’s just a shot in the dark. But the underlying condition, which recommends broader and richer stimulation throughout childhood to keep circuits from being lost, further underscores the importance of preventing childhood adversity and impoverished education.
    When we learn something new, we will tend to attach some extra affect to this latest and greatest thing for a while, such as hope, or some premature confidence, so that it gets remembered and used until it can prove itself, like training wheels. But at some point, those need to come off. We have to ask, though, if this serves us. Do we really want this new plant to sink its roots so quickly and get all interconnected with everything else we’ve learned? Or is the probationary-period approach more useful? The importance of allowing for future unlearning and relearning is only amplified by the growing glut of information that’s coming our way and the accelerating pace of change. The techniques involved in unlearning need to be kept closer at hand, and kept improving. And these skills are more learned than native or natural. We didn’t evolve in a world that was changing this quickly, except during cataclysmic events that left us genetically altered, as new species of hominins. Having stability in our cognitive composition is becoming increasingly maladaptive, as resilience and innovation are increasingly called upon. We’ll need to start holding our hard-won knowledge ever more lightly.

Lifelong Learning, While Keeping Minds Nimble
    What is it to be unable to learn enough? Why do most people seem to go just a short way into learning and then stop? “Education is not the filling of a pail, but the lighting of a fire,” runs a line usually credited to W. B. Yeats, though it is probably a paraphrase of Plutarch. Do some just start out soggy or un-flammable, and others keep combusting for life? I suppose we have to return to the idea that memory isn’t “for” accurately recording as much of the world as we can. It’s simply an evolved way to solve problems that are likely or expected to arise in our lives, and to get us through breeding and child-rearing intact. Those with lower demands and expectations in life, and those who elect to live in simplified and highly predictable environments, wouldn’t feel as much need to pursue learning as a lifelong effort. This is, after all, a great deal of work, especially when we’re called to rewrite or upgrade our knowledge, to clean our information and input of old errors, to continuously evaluate and revaluate our own cognitive competence. A part in a machine is developed to specifications provided from without, and so much of public education follows the cookie-cutter and assembly-line model of instruction. This addresses the society’s needs more than the individual’s. Many don’t seem to feel a difference. But when that predictable society is heading towards a turbulent and unpredictable future, a higher level of cultural literacy will be needed, along with better informed and skilled participants in democracy, society, economy, and ecosystems. Generalists become more useful and specialists less so. This wants at least a percentage of autonomous, self-directed individuals, with the ability to process and learn from feedback, and to clean their information of error. For these, Asimov wrote, “Education isn’t something you can finish.”
    We can make an extended dietary analogy on the subject of learning, even unto the mentor or teacher as chef. You’ve got your toxins, contaminants, and infectious agents, and an ability to learn from your involuntary food tasters. You’ve got your purgatives and your laxatives. You’ve got your supplements, nutrition experts, and the latest fads. You may eat it (or be made to) if it’s good for you, never mind the taste. There are the ancient lures of sugars and fats. We aren’t entirely what we eat, only what we eat and retain. Exercise keeps us lean and nimble. The body knows things, and sometimes the best appetites do their listening there instead of the brain. But nothing, absolutely nothing in this analogy is more important than having a healthy appetite. The kindling and stoking of a hunger to learn in a student is the sine qua non of good teaching.
    Learning is an investment of time and energy. But what we get out of it isn’t strictly proportionate to our investment. This has more than one dimension. Obviously, the value of learning for its own sake will count for something for some, whether it satisfies a need to solve mysteries, or feeds the sense of mystery itself. And of course there will be the socioeconomic rewards from being knowledgeable and competent in a field that pays well. But the efficacy of our learning receives too little attention, and this calls up the question of maintenance. The efforts recommended here in vetting our inputs, checking them against reality, revising our knowledge base, and unlearning errors are significant cognitive loads, but these have to be weighed against other costs: working with inferior knowledge, living in delusion, defending errors against challenges, and, in our defeat, losing both self-esteem and reputation, along with the companion feelings of shame and embarrassment. Cognitive hygiene is an effort, as with any infrastructural maintenance. It will slow our learning down, and run it a bit backwards on occasion, but having higher quality information will usually make up the difference. On top of all this, some of us must also consider our efficacy as contributors to culture and the direction in which it’s heading.
    Transformative Learning Theory takes a look at the dynamic changes in our cognitive processes, how the things we’ve learned adapt, or get revised, how their meanings evolve, what different points of view or reference frames do to them. Learning that doesn’t transform isn’t really learning anymore, and rigidity in this correlates fairly well with age. When our minds grow heavy with mass and inertia, resistance to change is the analogical consequence. Our hard-earned life lessons are wired for emotional self-defense, both as cognitive biases and ego defense mechanisms. Maintaining our learning as transformable is a learnable skill, but it does require some effort. As we accumulate experience, each new memory is integrated into the mass of relevant older ones, with the words, thoughts, and emotions that go along with them. A habitual practice of maintenance is best started early.
    Sometimes our mental world is forced to evolve in a punctuated rather than a gradual manner. Something might happen which drastically alters our self-schema, or a change in environment demands a change in us, or a crisis of faith collapses a cherished belief system. This is what Jack Mezirow calls a “disorienting dilemma.” But departing from Mezirow’s theory, we are not stuck here with merely thinking things through in some rational, analytical fashion. If we are fortunate, we will have a two-sided experience, where our options are perceived vividly and simultaneously. This has more dimension and emotional depth than a simple epiphany. Buddhists call this samvega. It’s often a prominent feature in experience with elucidogens. It’s often a life-changing event, and if one of those sides happens to be a hard, honest look at how you took a wrong turn in your life, it’s often the first step in recovery. In fact, what the addicts call hitting bottom is one of its forms. The omission of the vital role of affect in Mezirow’s theories is articulated by Edward Taylor in “Transformative Learning Theory” (2001).
    Lifelong learning doesn’t mean staying in school, or even going to school. But it does mean going beyond the culture’s core curriculum, beyond what’s required of children and the young. It’s not only knowing what needs to be known, but staying abreast of these fields, or some analog of recertification. Lifelong learning means “transfer of learning”: what’s learned makes the move from the classroom to real-life application. This is learning the deeper structure of the lesson, a process distinct from rote learning. The learning of heuristics, methods, and skills is more useful to carry off than remembered names and dates, although many of these are still useful or required as basic cultural literacy. Learning by doing begins with transfer learning, but to some extent, learning by running trial-and-error simulations in the neocortex is actually a form of learning by doing, and often gets the brain’s motor neurons involved in the mental exercise. So it’s really OK to learn by reading fiction and even watching the right kind of television.
    Cognitive load refers to the effort expended in working memory in performing mental tasks. The mind already tries to simplify cognitive load over time with summary memories, gists, gestalts, generalizations, stereotypes, protocols, skill sets, etc. John Sweller calls this mental function “germane cognitive load,” and distinguishes it from intrinsic and extraneous load. But the cognitive load or burden of our memories and thoughts also entails the associated affect and emotional responses to their being recalled and attended. These are never really separate. When we use the metaphors of load or burden, we want also to think of the lightness of our load, as this will affect our stamina and mental nimbleness. Buddhist thought is often associated with the idea of detachment or non-attachment, nekkhamma in Pali, often translated as renunciation. This is freedom from lust or craving, a detachment from the unnecessary, but not one from real needs, or from feelings worth having. Closely related to this is abyapada, an absence of aversion or ill will. It’s important to note, though, that this is not a cultivated numbness. We’re lightening up. We’re merely avoiding weighting our involvement with these things so heavily that they interfere with our progress. We’re leaving ourselves the freedom to move about, or come and go, or take it or leave it. These things we get distance from will include some of our big ideas, and the investments we’ve made in acquiring them and integrating them into our knowledge base.
    What we want to do with our ideas and emotions is use them, and not have them use us. There’s a parallel here between a useful form of detachment and loving with an open hand. We get to keep any thoughts and feelings we want, but we want things feeling natural, happening of their own accord, effortless, leaving us with more energy to do stuff. We just aren’t required to cling to them, or defend them to the death. This allows us to practice eclecticism, and in fact is a prerequisite. It seems to be pretty necessary to lifelong learning to not get bogged down in the consequences of learning too much. It’s a bit like having MacGyver Mind, or having one of your better Swiss Army knives. It’s by keeping ourselves as unburdened as possible of knowledge that doesn’t serve us that our minds stay nimble into our dotage.
    Without help from motivations like vocation, calling, personal purpose, or higher purpose, the human mind is going to resist taking up self-education as a lifelong project. It seems most are willing to settle for simple answers and settle into a routine. But is this the result of learning not having been a rich and exciting experience in itself? Or is this the result of defects in educational systems? The more ambitious among us have learned, sometimes painfully, that you can only become the best at something if you aren’t perfected yet, and that it takes at least ten thousand hours of practice to even be really good at something. This is so much work that no amount of conventional rewards will likely be enough. The rewards really need to be intrinsic. Constructive discontent (a term borrowed from Irene Becker) skirts the Buddhist issue of dissatisfaction that stems from our craving and gives deficiency motivation a more positive spin. Those who have lives they want to live, in ways of their own choosing, are willing to undergo some goads into action. It’s a can-do, get-er-done kind of thing. You expect stress, and costs, and some exhaustion. We have to skip the new age bullshit that tells us we are already perfect, and we need to develop an attitude that builds in some humility or authenticity, to acknowledge how much we still have to learn. This mandates a willingness to notice when we’ve made errors, and that makes it easier for us to unlearn or relearn as needed. Not knowing everything, or even much, isn’t being dumb. Dumb is not wanting to learn any more. Smart is not knowing everything already, and still being full of interesting questions. Answers are just periods.
    Once upon a time, some psychics came to our little mountain town, four followers and their leader, and held a free introductory psychic workshop, sponsored by a small group of locals who studied the Silva Method (of Mind Control, distinct from the Sylvan Learning Center). I went along. We held other people’s car keys and hair combs up to our foreheads and thereby got some visions to share. My vision was five kids in a big valley, but instead of mountains there was a giant wrap-around billboard that said Telluride. They all dismissed my vision as not psychic. I never really thought it was. But a little later, the leader was talking about going deep inside and contacting “the Learner Within.” On hearing that, I let out a good, audible sound of approval and applauded too. He was confused, so I repeated what I had heard him say and praised the originality of the phrasing. Much annoyed, he said “Well, I meant to say ‘the Teacher Within,’” and went on rambling, without a clue to the wisdom in his slip. This would not be my family. I’d always preferred the Learner Within. That would always teach me so much more.



Part Two:
Cognitive Challenges Across Ten Domains


2.0 - Towards a Taxonomy of Anti-Cognitives

Cognitive Psychology, Bacon’s Idols, Maslow’s Needs,

Psychology’s Languaging Behavior, Gardner’s Intelligences,
Bloom’s Taxonomy, Piaget’s Stages, More Psychologists, Taxa

    Note: Not all of the subjects discussed in this chapter will bear directly on anticognitives, but they will form useful background. This section may read a little like one of those laminated college cheat sheets or crib sheets, enriched or concentrated, lots of stuff in a short space. If this is too much, just read until you get bored or confused, and then skip to the last ten lines of the chapter.

Cognitive Psychology
    Cognitive psychology and theory began in the 1960s as behaviorism was waning, and is often seen as the most recent school of thought in psychology. It concerns itself with how we process sensory and informational input into organized memories, and how we subsequently use that information. It’s concerned with all things psychological: attention, memory, perception, language, and metacognition. Like most systems of thinking about thinking, however, it tends to come up short in the area of affective neuroscience.
    In the 1950s, it was still common to hear teachers tell you, “animals have instincts, but humans have reason.” The mind was still a blank slate then, that parents, teachers, preachers, and culture were filling up for you, before they turned you loose to perceive and think for yourself. John Locke held to this, affirming that “Nihil est in intellectu quod non ante fuerit in sensu” (there is nothing in the mind that was not first in the senses). Certainly, in terms of content that becomes memory, this much is true. But it’s not the whole story. Gottfried Wilhelm Leibniz soon corrected this by adding “nisi intellectus ipse” (except the intellect itself). But intellect has to be process, not content. Today, both neuroscience and genetics are aware that native mental contents would need to be genetically or epigenetically encoded, and DNA doesn’t seem to work like this. We can say that nothing can enter the mind in any kind of memorable form that isn’t first processed, and that those processes must begin with a native form that underlies any later learning. This is what Leibniz meant by intellect, the rough-hewn, meaning-making processes of the natural mind. These native forms will be referred to herein as evolved heuristics and processes. These are inherited ways to make sense of our world and our place within it. Amos Tversky and Daniel Kahneman (1974) referred to heuristics as mental shortcuts that provide intuitive solutions that are “in general quite useful, but sometimes lead to severe and systematic errors.” These they contrasted with biases, “systematic errors that invariably distort the subject’s reasoning and judgment.” But in places, they also fail to make an adequate distinction between the two.
    Both sensation and native heuristics are fallible, though the brain’s original exploration is well-intentioned, performed in good faith, and innocent of our ulterior motives. We want to know more about who and where we are, and what surprises our environment may have in store. However innocent, the “severe and systematic errors” that often result are still sources of human ignorance and must be discussed here. The more biased sides of ignorance, whether they are conscious or not, are usually represented by four distinct classes of anticognitive processes: cognitive biases, coping strategies, defense mechanisms, and logical fallacies. We haven’t seen a system yet that attempts to integrate all four of these. The word anticognitive has been accepted as an adjective, denoting processes that work against developing an accurate or useful conception of reality, that is, opposing or counteracting cognition. As a noun, it’s one of these processes.
    In order to present this material coherently, I needed either to find a set of meaningful sorting categories, or else configure a new one. I had hopes of finding a ready-made taxonomy of our various cognitive functions to which anticognitives would naturally apply. I looked, but nothing I found really fit the bill. It must be acknowledged that any of these will be based on the lexical hypothesis, or be simply linguistic in nature, and not strictly scientific. And naturally, any attempt to neatly categorize cognitive processes into clean, cut-and-dried classifications will oversimplify the holistic, highly connective, and interrelated nature of cognition. To develop the taxonomy here, I looked at the following categorical systems, to be certain that any new system proposed here could cover all the ground that these cover:


Bacon’s Idols
    Francis Bacon, in Novum Organum, was the first to offer useful categories here: “XXXVIII. The idols and false notions which have already preoccupied the human understanding, and are deeply rooted in it, not only so beset men’s minds that they become difficult of access, but even when access is obtained will again meet and trouble us in the instauration [restoration] of the sciences, unless mankind when forewarned guard themselves with all possible care against them. XXXIX. Four species of idols beset the human mind, to which (for distinction’s sake) we have assigned names, calling the first Idols of the Tribe, the second Idols of the Den (Cave), the third Idols of the Marketplace, the fourth Idols of the Theatre. XL. The formation of notions and axioms on the foundation of true induction is the only fitting remedy by which we can ward off and expel these idols. It is, however, of great service to point them out; for the doctrine of idols bears the same relation to the interpretation of nature as that of the confutation of sophisms does to common logic.
    “XLI. The idols of the tribe are inherent in human nature and the very tribe or race of man; for man’s sense is falsely asserted to be the standard of things; on the contrary, all the perceptions both of the senses and the mind bear reference to man and not to the universe, and the human mind resembles those uneven mirrors which impart their own properties to different objects, from which rays are emitted and distort and disfigure them. XLII. The idols of the den [cave] are those of each individual; for everybody (in addition to the errors common to the race of man) has his own individual den or cavern, which intercepts and corrupts the light of nature, either from his own peculiar and singular disposition, or from his education and intercourse with others, or from his reading, and the authority acquired by those whom he reverences and admires, or from the different impressions produced on the mind, as it happens to be preoccupied and predisposed, or equable and tranquil, and the like; so that the spirit of man (according to its several dispositions), is variable, confused, and as it were actuated by chance; and Heraclitus said well that men search for knowledge in lesser worlds, and not in the greater or common world. XLIII. There are also idols formed by the reciprocal intercourse and society of man with man, which we call idols of the market[place], from the commerce and association of men with each other; for men converse by means of language, but words are formed at the will of the generality, and there arises from a bad and unapt formation of words a wonderful obstruction to the mind. Nor can the definitions and explanations with which learned men are wont to guard and protect themselves in some instances afford a complete remedy—words still manifestly force the understanding, throw everything into confusion, and lead mankind into vain and innumerable controversies and fallacies. XLIV. Lastly, there are idols which have crept into men’s minds from the various dogmas of peculiar systems of philosophy, and also from the perverted rules of demonstration, and these we denominate idols of the theatre: for we regard all the systems of philosophy hitherto received or imagined, as so many plays brought out and performed, creating fictitious and theatrical worlds. Nor do we speak only of the present systems, or of the philosophy and sects of the ancients, since numerous other plays of a similar nature can be still composed and made to agree with each other, the causes of the most opposite errors being generally the same. Nor, again, do we allude merely to general systems, but also to many elements and axioms of sciences which have become inveterate by tradition, implicit credence, and neglect. We must, however, discuss each species of idols more fully and distinctly in order to guard the human understanding against them.”

Maslow’s Needs
    Abraham Maslow’s “Hierarchy of Needs” also had much to offer, but in a limited arrangement. As with Bacon’s Idols, the material can be re-sorted into a larger system. This recognizes the function of cognition as purposed in meeting a range of needs, from the highest priority, basic, or homeostatic needs up to those we address our lives to in pursuit of “the farther reaches of human nature.” The precise enumeration of needs in each category has evolved and is varied. This is a typical compilation, from most basic upward, with the idea that those most basic are, on average, prioritized: Physiological (breathing, food, water, basic health care, reproduction, circulation, shelter, temperature regulation, excretion, movement, and sleep); Safety or Security (physical, social, employment, resources, property, and health); Belonging and Love (friendship, family, community, and sexual intimacy); Esteem (confidence, self-worth, basic education, sense of accomplishment, respect of and for others); and Self-Actualization (fulfillment of potential, creativity, higher education, spontaneity, problem-solving, lack of prejudice, moral or ethical integrity, and acceptance of facts). In his later years, Maslow added another level, of Self-Transcendence (higher purpose, altruism, and spirituality).
    Maslow regarded the lower levels as deficiency or deprivation needs, and these become predominant when unmet, and so hold us back from our higher pursuits. The idea is: meet these needs and move on. While it isn’t mandatory that needs higher than the homeostatic be met before moving on to a higher level, meeting them does lessen the need for frequent return trips. And the repeated thwarting of a real need can lead to neurotic behavior, particularly where the thwarting can’t be rationally explained or is unpredictable. The function of cognition is tied to an organism’s meeting of needs. We can try to distinguish between needs and wants or desires. The function of an anticognitive will likely be found meeting one need while thwarting one or more others, unless it’s entirely toxic or perverse, which could often be the case in neurosis or psychosis. Here we might define drives more narrowly as homeostatic needs with life-or-death consequences, such as nutrition and thermoregulation. Then we might use motivations to refer to those urges where failure to satisfy merely results in unpleasantness. Finally, we could regard the still higher order needs that require or demand no satisfaction at all as being often more drawn than driven, and this by values.
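    Maslow’s prepotency claim can be caricatured as a simple priority scheme: attention goes to the most basic unmet need first, and only a shortfall lower down pulls focus back. The sketch below is only that caricature; the level names follow the text, while the data structure, threshold, and function name are hypothetical.

```python
# A caricature of prepotency: scan from the most basic level up
# and focus on the first need that falls below "met enough."

LEVELS = ["physiological", "safety", "belonging",
          "esteem", "self-actualization", "self-transcendence"]

def current_focus(satisfaction: dict, threshold: float = 0.7) -> str:
    for level in LEVELS:  # most basic first
        if satisfaction.get(level, 0.0) < threshold:
            return level  # a deficiency need predominates
    return "self-transcendence"  # drawn by values, not driven by deficits

print(current_focus({"physiological": 0.9, "safety": 0.5}))  # -> safety
```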
    Other psychologists have enumerated similar taxonomies of needs, drives, motivations, and spectra of human values, ranging from innate to cultural. David McClelland’s theory of needs is one example, with super-categories of achievement, affiliation, power, and cognition. Manfred Max-Neef has nine categories: subsistence, protection, affection, understanding, participation, leisure, creation, identity, and freedom, each of which finds expression in being, having, doing, and interaction. Henry Murray, in 1938, presented a still more extensive list, which included several important needs not mentioned above, like regularity, deference, nurturance, succorance, autonomy (sense of agency), infavoidance (avoidance of shame), sensory stimulation, play, and communication. This is found in Explorations in Personality.

Psychology’s Languaging Behavior
    The social psychologist William J. McGuire developed yet another list, leaning more towards the cognitive aspects of life, and adding to the above: cognitive consistency (against dissonance), categorization, causal attribution, modeling, tension or stress reduction, cognitive stimulation and growth, self-assertion, sense of purpose, ego defense, self-identification, and affiliation. Joy Paul Guilford charted the “Structure of Intellect” (1955), with a number of operations situated in three dimensions: 1) Operations (cognition, memory recording, memory retention, divergent production, convergent production, and evaluation, to which we might add memory recall); 2) Content (figural, symbolic, semantic, and behavioral, which we now call procedural memory); and 3) Product [output] (units, classes, relations, systems, transformations, and implications, such as predictions and inferences).
    The DSM-IV isn’t really organized around cognitive issues, but neither is it especially useful in guiding the troubled towards mental health. The DSM-5 is a step down from that, and is now primarily about checking boxes on insurance forms to expedite payment, and meeting pharmaceutical rules. The thinking is confused on anticognitive topics and it’s hard to tell categories apart. “Defense mechanisms (or coping styles) are automatic psychological processes that protect the individual against anxiety and from the awareness of internal or external dangers or stressors.” This is not at all well-articulated. It does, however, illustrate that the discipline of psychology is, and always has been, based on a kind of taxonomic or languaging behavior with fuzzy edges between categories. Even back when the behaviorists were calling it the study of behavior, it failed to recognize that this was its own primary behavior.
    The Big-Five Personality Traits (referring to a plethora or a dearth of openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism) really fail to address a number of the other dimensions of personality. The model also lacks a specific focus on our cognition and its antagonists. Among the personality traits cited as missing from the theory are risk-taking, manipulativeness, humor, religiosity, honesty, thrift, and gender dominance. Similarly, Raymond Cattell elaborated Sixteen Personality Factors, which consisted of warmth, reasoning, emotional stability, dominance, liveliness, rule-consciousness, social boldness, sensitivity, vigilance, abstractedness, privateness, apprehension, openness to change, self-reliance, perfectionism, and tension. These suggest areas to be covered, but no contribution to a taxonomic structure.
    Thomas Gilovich (2002) has provided some organization of anticognitive processes in the chapter headings for his book How We Know What Isn’t So: 1) Something Out of Nothing: The Misperception and Misinterpretation of Random Data; 2) Too Much from Too Little: The Misinterpretation of Incomplete and Unrepresentative Data; 3) Seeing What We Expect to See: The Biased Evaluation of Ambiguous and Inconsistent Data; 4) Seeing What We Want to See: Motivational Determinants of Belief; 5) Believing What We Are Told: the Biasing Effects of Secondhand Information; and 6) The Imagined Agreement of Others: Exaggerated Impressions of Social Support.

Gardner’s Intelligences
    Howard Gardner, in his influential Frames of Mind (1983), proposed seven kinds of intelligence in his Theory of Multiple Intelligences. This provided a useful starting point for the purposes here, although substituting the term “cognitive domains” for intelligences is somewhat truer to the neuroscience involved. Some of these have been combined here into single domains. These “intelligences” are not isolated lobes of the brain, or modules, but they are each functional configurations of neural processes that occur across multiple parts of the brain. This isn’t a phrenology revival, although it’s possible that someone skilled with fMRI could infer which domain might be in play for any given state of mind. Gardner started with seven of these intelligences: Musical-Rhythmic and Harmonic; Visual-Spatial; Verbal-Linguistic; Logical-Mathematical; Bodily-Kinesthetic; Interpersonal; and Intrapersonal. In 1995, he added an eighth: Naturalistic (an evolved, ecological, and holistic way of knowing the world). In 1999 he added Existential (which would also account for some of our so-called evolved spiritual interests). And by 2016, he was considering adding Teaching-Pedagogical (“which allows us to be able to teach successfully to other people”). Importantly, individuals can exhibit strength in one intelligence while showing weaknesses in another, although, generally speaking, ability tends to be correlated across these categories. For example, someone with one of the several Autism Spectrum Conditions (ASCs) might be seriously handicapped in the Social Domain, partially so in the Sensorimotor Domain, and gifted everywhere else.

Bloom’s Taxonomy
    Benjamin Bloom developed Bloom’s Taxonomy to classify educational learning objectives. These were sorted into three general domains of learning: the cognitive domain (remembering, comprehending, applying, analyzing, synthesizing, and evaluating); the affective domain (receiving, responding, valuing, organizing, and characterizing; this includes feelings, values, appreciation, enthusiasms, motivations, and attitudes); and the psychomotor domain (perception, set, guided response, mechanism, complex overt response, adaptation, and origination). He also developed a taxonomy of kinds of knowledge under which to organize educational goals: knowledge of specifics, terminology, specific facts, ways and means of dealing with specifics, conventions, trends and sequences, classifications and categories, criteria, methodology, universals and abstractions in a field, principles and generalizations, and theories and structures. Here again, we have a taxonomy that’s more like a wish list than a product of rigorous scientific analysis, but this is true of most taxonomies in most of the softer fields of science. The ideal, as we saw with Howard Gardner’s intelligences, is to allow them to evolve and get better over time.

Piaget’s Stages
    The theories of Jean Piaget and the Neo-Piagetians, that rockin’ band of developmental thinkers, will need to be accommodated here as well. Piaget charted four levels of cognitive development, which very roughly follow a measurable progression in age: Sensorimotor (0-2 years, developed in six stages: simple reflexes, first habits and repeated actions, habit development, coordination of vision and touch, experimental behavior and manipulation of objects, and internalization of early schemas and mental representations); Pre-Operational (2-7 years, developed in two stages, symbolic function and intuitive thought); Concrete Operational (7-11 years, rational problem solving and inductive reasoning, but still unequipped for abstract thought); and Formal Operational (11 to early adulthood, abstract thought, vicarious trial and error, and early metacognition). Subsequent thinkers have added a great deal by way of refinement, and allowing for individual differences, but have not overturned the basic concept. Kieran Egan has suggested that we also look at five stages of understanding: somatic, mythic, romantic, philosophic, and ironic. Lawrence Kohlberg has discussed moral development as passing through preconventional and conventional into postconventional stages. Michael Commons has suggested stages beyond the four, termed systematic, meta-systematic, paradigmatic, and cross-paradigmatic. Piaget also identified two basic processing modes by which we incorporate our new information: assimilation and accommodation. In both cases, there is already a body of accumulated experience that new information must be integrated into. With assimilation, the new finds a place to fit within the old. With accommodation, the new information presents a challenge or stretches an envelope, and the body of accumulated experience must itself adapt to the new information.

More Psychologists
    Kara Weisman et al., in “Rethinking People’s Conceptions of Mental Life” (2017), attempt to chart the major dimensional axes in our perceptions of mental life. They challenge the work of Heather Gray et al. (2007), who find two: experience and agency, or afference and efference, or consciousness and will. Weisman finds another layer, “three fundamental components of mental life - suites of capacities related to the body, the heart, and the mind - with each component encompassing related aspects of both experience and agency.” “The first factor corresponded primarily to physiological sensations related to biological needs, as well as the kinds of self-initiated behavior needed to pursue these needs—in other words, abilities related to the physical, biological body… . The second factor corresponded primarily to basic and social emotions, as well as the kinds of social-cognitive and self-regulatory abilities required of a social partner and moral agent… . The third factor corresponded primarily to perceptual–cognitive abilities to detect and use information about the environment, capacities traditionally associated with the mind.” The method, for both research teams, isn’t neuroscience but bottom-up self-report by volunteer subjects; nor does “heart” refer to the cardiovascular system. Just as interesting, the examination was organized around a seven-part “a priori conceptual analysis of candidate ontological categories of mental life,” these being affective experiences, perceptual abilities, physiological sensations related to biological needs, cognitive abilities, agentic capacities, social abilities, and a miscellaneous category that included “being conscious, being self-aware, experiencing pleasure, having desires, telling right from wrong, having a personality, experiencing pride.”
    It might seem odd to some, but the many systems and layers of symbols from the Western Mystery Tradition, disparagingly known as the Occult, along with its equally elaborated counterparts in the East, might be integrated into a system developed here, since these ideas are largely projections of internal cognitive processes. Most people already know that Mercury the messenger stands in for articulation and communication, and Venus for the hubba-hubba dimension, and some of our still higher aesthetics. These semiotics have had centuries to millennia to develop in resonance with the human psyche, such that they are bound to tell us something about ourselves, even if they say little or nothing about any metaphysical realities they may claim to represent. These might best be added after the fact to a system developed with more of modern neuroscience in mind. Carl Jung, of course, is known for making inroads in this area, as are presenters like Joseph Campbell. Their work is often seized upon and run with by folks with more limited critical thinking skills, such that it may only take a self-help author three sentences to distort the collective unconscious into universal consciousness. And no, the Tarot cards are not archetypes. These systems of symbols still have things to tell us about ourselves.
    Many authors have worked on smaller pieces of the larger puzzle here. For instance, John Stuart Mill distinguished the moral and intellectual causes of fallacies, with moral causes belonging to human nature and intellectual to culture. Tversky and Kahneman (1974) attempted to integrate our cognitive heuristics with cognitive biases, at least with respect to the assessments of probability so important to decision making. But we can make a convincing counter-argument that our heuristics are something quite other than cognitive biases: they merely leave us subject to error due to their inherent limitations. Gigerenzer (2002) has responded to this with a partially articulated “heuristic toolbox” or “adaptive toolbox,” with stronger emphasis on the usefulness of heuristics, despite their proneness to bias and error. Correia (2011) has taken steps to interrelate cognitive biases with logical fallacies, asserting that our “motivation affects argumentative reasoning at different levels (selective evidence gathering, selective choice of premises, biased interpretation, and fallacious inference) … yet arguers are themselves unaware of the fallacies they commit.” Paul Thagard (2011) does a decent job of questioning the efficacy of critical thinking and informal logic in getting to all of the roots of our erroneous ways: “Overcoming people’s motivated inferences is … more akin to psychotherapy than informal logic.” And “Human inference is a process that is multimodal, parallel, and often emotional, which makes it unlike the linguistic, serial, and narrowly cognitive structure of arguments.” Arguments aren’t usually the foundations of beliefs and decisions. These are more often rationalizations after the fact. In effect, we tend to get causality backwards, viewing our accounts of things retrospectively as causes. This is a most important observation.
    Dual process theory goes back in name to Jonathan Evans in 1975, though its concept goes back at least to William James. The central thesis is that we have two main processes through which our thoughts arise. System 1 is more ancient, involving the older parts of the brain, where affect is much quicker to respond. Its processes are faster, more automatic, preconscious, instinctive, emotional, conditioned, or reflexive. The heuristics are evolved and innate. System 2 is slower, more conscious, deliberate, deliberative, abstract, and explicit. System 1 runs with remembered associations, while System 2 will look to categories and rules. System 1 tends to be immediate, situated, or contextualized, System 2 more dissociated, abstracted, and generalized. System 1 is subject to deliberate modification via roundabout processes and periods of learning and reconditioning, often involving gradual changes in the perception of relevance and value, while System 2 can be reasoned with more directly. There is some correlation here with functions in the vmPFC and dlPFC, respectively, and of course we also have the ancient dichotomy of heart or gut versus head.
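    A programmer’s caricature of the two systems, with no pretense of neural accuracy: System 1 as a fast cache of remembered associations, System 2 as a slow, explicit procedure whose answers can migrate into the cache with practice, much like the roundabout reconditioning described above. All names and timings here are invented for illustration.

```python
import time

# Dual-process caricature: fast associative lookup first,
# slow deliberate reasoning as the fallback, with learning
# modeled as answers migrating into the fast cache.

system1_cache = {"snake-shaped object": "jump back"}

def system2(problem: str) -> str:
    time.sleep(0.1)  # slow, serial, deliberate
    answer = f"reasoned response to {problem!r}"
    system1_cache[problem] = answer  # practice makes it automatic
    return answer

def respond(problem: str) -> str:
    if problem in system1_cache:  # fast, automatic, associative
        return system1_cache[problem]
    return system2(problem)

print(respond("snake-shaped object"))  # instant, from System 1
print(respond("tax form line 42"))     # slow, via System 2
```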

Taxa
    All of the above are in some way attempts to form taxonomies of cognitive processes. Some are more successful or more complete than others. No one system sufficed for the needs here. In compiling this, I kept a running list of processes not enumerated or alluded to above, or underrepresented, with the idea that these, too, should find a place in any taxonomy I might propose. Sometimes I would add to this list by unscientifically sitting down and cogitating, and by adding notes inspired by the many books listed in the Bibliography. This is a selection from that list, and is intentionally offered in no particular order: niche exploration, sense of competence, emotional sanctuary, narrative and storytelling, emotional reciprocity, cognitive load reduction, access to information, situational problem solving, emotional disengagement, sensory metaphor, conceptual metaphor, metaphor and analogy, patterns of movement, precursors to grammar, attentional blindness, framing issues, STEM thinking, metacognitive functions, universe of discourse, semiotics, emotional learning, mental exhilaration, improvisation, theory of mind, gender-specific cognitive traits, diplomacy, pattern seeking, syncretism, agency or executive control, cognitive satisficing, apperceptive mass, holistic or systemic thinking, working memory, interpolation and extrapolation, revaluation of values, habit breaking, emotional self-repair, risk assessment, fight or flight, efference copies and mirror networks, metastrategic knowledge, mindfulness and introspection, and perspective taking.
    None of the above arrangements of ideas, or parsings of our mental life, are universally accepted. None are really scientific in the way physics, chemistry, and biology are scientific. And neither is this anywhere near a complete accounting of our cognitive processes. It’s simply a representative sampling of the range of ideas that people have come up with in fields like cognitive psychology. I eventually arrived at my own list of ten cognitive and anticognitive domains, introduced below. I would have been just as happy if the number were nine or eleven, but ten seemed to be what was needed for this purpose. We have cognitive censors, filtration, and error potential at all levels. A cognitive domain, as used here, isn’t a part of brain geography, or a brain lobe, or an evolved neural module. The taxonomy used here is really more subjectively practical than that. The order in which the domains are given isn’t entirely sequential, or entirely developmental, or ontological. It’s mostly just convenient, and a more or less logical progression. Understand that some overlap and fuzzy boundaries are to be expected. We might also regard the categories as clusters of typical life problems to solve, together with our collections of strategies for solving them.

2.1    Sensorimotor Domain, sensorium, mental inputs and feedback
2.2    Native Domain, evolved heuristics, the mind we’re born with
2.3    Accommodating Domain, mental construction and adaptation
2.4    Situational Domain, situated problem solving and toolkits
2.5    Emotional Domain, affect, relevance, value, and revaluation
2.6    Personal Domain, self-schema and scripts, and their defense
2.7    Social Domain, participation, belonging, roles, and role models
2.8    Cultural Domain, education, hive mind, and social construction
2.9    Linguistic Domain, representation and modeling of reality
2.10  Metacognitive Domain, mindfulness and self-determination




2.1 - Sensorimotor Domain

Sensorium, Perception, Semantic Memory, Efference, Play, Art,

Imagination, Embodied Cognition, Sensory and Conceptual Metaphor

Sensorium
    A sensorium is the aggregation of the sensory faculties of an organism, considered as a whole. The sensory world that we live in is our Umwelt: it’s the world as it’s most directly experienced by individual organisms. The gathering of sensory experience into memory is about the first thing a new creature goes to work on. It isn’t all passive, either, as the fetus is processing feedback while it kicks at the walls of the womb. Once born, we’re all about the bright colors, the well-lit, macro detail, clean edges, movement of figures on ground, soothing sounds, gentle touch, nipples, milk, and faces. This first domain combines Gardner’s musical-rhythmic and harmonic, visual-spatial, and bodily-kinesthetic intelligences. It’s also Piaget’s first developmental stage, pre-symbolic, pre-representational, and pre-reflective. This domain, as with the next, carries no intrinsic motivation to distort information or deceive us. Inherent anticognitive processes here are nothing more than the natural limitations of sensation and perception. However, these natural weaknesses may still be exploited by others for purposes of deception or influence. One of the better known examples of this is subliminal advertising, where priming and triggers are presented to the senses below the threshold of awareness.
    We have far more than five senses, even excluding the several senses that we have of ourselves inside our own skin. We can be said to have a separate sense for every specialized afferent neuron, which would give us five senses for taste alone. And we also seem to have neurons evolved to sense states and events within the brain itself. When we take the world presented to us by our senses and sensory perceptions as the real world, it’s known as naive realism, or some version of common sense. Most of us, most of the time, don’t really venture beyond this naive sensorimotor representation of the world. We have to assume that this is a similar experience between all or most members of a species, or even genera, and somewhat less so for a taxonomic family, and by the time we belong to different orders, as with bats and cetaceans, perhaps wildly different.
    We sense the outer world with exteroception: the familiar vision (rods plus RGB color cones), hearing, taste, smell, and touch. The sense of touch is the least often articulated, but it should include heat gain and loss, other nociception (harm or pain), mechanoreceptors for fine and crude pressure and stretching, and erogenous stimulation. And then we get touch using multiple neurons, as with two-point discrimination and the distributed touch that makes reading Braille possible. Inside the skin, we sense with our interoceptors, with dozens more receptor types. Kinesthetic senses tell us of the tension in our tendons and muscles and the positions of our bones. Our vestibular senses tell us of our balance and orientation, with the otolithic organs registering linear acceleration and the semicircular canals the angular. The Buddha referred to the mind’s sense of itself and its own mental objects and functions as another sense in addition to the five that are commonly mentioned. Here, some of our awareness of the goings-on in our minds, such as memories and expectations, can be treated in the same category as sensory experiences. This could perhaps be called cerebroception. These sensations are processed by the mind in much the same way as new sensory input. The processes here are normally automatic, and they include the sensations of our neurochemistry (like feelings and emotions), of space or scale, and of the passage of time.
    With so many dozens of types of nerve endings, we still have a woefully incomplete picture of the universe, our place within it, and even of our own biological status. We see only a tiny fraction of the electromagnetic spectrum, and we hear only a narrow range of available vibrational frequencies. Our limitations of scale keep us from seeing how much of a solid table is only empty space, within and between its atoms. Neither are we aware of the foreign, non-human microorganisms within us that greatly outnumber our own human cells, even though they weigh only a few pounds in total. We’re moving at barely comprehensible velocities through the universe, both linear and angular, in a number of different scales and measures, but we feel nothing except under local acceleration. This world or umwelt used to be all we had to work with, and for our needs at the time it was practically enough.
    Arthur Eddington pointed out that our grasp of life in the sea is dependent on the nets we cast. Sensory extension or augmentation relies on technology, instrumentation, and culture, and is not purely a function of the sensorimotor domain. But once the technology is in place and ready for use, it might as well be. We can look at a landscape from above in false-color infrared, and we might as well have wings and eyeballs that enable us to do that. Ditto with sound amplifiers, microscopes, telescopes, radio telescopes, spectrographs, sonographs, and oscilloscopes. Until we can do brain implants and create new kinds of neural nets, or learn to retask existing nets with new functions and software overlays, any new senses we make will have to be translated into the sensory qualia we already know, representations that our neurons are already prepared to make. But we are already starting to retask some senses to work in place of others, as with turning the sense of touch, read across someone’s tongue or back, into a kind of sight. The processing that makes this work is a level up from raw sensation, and under the heading of perception.
    Sensation can be attended, ignored, exhausted, or inhibited. The faculty of attention may be drawn to a stimulus by several conditions, such as pain, salience, relevance, novelty, or urgency. Emotions get involved here as well, of both the pleasurable and painful sorts. Except when a sense shuts down from overstimulation, there is little outright rejection of experience, beyond what is done to the faculty of attention by factors inside and out. Inattentional or perceptual blindness is the failure to perceive something because it’s unexpected. Hysterical blindness may be driven by prior trauma. Inattentive or non-contemplative intelligence, such as driving on autopilot and somehow making it home safely, tells us that conscious attention is not a necessary component of working sensation.

Perception
    Perception is what becomes of our raw sensory data upon processing and interpretation within the brain. The brain gets involved and starts combining information, often from multiple sensory sources, using the sensory inputs to construct meaningful patterns, decoding raw sensory signals and recoding them as experience. This may be from one sense organ only, as figure-ground pictures are developed in the occipital lobe; or from two organs of the same type working together, as with binocular vision and binaural hearing; or two different senses in combination, as flavor combines taste and smell. We also have intermodal perceptions, wherein sensations are redirected to different parts of the brain, as when we hear words while reading lips, or when the blind teach themselves to echolocate and use their occipital lobes to perform spatial mapping. We also have inherited perceptual modules for interpreting social cues, such as the recognition of at least six basic emotions on the faces of others. And we can get a lot of social information in the dark from another’s tones of voice. The meaning of a perception lies in how it alters our cognitive structure or behavior, in how it affects us, or alters our affect. It’s this that takes something personally, and makes it our own.
    Perception is our first and earliest stage of mental inference. It’s based on raw sensory input that’s juxtaposed with our memories of former perceptions. We are driven to build predictable representations, constructs, or models of the world through our senses. Early in life, violations of our expectations, as with magical illusions or perceiving impossible-looking events, really get our full attention. We are ever on the lookout for gaps between expectation and experience. Later on, such things may frighten us, because we thought we’d outgrown all of that. Our expectations of object permanence, an early and normally reliable thing to learn, can also lead to reification, the assumption that a passing thing is a real thing, and this can have us regarding processes as things. Where did that lightning go? Where did my lap go when I stood up? Where did my father’s spirit go when he died?
    We’re limited in our perspective. Sometimes what we regard as entities are only artifacts of our point of view. The temporal scale in which we live has us perceiving lightning as a thing, and the Earth as fairly stable. But there are no rainbows without the eyeballs to view them. Nothing like green exists in the outer world: that’s just the frequency of light that plants want nothing to do with. Where the mind is inclined to see constancy or persistence, phenomena like change blindness may occur, but not where we’re watching for motion. The brain may force a congruity between two senses relating incompatible things. The brain will often base perceptions on what’s usually correct, and use this to make unconscious inferences about the world before anything rises into awareness. The adjusted perception will then be reported to awareness unambiguously as fact. Perceptual priorities will draw more attention to some qualities of experience over others, as to motion over stillness, threat over security, pain over comfort, dissatisfaction over satisfaction, and loss over gain. A great deal of perception is confabulated or interpolated on the fly, in order to explain the unexplained, as we might see in mirages, and dreaming, and the sometimes terrifying hallucinations of sleep paralysis. The latter are often perceived as the work of a succubus, or an alien abduction. Misdirection, exemplified by magic tricks, and the optical and cognitive illusions that result from incorrect unconscious inferences, are sensorimotor anti-cognitives that occur at this perceptual level of sensing. They are also the funnest and most educational way to call the competence of the mind into question, and so ought to be studied from early childhood onward.
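    Readers who like a concrete model may note that this kind of unconscious inference is often formalized as Bayesian perception. The little Python sketch below is only an illustration of the principle, with invented hypotheses and numbers, not a claim about neural implementation: a prior for what’s usually correct combines with an ambiguous sensation, and only the winning hypothesis is reported to awareness.

        # A toy model of perception as unconscious inference (Bayes' rule).
        # All hypotheses and numbers here are invented for illustration.

        prior = {"stick": 0.95, "snake": 0.05}        # what's usually correct
        likelihood = {"stick": 0.30, "snake": 0.70}   # fit to the ambiguous shape

        # Posterior is proportional to prior times likelihood.
        unnormalized = {h: prior[h] * likelihood[h] for h in prior}
        total = sum(unnormalized.values())
        posterior = {h: p / total for h, p in unnormalized.items()}

        percept = max(posterior, key=posterior.get)
        print(posterior)  # roughly {'stick': 0.89, 'snake': 0.11}
        print(percept)    # 'stick' -- reported unambiguously, as fact

    The residual uncertainty is simply discarded before anything reaches awareness, which is the point being made above.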

Semantic Memory
    Our memories of sense perceptions are stored intertwined with connections to similar or otherwise associated events. New perceptions are continually being compared to the old for both congruence and deviation. One way or another, some sense will be made of this new thing, even where some confabulation of the past or distortion of the present is required.
    Our understanding of the relationships between language and the brain and the brain’s evolution is still very much in flux, and it’s the subject of a lengthy discussion later. But at this early stage of cognition, aspects of our linguistic processes come to us by way of the senses, and these perceptions are stored just like other memories. This refers to the sounds of words and phonemes, the sights of gestural signs, symbols, letters, and words, and the distributed textures of Braille. Even when they are generally stored in their own region of the brain, they will make interconnections to other memories of sense and experience. Words are sensed visually, auditorily, and tactilely. It may well be that these capacities evolved on top of, or as extensions of, older functions, by which we perceived simpler sets of gestures, vocal calls, cries, and utterances in our primate troops. Semantic memory regards these sensations, with their indirect sources other than raw sensation, as though they were sensed facts themselves. This may be no more articulated at this perceptual level than a simple lexemic gist, without the morphological operations that turn these into well-behaved words in syntactical contexts. Language is the card catalog that we mistake for the library. It contributes a lot in terms of access to memory and even to its organization, but it isn’t the substance of our memories, or the remembered experience. It isn’t the lexemes that contain our sensorimotor memories or their emotional associations and triggers. That works the other way around.

Efference
    The sensory world is not just a matter of passively receptive processes. Our kinesthetic and vestibular senses are integrated within the brain with our efferent motor functions. Any activities we may be performing, as with dance, acting, and the martial arts, are very much a part of our sensorium and our sensorimotor domain. Gardner’s idea of bodily-kinesthetic intelligence includes control of our physical motions, eye-hand coordination, athletic ability, the skillful manipulation of materials, clarity of aim in goal-oriented behavior, bodily awareness, and behavioral training. Further, ideas will sometimes find their way out of the body and into expression even without conscious attention. This is called the ideomotor effect. Involuntary microexpressions are one example. Sometimes the ideomotor effect will be exploited to gain access to the subconscious, as with several forms of divination, automatic writing, channeling speech, speaking in tongues, questioning pendulums, dowsing, and Ouija boards. And sometimes subconscious content will just slip out in a Freudian kind of way.
    Heather Gray et al., in “Dimensions of Mind Perception” (2007), suggest using two axial dimensions to relate our subjective perception of mind: the sense of experience (consciousness and afference, of being affected) and that of agency (will, efference, or being effective), with each of our many mental functions loaded with an estimable portion of each. Perhaps our feelings and emotions could be parsed along those lines as well. Not mentioned in the study, but interesting nonetheless, is that we are learning more now about the role of efferent or effector neurons and neuronal systems in perception and learning, even where they lead to no action. Neurons with efferent functions, responsible for initiating behavior, are also involved in cognitive function and simple learning behavior, even where no movement is or will be involved. When we think of force, our muscle memory gets involved in understanding the idea. The brain integrates afferent and efferent signals as it models the world. This is a part of a larger concept called embodied cognition.
    Stephen Asma (2017) offers, “Rather than being based in words, meaning stems from the actions associated with a perception or image. Even when seemingly neutral lexical terms are processed by our brains, we find a deeper simulation system of images. When we hear the word cup, for example, our neural motor and tactile systems [will be] engaged because we understand language by simulating in our minds what it would be like to experience the things that the language describes, as the cognitive scientist Benjamin Bergen puts it in Louder Than Words (2012). When we hear the word cup, the motor parts of our brain pick up a cup.” While initial enthusiasm over the discovery of mirror neurons has dampened some, we’re still looking at mirror neuronal circuits that tie perception and activity together in a mutually informative way, a way that’s integral to our learning processes, particularly social and cultural learning.
    When we prepare to speak words out loud, our brain creates a copy of the instructions that are sent to our lips, mouth, and vocal cords. This copy is known as an efference copy. The efference copy dampens the brain’s response to self-generated vocalizations, devoting fewer mental resources to these sounds because they are so predictable. All of us hear voices in our heads. Problems tend to ensue when our brain is unable to tell that we are the ones producing them.
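    Computational accounts often describe this with a forward model: a copy of the motor command predicts the expected sensation, and attention goes mostly to the difference between prediction and arrival. The Python sketch below is a deliberately crude illustration of that subtraction, with invented numbers and a made-up gain, not a model of real neural circuitry.

        # Toy illustration of an efference copy damping self-generated sound.
        # The 0.9 gain and the loudness units are invented for illustration.

        def forward_model(motor_command):
            """Predict the sensory consequence of our own vocalization."""
            return 0.9 * motor_command  # assume a nearly accurate prediction

        def salience(actual_sound, predicted_sound):
            """The brain attends mostly to the unpredicted residual."""
            return abs(actual_sound - predicted_sound)

        command = 10.0            # instructions sent to the vocal cords
        own_voice = 10.0          # the sound those instructions produce
        efference_copy = forward_model(command)

        print(salience(own_voice, efference_copy))  # 1.0: our own voice, damped
        print(salience(8.0, 0.0))                   # 8.0: an external voice, undamped

    When the forward model fails, our own inner speech arrives unpredicted, with full salience, and can be experienced as someone else’s voice.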

Play
    There are plenty of good evolutionary explanations for intrinsic drives or motivations to seek out raw stimulation, for manipulating objects, for babies mouthing whatever they can grab, for raw learning, for leaping and skipping wildly around to no apparent purpose, for exploring and making sensory maps and catalogs of the environment to have on standby for later. It isn’t far into a toddler’s life that investigating the world advances into play, a phase which can last, with some luck, until old age, senility, and death. Despite many prominent psychologists’ claims that play is a uniquely human endeavor, it’s commonly seen throughout the mammalian class, especially during early development. Play can be spontaneous or purposeful or directed. It isn’t just about developing and improving skills that evolution tells us will be necessary later in life. It isn’t only about collecting knowledge and locations of affordances. It’s also about being embodied and learning life, and learning to live with the whole of our being. Its intrinsic nature is present-centered, but we still learn and remember sensory experiences that can later be used in simulating imagined predictive and choice scenarios, and even as metaphors to represent ideas.
    Procedural memories are learned sequences of behavior, like knowing how to walk, swim, do the fox trot, or whistle. Developing procedural memory is an original function of purposeful or directed play. Scripts may be learned using instrumental or intermediate forms, such as pianos, bows and arrows, bicycles, or other games, props, and toys, while still within this sensorimotor domain. We learn to follow procedures to perform the behavior we intend. All will be integrated in awareness and memory with the afferent feelings we have in performing them, although when procedural memory becomes more fully implicit, the self-conscious aspects of the experience might disappear almost entirely. It’s like riding a bicycle.
    Pretend or make-believe is a form of play that begins early in childhood. This is usually encouraged by adult caregivers, who will tend to be eager to suggest the forms it will take, thinking that this is where socialization and enculturation really begin. Little boys are given Tonka trucks and either guns or science sets, and little girls get Barbies and Easy-Bake ovens. They may get these things from Santa Claus, who is somehow related to Jesus. A few months later, they gather colorful rabbit eggs, also somehow related to Jesus. It’s seldom questioned whether it’s really necessary for children to believe the make-believe, or whether it might be better to start telling them by age three that Santa is just pretend, but let’s play anyway just because playing is fun. The consensus seems to be that it’s best to let them believe until they figure it out. But if you look closely at how these children have grown up cognitively, at the fabulous forms of consensus they share as adults today, at the things they have come to believe, and at the unlikelihood of their ever figuring those fantasies out, it might be wise to question this. It may well be that a truly fictitious sense of reality gets cemented into these impressionable young minds at such an early age that it becomes nearly impossible to weed the fiction out later. Let’s have some research here. Meanwhile, we are nearly certain that there’s still nothing harmful in going to the creek and turning over rocks to see what critters wriggle there, and in playing like we’re scientists while doing so. Pity all those fireflies and tadpoles that get caught, though.

Art
    Art is all about configuring and externalizing a sensual experience, both to experience it this way for ourselves and to share the experience (and share something of ourselves) with others. And usually, both for ourselves and others, this is with some hope to elicit an affective response. Aesthetics is its philosophical study, but neuroscience is coming into the picture with some interesting observations. According to Robert Sapolsky (2017) “we use the same circuitry on the orbital PFC when we evaluate the moral goodness of an act and the beauty of a face.” Similarly, the sense of disgust, in the activation of the insula, applies to both moral behavior and personal hygiene. With our aesthetic sense we’re integrating new and original experiences into the brain in novel and creative ways.
    Gardner’s musical-rhythmic and harmonic intelligence means the skillful perceiving and/or effecting of pitch, melody, tone, rhythm, voice, percussive rhythm, composition, chant, and song. The effects, inducing ecstatic states, from sentimentality to sacredness, from transport to trance, are well known, with or without dance accompaniment. There are lots of ways to approach music: productive, spontaneous, appreciative, and critical. It might be argued that half of jazz, three-fourths of opera, and the whole of hip-hop is little more than a terrible, terrible mistake, but I’ve been told this is a question for aesthetics, not epistemology. How much of a difference is there between these two? There are taste experts trusted as having objective, unbiased, and verifiable knowledge, at least as it relates to market decisions, but since when do market decisions have anything to do with truth?
    Gardner’s visual-spatial intelligence concerns spatial judgment and the mind’s ability to visualize spaces in relation to others. The skill set includes remembering images and details, awareness of context or surroundings, visual discrimination, recognition of forms, projection, mental imagery, spatial reasoning, mental rotation, image manipulation, mental mapping, modeling, the mental creation of 3D spaces, and the duplication of inner or external imagery. It includes painting, drawing, sculpture, and its grandest, costliest, and riskiest form: architecture.

Imagination
    Paradoxically, a special case of our procedural memory is improvisation, composed of pieces of procedural memory and techne strung spontaneously together. This is most clearly illustrated after learning how to play a musical instrument and then improvising a tune. It doesn’t happen without familiarity with the component bits, but the overall theme can be new. Imagination is a kind of improvisation, an ancient form, a less organized or systematic form of running vicarious trials, spitballing, or brainstorming. Not all of it is confined to the sensorimotor domain, but a lot of it occurs here using both semantic and procedural memories of sensorimotor events. Since the word imagination includes the lexeme of image, we tend to think of imagination as pictures, either still or in motion. The word idea followed the same etymological track, deriving from the Greek idein, to see. But it can arise in any sense used here, from seeing pictures in our minds, to musical improv, to creative cooking and perfumery, to playing a lover’s body like a musical instrument. Insight is an improvisation as well. Wolfgang Köhler, a co-founder of gestalt psychology, described insight as the dynamic or “sudden reorganization of a perceptual field.” This flexibility is fundamental to both creativity and problem solving. In the sensorimotor domain we see it working on the fly as we string together the semi-random images that emerge in our dreams. Of course we also see it working on the fly in the native domain in our use of heuristics, and in the situational domain when our problem-solving challenges entice us outside the box.
    The mind is uncomfortable in chaotic states and will create meaning where meaning can’t be found. It makes up stories to connect scattered images and pieces. Imagination may also be seen as a way to make use of more random or surprising associations of memories. This is all prior to language and any deliberate art, but despite the great age of the faculty, it may also be what is gradually introducing us to freedom, or agency, as it allows us to create new options, goals, dreams, visions, and values that bear little resemblance to their component parts and don’t share their limitations. This thought would not surprise science fiction writers or inventors like Francis Bacon, Leonardo da Vinci, Jules Verne, or Nikola Tesla. But once we have an intent to imagine, like these gentlemen certainly did, it’s likely we have skipped ahead to the next domains in our taxonomy. In the sensorimotor domain, improvisations and imaginings are either coming to us or forming spontaneously out of the things and thinks we have already learned.
    Stephen Asma (2017) writes, “Thinking and communicating are vastly improved by language, it is true. But thinking with imagery and even thinking with the body must have preceded language by hundreds of thousands of years. It is part of our mammalian inheritance to read, store, and retrieve emotionally coded representations of the world, and we do this via conditioned associations, not propositional coding. Lions on the savanna, for example, learn and make predictions because experience forges strong associations between perception and feeling. Animals appear to use images (visual, auditory, olfactory memories) to navigate novel territories and problems. For early humans, a kind of cognitive gap opened up between stimulus and response – a gap that created the possibility of having multiple responses to a perception, rather than one immediate response. This gap was crucial for the imagination: it created an inner space in our minds. The next step was that early human brains began to generate information, rather than merely record and process it – we began to create representations of things that never were but might be.”
    Improvisation can draw upon any contents of the sensorimotor domain that have accumulated in memory, including the associations to the feelings and emotions being sensed as the experience registered, as well as the concurrent dynamics of bodily motion. It’s unusually free of vocalizations and even the deeper, more implicit layers of verbal thought. There isn’t time to attend to that. The discursive mind can be running offline with respect to awareness. The virtual realities or trial and error scenarios being so quickly explored in our imaginings are just a little bit freer from the demands of expectation and preconception, and are that much more open to things not seen before, like solutions to our problems, or other states of mind that we would rather be in. This is one of the characteristics of the state known as flow.

Embodied Cognition
    Embodied cognition is our complete organism’s physical participation in learning, wherein the brain, or mind, extends all the way to the fingertips, and develops its cognition out of its participatory, physical interaction with the world. George Lakoff (1980) writes, “cognition is a dynamic sensorimotor activity, and the world that is given and experienced is not only conditioned by the neural activity of the subject, but is essentially enacted in that it emerges through the bodily activities of the organism.” Assumptions about the world, even our abstract ones, are built into our anatomy. From the SEP, “cognitive processing essentially re-activates sensorimotor areas to run perceptual simulations.” Stephen Di Benedetto writes in Provocation of the Senses, “Even when decoupled from the environment [in default mode], the activity of the mind is grounded in mechanisms that evolved for interaction with the environment - that is, mechanisms of sensory processing and motor control.” And further, our neurochemistry is fully engaged in affective and emotional states that are frequent precursors to cognitive ones. Affect becomes an important part of the metaphor. The whole body is the primary instrument of cognition and its primary constraint. The coordination of sense and semantic memory is important in embodied cognition and the formation of sensory metaphors, which we use to understand more abstract ideas. The SEP tells us that “research is strong in suggesting that conceptual capacities incorporate and are structured in terms of patterns of bodily activity. Talking or thinking about objects have been suggested to imply the reactivation of previous experiences, and the recruitment of the same neural circuits involved during perception and action towards those objects would allow the re-enactment of multimodal information… . Also the pattern of interaction entertained with an object may influence the way conceptualization is done… . The fact that sensorimotor circuits get recruited, or rather, re-used for purposes like concept formation or language processing, other than those they have been established for, such as motor and sensory information processing, strongly favors modal and embodied approaches to cognition over amodal and abstract ones.”

Sensory and Conceptual Metaphor
    Concepts are represented in human memory by the sensorimotor systems that underlie interaction with the outside world. We think in these metaphors. Life is short, I’m up when happy, down when blue, lost when confused. My train of thought has goals, and gravitas, affection is warmth, judgment is cold. I swallow my pride, carry my burdens. I will grasp or see when I understand. Parts of the universe unavailable to our senses may also be unavailable to our umwelt and its modeling capacity. Limited availability of some experiences as metaphors also contributes to our inability to grasp abstract concepts (the particle vs wave, mass vs gravity, electricity vs magnetism, and space vs time paradoxes, for instance). Our sensory and conceptual metaphors are often regarded as the same thing, but here I’ll regard the latter as similar to the former, but denizens of the next domain, one step removed from the strictly sensational, and one step cognitively cooler.
    Because humans presumably share a fairly similar umwelt, there are what we assume to be universal experiences here. But our memories accumulate through experience, and so they’re colored by individual associations with contexts, and the different emotions felt at the time of the learning. Sensory metaphors carry some of the affective charge of their subjective experience. It’s hard to think of force without feeling muscle strain, and maybe there is some memory there of when that went too far. These associations enrich the metaphors, as hyperlinks enrich hypertext, or as old-style card catalogues enriched old-style libraries. It is, however, important to cleave to what we most have in common when we make up words to assign to these metaphors. Otherwise we just get poetry and everyone leaves the conversation confused.
    Lakoff also notes, “It is also important to stress that not all conceptual metaphors are manifested in the words of a language. Some are manifested in grammar, others in gesture, art, or ritual. These nonlinguistic metaphors may, however, be secondarily expressed through language and other symbolic means. Contrary to long-standing opinion about metaphor, primary metaphor is not the result of a conscious multistage process of interpretation. Rather it is a matter of immediate conceptual mapping via neural connections.”
    Peter Carruthers asserts that “there is strong evidence that mental images involve the same brain mechanisms as perceptions and are processed like them.”  He calls this the Interpretive Sensory-Access (ISA) theory. Keith Frankish offers, “The conscious events we undergo are all sensory states of some kind, and what we take to be conscious thoughts and decisions are really sensory images – in particular, episodes of inner speech.” And yet we still try to assign our lofty thoughts to higher realms of ideals and reason. We pluck our lovely cognitive lotuses from the ick and muck below. The better-grounded forms of Asian philosophy have long cautioned us against doing this. Laozi wrote, “Just as obtaining tallies of chariots will not be (real) chariots, do not long to dazzle & jingle like jade: clunk & clatter like rocks” (Ch 39). Sapolsky cautions, “Our metaphorical symbols can gain a power all their own. But insofar as metaphors are the apogee of our capacity for symbolic thought, it’s thoroughly weird that our top-of-the-line brains can’t quite keep things straight and remember that those metaphors aren’t real.”
    Some use the terms sensory and conceptual metaphors interchangeably. Both map one domain of experience onto another to establish a likeness, comparison, or equivalency. Here we will insist that sensory metaphors have a primary sensory perception as what is known as their source domain, while conceptual metaphors describe connections made everywhere else that metaphors may be involved, as between concept and concept, concept and feeling, or concept and lexeme. Both of these are important even in the hard sciences like physics. It’s hard to think of a vector without some memory of going somewhere at some rate of speed, if only on a subliminal level. It’s important to remember the limitations of these metaphors, particularly when they are applied to divide things like continuums and spectrums. Sensed properties may not exist in the things being mentally represented. Not only does green not exist in plants, it names the very frequency of light that plants have nothing to do with.
    Some of the challenges that the Sensorimotor Domain presents to the Native Domain and its Evolved Heuristics and Processes are discussed in Chapter 3.2.



2.2 - Native Domain

The Other Original Mind, The Evolving Mind, Our Big Brains,

Evolved Heuristics and Processes, Modest Modularity of Mind

“And then it sought to get through the ultimate walls with its head - and not with its head only - into the other world. But that other world is well concealed from man, that dehumanized, inhuman world, which is a celestial naught; and the bowels of existence do not speak unto man, except as man.” Nietzsche, Thus Spake Zarathustra, tr. Common

The Other Original Mind           
    Both neuroscience and genetics suggest that any native mental contents would need to be genetically or epigenetically encoded, and DNA doesn’t seem to work like that. All we’re given natively is biological structures that enable processes. But we can say that nothing can enter the mind in any kind of memorable form that isn’t first processed, and that those processes must begin with a native form of processing that underlies any later learning. Recall that Leibniz added “nisi intellectus ipse” to “Nihil est in intellectu quod non ante fuerit in sensu” (there is nothing in the mind that was not first in the senses, except the intellect itself). This chapter is about the domain of that native or original intellect, the one that precedes our social and cultural learning, although it develops alongside them. These are the faculties that interpret the sensorimotor world. This is different from chūxīn, the original or beginner’s mind of Chan or Zen Buddhism, regarded as the mind we have at birth, without content, and also the consequence of enlightenment. While that’s a very fine and lucid state to be in, it’s still a poetic description, and it’s not to be reified as an ontological state.
    Cultural relativists, postmodernists, deconstructionists, and the tabula rasa theorists are all still trying to argue that there’s really no such thing as human nature. Pursuant to the infamous assassin’s creed, nothing is true, everything is permitted. It may be to some extent true that our ideologies can overwrite original mind, and even replace some of our basic instincts, such as instincts for survival. The suicidal waves of infantry invasions and the mass suicide at Jonestown are proof enough. Carl Degler pointed out some of the problems inherent in “a will to establish a social order in which innate and immutable forces of biology played no role in accounting for the behavior of social groups.” Culture is not going to conquer our biology. When we can set aside culture, even for an hour or two, and really dig deep and listen, we find that ancient processes are still alive in us, and even where these are imprecise and clumsy, they usually make the good point that we have gone pretty far astray, enough to threaten our survival. We need to develop a culture that respects our original nature, even as it may seek to transform it in transcendent ways.
    We are dismissing the tabula rasa and relativist assertions here. There is a human nature and this incorporates a natural human mind, preconfigured with processing abilities, rather than epistemic content. It has our sensorium and umwelt to work with, so we’re predisposed to working with naive realism or common sense. These faculties are inherently imperfect, and often faulty, but there is no inherent motivation to err. There is only being subject to error, and to manipulation by others that can lead to error. This chapter will generally scope this native domain and cite but a few of these native processes by way of example, but the processes themselves, itemized and described in more detail, will be the subject matter of Chapter 3.2, Evolved Heuristics and Processes, which you might consider part two of this chapter. A few of the dozens of heuristics and processes discussed there are focusing effect, priming, object permanence, availability heuristic, confabulation, pattern detection, effort heuristic, and pareidolia. These are sorted there into the various anticognitive domains. These don’t operate within those domains but they work underneath them. This is unlike the four sets of anticognitives described in the remaining chapters of Part Three, which function within the domains themselves. Here, the heuristics are precursors to the functionality of the domain into which they are sorted. Two of the logical fallacies pertaining to this domain, the appeal to nature and the moralistic fallacy, are also discussed in Chapter 3.7.
    Some aspects of this native domain absorb what Piaget called assimilation, at least where the learning is of something new, with no closely related body of knowledge to contribute to. The remainder belongs in the next domain, the absorption of new material into a body of inter-associated prior experience or apperceptive mass. But here we have initial processing by native faculties. In the next domain it’s through learned structures. In both cases, the intellect reconstrues and reinterprets the environment to make it fit existing mental frameworks, but in the native domain, perceptions accord with naive realism.
    The foibles of the human mind in this domain are what Bacon calls Idols of the Tribe, and they describe what we’ve brought with us from deep in the paleolithic era. Richard Brodie writes, “The whole of science has been a concerted effort to foil that natural selection of stone-age ideas by our brains, and instead select ideas that are useful, that work, that are accurate models of reality.” And yet, these stone age tools got us here somehow. The conscious mind must often intercede later, and modify or reverse imperfect solutions derived from them, but they did get us through tribal living, eating enough, avoiding predation, finding our mates, and raising our offspring. David Hume writes, “Rather than reason, natural instinct explains the human practice of making inductive inferences.” Evolved heuristics have the advantage of being fast, subconscious, and effortless. Culture and language-based heuristics are more commonly discussed under the heading of heuristics, where examples of these will include such tools as rules of thumb, educated guesses, and cultural stereotyping. These things we made up, for dealing with the bounded rationality of the accommodating domain. Either kind can pass for intuitive reasoning, where they proceed without much effort or attention. Heuristics can serve four different types of positive functions. They tend to simplify or summarize datasets into bullet or talking points, reducing cognitive load to cheat sheets; they allow us to respond more quickly by assuming things and jumping the gun; they can add information and meaning to experience where this was lacking before; and they can dismiss information in bulk when this threatens information overload. All functions also have their downsides.

The Evolving Mind
    Charting this native domain is fundamental to evolutionary psychology. This new field, as explained by anthropologist John Tooby and psychologist Leda Cosmides, “is based on the recognition that the human brain consists of a large collection of functionally specialized computational devices that evolved to solve the adaptive problems regularly encountered by our hunter-gatherer ancestors. Because humans share a universal evolved architecture, all ordinary individuals reliably develop a distinctively human set of preferences, motives, shared conceptual frameworks, emotion programs, content-specific reasoning procedures, and specialized interpretation systems - programs that operate beneath the surface of expressed cultural variability, and whose designs constitute a precise definition of human nature.” What this omits to mention, however, is the treasure trove of data awaiting us when we get our human crania out of our human recta and look further back, through primatology into deeper time. In any case, much of our species’ behavior will be the product of genetically structured psychological adaptations that were conserved in evolution by solving recurring problems in our ancestral environments, particularly those concerned with shelter, tools, food, hygiene, mating, communication, and cooperation. There are also co-evolutionary adaptations to be studied, that lie partway between our nature and nurture, particularly involving software exploits of native capabilities, as language is to communication, or reason is to conceptualizing and categorizing heuristics. Many such modules are coevolutionary, roving between neural wetware and primate cultural software, and getting reinforced, gradually over deep time, by selective advantage. Being in between, we can only guess at how far some of these reach within the native domain, but we can attempt to describe those elements which underlie the further cultural development.
    For both evolved and cultural tools, it should be remembered that there are more important things than truth from an evolutionary perspective. Untruth is often more effective for getting certain survival-related jobs done, and sadly, for acquiring status and getting girls pregnant. By default, the evolved brain works out only those details about the world that we habitually find useful. But the shortcomings of our imperfect perceptions are also heavily exploited in cultural persuasion, particularly by the social engineers, propagandists, proselytizers, and advertisers. Some of our cultural software exploits native algorithms to our detriment, knitting them into whole systems, including instructions that insist that these artifices must remain whole. And some of our heuristics find this acceptable for various reasons.
    What things are in the mind that are not first in the senses? “The mind” here should include the sentient functioning of all sensory, motor, and interneurons, and all of the neurochemistry produced by associated glands. We should stop neglecting all of the glandular decision making that goes on in our heads. If we are talking heredity, we can be pretty certain that genes don’t encode eidetic, semantic, or lexemic content, or Jungian archetypes, at least as these are commonly misunderstood. The human genome is too lean to do that, but it isn’t too lean to hardwire simple neural processes with a huge variety of outputs, to which programs and algorithms might later be added. Gene expression would primarily affect neural architecture, and triggers for neurochemical production, and thus predispose the brain to learn from the environment, and subsequently respond, in a certain constrained range of ways. Any kind of content or inferential learning would only follow from that. To be called native, any faculty in this domain would have to precede both social learning and the acquisition of culture, and its foundations or underlying structure would need to precede learning. We aren’t looking for innate ideas, but for universal structures that will enable the processing of information that comes to us through the senses.
    Logically, the search for the constituent parts of human nature begins with a look at human, primate, mammalian, and chordate universals. These will be found scattered among our cultural universals. But it doesn’t end there. All that’s certain is that evolved characteristics are likely to appear somewhere in that list. We can assume that inherited traits must be genetically based and conferred by either neurological structures, modules, or behavioral predispositions that are reinforced by our glandular activity. Whether our inherited forensic devices are neurological modules or distributed nets, their existence might be inferred by their cultural universality as unconscious or pre-conscious processes in humans taken collectively. However, universality will only make processes candidates for consideration as adaptations. These common traits may simply be convergent common behaviors entrained by similar circumstances and conferring an adaptive cultural advantage. This can be behavior that simply makes too much sense to not do. A stricter stance and stronger basis for affirmative judgment would be the ubiquity of a process in primate societies. Some of the processes catalogued in Chapter 3.2 have been drawn from Donald E. Brown’s 1991 list of human universals, which is also republished in Steven Pinker’s The Blank Slate. Brown doesn’t assert or suggest they are genetic, simply that they “comprise those features of culture, society, language, behavior, and psyche for which there are no known exceptions.” Finally, we need to get past the false dichotomy of nature vs nurture, at least at the fuzzy boundaries they share. Unfortunately, that leaves many of our well-meaning attempts at a perfect taxonomy necessarily imperfect.

Our Big Brains
    Even with recent advances in fMRI, it’s still too early to map these evolved or native functions onto the geography of the brain in more than a few useful ways. And even a summary of what we do know so far would add a few more chapters. We could go on about how the ACC is involved in error detection and cognitive dissonance, or how the amygdala can frighten or anxious us into ignorance, or how the insula can confuse hygienic and moral disgust, or how your dorsolateral and ventromedial prefrontal cortices wrangle for your preferred course of action. Sapolsky’s Behave is a good book to read for that. There’s just no room here, and if there were, the data would be dated within a decade. The popular notion of hemispheric dominance, particularly in its new age forms, doesn’t really serve us here: the right brain thinks as well as the left, and it’s not just for folks who hate thinking. Understanding the triune brain remains fundamental, even in forms simplified for mnemonic use, with the reptilian (instinctual), paleomammalian (limbic), and neomammalian (neocortical) complexes. The rough-hewn translation of these to somatic, affective, and cognitive isn’t very precise, but it’s a starting point. Just a few things need to be mentioned.
    Neuroplasticity is the ability of the brain to adapt by altering neural nets, to drop or reform connections, to build new networks, and add new tasks to old networks. The brain can even deploy axonal plasticity to redirect around injuries, lesions, and other losses of function. The blind have been known to learn echolocation and to retask their visual cortices to interpret echoes as spatial perceptions. A neuronal recycling hypothesis attempts to account for how we can acquire recently invented cognitive capacities, like reading patterns of glyphs on a page as sounds, and then perceiving those sounds as complex meanings. These are exaptations, adaptive uses for functions that evolved with different roles, and spandrels, Stephen Jay Gould’s term for accidental byproducts of other adaptations that the brain has found new uses for.
    Michael Anderson (2010) writes of neural reuse, “it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions.” This doesn’t even need to involve changes to circuit structure. He offers this idea as either an alternative or partial account for both human tool use and language, and given the complexity of both of these in humans, and how little time they have had to evolve, this offers us a transitional step, to become reinforced genetically as it continues to confer selective advantages. It also helps to account for the local specification of functions across the more general neocortex, although this is a generality that’s also further specified by axon bundles or white matter thoroughfares. In our semantic memory, it also helps to account for our facility with sensory and conceptual metaphors and with their assignment to lexemes. Speaking of Vittorio Gallese’s (2005) related neural exploitation hypothesis, Anderson calls it “a direct outgrowth of conceptual metaphor theory and embodied cognition, [that] largely sits at the intersection of these two frameworks. The main claim of the framework is that ‘a key aspect of human cognition is . . . the adaptation of sensory-motor brain mechanisms to serve new roles in reason and language, while retaining their original function as well.’”
    The brain is home to some newly discovered cells called “mirror neurons.” These help us to translate things we witness with the senses into patterns of action, or at least into modeling by efferent neurons. The initial excitement is dying down a bit as these interneurons are losing their assumed modularity status and finding their more limited place within larger networks. It was just too much to attribute complex behavioral functions to individual cells that may or may not be that much different from their neighbors. Nevertheless, they appear to have an important place in learning, and particularly learning from others, as they fire in resonance with certain perceptions. Besides their functioning in monkey-see-monkey-do learning, this mirroring neuronal circuitry also contributes to our sense of empathy and our understanding of the intentions of others, which in turn plays a role in both theory of mind and self-awareness. They’ve been observed in other social primate species, and evidence of equivalent imitative resonance behavior is seen in some birds. Much exploration remains to be done by zoologists. Gordy Slack (2007) suggests that, at a minimum, “there seems to be near consensus that we are exquisitely tuned to one another’s experience and that mirror neurons help us to experience each other viscerally and directly.”
    Spindle neurons, or von Economo neurons, are large cells that facilitate communication between parts of the larger brains in the animal kingdom, like elephants, cetaceans, and hominids. More primitive forms have also been found in macaques and raccoons, and others may be awaiting discovery. In humans, they are found in the anterior cingulate cortex (involved in error detection?), the fronto-insular cortex (involved in self-awareness?), and the dorsolateral prefrontal cortex (where, as Sapolsky says, we decide to do the harder thing).
    When attention is not on the senses, when we’re awake and still, and our mind itself occupies our awareness, we’re in what’s called the “default mode network,” or sometimes the task-negative network. For Buddha, this was another, sixth sense. We may be attending ourselves, reviewing our memoirs, or thoughts of others, or memories, or imaginings, or just be lost in thought, trance, daydream, or reverie. We might be laying plans, running vicarious trial-and-error scenarios, or otherwise prepping to make a decision. In all of these we have a mix of affect and cognition, and never either alone. This is a world in itself, and it’s the object of investigative native heuristics just like the world outside. Broadly speaking, “thinking about thinking” is referred to as metacognition. This term will be used somewhat more narrowly here, and general thinking about thinking will be regarded as just more thinking. Used here, metacognition will refer to this process happening in a more engaged manner, with the joint participation of cognition and affect, with some agentic or effective output, i.e., with thoughts and feelings that change thoughts and feelings, move them around, or initiate behavior.
    The default mode network allows us the mental processes otherwise called context-independent cognition (Lock & Colombo 1996), abstract or offline thinking (Bickerton 1995), and mental time travel (Suddendorf and Corballis 1997). It isn’t known how many other species share this ability, since it isn’t demonstrated while it’s occurring. It may, however, be demonstrated after such a pause by evidence of insight, such as we see in the problem-solving behavior of certain birds. Relative context-independence is illustrated in semantic memory as well, with lexemes, core meanings, gestalts, or gists leaning towards independence, while syntax-specific words will tend to be more context-dependent. Inferential prediction, extrapolation, interpolation, recombination of concepts, nesting of analogies, calculation, and other forms of imagination all occur in context-independent cognition, allowing us to imagine and assess things that aren’t there. We don’t know how much of this occurs in the native mind. We don’t know to what extent a human mind raised independently of a culture could be in anything other than the here and now. If we knew this, we could speak more confidently about other species.

Evolved Heuristics and Processes
    Oliver Wendell Holmes wrote, “The art of life consists in making correct decisions on insufficient evidence.” Heuristics are quick, practical shortcuts to problem solving, usually sufficient for immediate goals, especially if time is precious. Evolved heuristics are those we are born with, discoverable in the processes enabled by brain architecture and in some of the universal aspects of human behavior. We are not talking here about the kind used consciously, or the “judgmental heuristics” that Tversky and Kahneman associate with what they call cognitive bias. Even though these are the two who coined the term, a distinction is made here between cognitive bias and cognitive error, with the presence or absence of motivation for error deciding which is which. Native heuristics are regarded here as innocent of motive to err. They make mistakes. Cognitive biases themselves are mistakes, although they serve functions that allow them to persist. The collection of mental processes that can solve problems quickly and automatically was termed “the adaptive unconscious” by Timothy Wilson in 2002. It’s operating everywhere we are learning, sorting data, inferring, approximating, guessing probabilities, and finding patterns. Gigerenzer (2002) calls this the adaptive or heuristic toolbox. Where all the sensory and mnemonic inputs and the activities of neurochemicals and the glands that produce them are also included in this unconscious package of talents, we call that assembly intuition.
    The native domain is the world of evolved heuristics and processes, of the modular mind and competing theories. A clear distinction is stipulated here between evolved heuristics and heuristics in general. Evolved heuristics are the brain’s precultural and prelinguistic approaches to investigation, learning, diagnostics, and problem solving. These are first-resort, short-cut, or broad-brush approaches that have only proven themselves on average, enough to be conserved genetically, but they are in no way guaranteed to be optimal or perfect in their solutions to problems. Native heuristics were imperfect even in the simpler environments they evolved within, and it’s a more complicated world now, with information overload and higher levels of stress. Much of the error made here is simply due to being in an environment to which we haven’t had enough millennia to adapt. Heuristics have received a lot of study as cognitive limitations, as they are seen here, but importantly, heuristics work well enough to have been spared by natural selection. They aren’t just about the mistakes we can make.
    The primary epistemic hazard in this domain is inferring too much from too little, drawing conclusions from incomplete or unrepresentative samples. But it’s precisely the strength of heuristics in this domain to jump the gun, to jump to conclusions, or to be in too big a hurry. Here we make fast, frugal, and computationally cheap decisions with imperfect reliability. Through deep time, many of the situations where these contributed to fitness, and survival of the fittest, were life-or-death emergencies that didn’t offer the leisure of ponderous cogitation. Even at leisure, we often need to oversimplify. Logic here is necessarily fuzzy. The errors aren’t motivated, as they often are with cognitive biases: here we’re just doing the best we can with what we’ve been given. The second of our dual processes, conscious cogitation, will kick in if more thorough and systematic approaches are needed, and if we have time, but this will often entail diminishing returns for the effort if called upon when not needed. The original function of this evolved heuristic toolbox is “to achieve proximal goals, such as finding prey, avoiding predators, finding a mate, and if a species is social or cultural, exchanging goods, making profits, and negotiating status. The tools are means to achieve proximal goals and include learning mechanisms that allow an adjustment of the tools when environments change. The heuristics in the adaptive toolbox just ‘bet’ on the environment on the basis of past experience or a little probing, without attempting a complete analysis and subsequent optimization” (Gigerenzer, 2002).
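    Gigerenzer’s group has written several of these tools out as explicit algorithms, which makes the fast-and-frugal character easy to see. The Python sketch below implements one of the best known, take-the-best, which compares two options one cue at a time, in declining order of cue validity, and decides on the first cue that discriminates. The cities and cues here are invented placeholders, not real data.

        # "Take the best": decide between two options on the single best
        # discriminating cue, and ignore everything else. Data are invented.

        # Cues ordered from most to least valid for judging which city is larger.
        CUES = ["has_major_airport", "is_state_capital", "has_university"]

        CITIES = {
            "Springfield": {"has_major_airport": True,
                            "is_state_capital": True,
                            "has_university": True},
            "Shelbyville": {"has_major_airport": True,
                            "is_state_capital": False,
                            "has_university": True},
        }

        def take_the_best(a, b):
            """Return the option favored by the first discriminating cue."""
            for cue in CUES:
                if CITIES[a][cue] != CITIES[b][cue]:
                    return a if CITIES[a][cue] else b
            return None  # no cue discriminates; guess

        print(take_the_best("Springfield", "Shelbyville"))  # 'Springfield'

    One cue settles the question and the rest are never consulted: computationally cheap, usually good enough, and blind to everything it ignores.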
    As generalists, we need a wider behavioral repertoire than our experience allows us, even given our extended childhoods. These heuristics aren’t like tools made to be perfect for very specific tasks. They are more like Swiss Army knives, good enough much of the time, and unbeatable for portability. Overspecialization doesn’t further the generalist. These tools also respect the general limits that we have on our time and energy, far more than our more evolved cognitive processing will. Neither are these puzzle-solving abilities designed for general sets of problems. As Jack Balkin puts it, “Evolution is conservative and economical: it always solves the problems before it, not the more general difficulty that might arise at some point in the future. It always draws on the devices available to it; it does not redesign from scratch.” Some talents even appear to be what Stephen Jay Gould called spandrels, the neural equivalent of the fisherman’s bycatch, a phenotypic characteristic that’s a byproduct of the evolution of some other characteristic, rather than a direct product of adaptive selection. They work to our advantage, but only by happy accident.
    The term archetyping will be used here in reference to native heuristics. But it’s important to clarify that we aren’t born with specific knowledge, or archetypes in any Platonic sense. Neither did Jung think of them in this way. Archetyping refers to inherited cognitive processes. What’s meant here is that “biology provides highly constrained learning mechanisms that guarantee rapid acquisition of the knowledge in an expectable environment” (Flavell, p. 94; Baillargeon, 1995). “Infants are born not with substantive beliefs about objects (e.g., intuitive notions of impenetrability, continuity, or force) but with highly constrained mechanisms that guide the development of infants’ reasoning about objects” (Baillargeon, p. 133). “What biology provides us is not the end point of development but rather the capacities that allow us to utilize experience in order to reach that end point” (Flavell, p. 342). Infants quickly develop a sense of what isn’t possible and react with keen interest to anything that violates expectations. They even show early signs of moral assessment. What we have with archetyping is an innate readiness or predisposition to process experiences with mother into one cluster of associations, experiences with playmates into another, and experiences with bullies into yet another. These will be referred to here as social role archetypes. We also perform behavioral archetyping, intuitively sorting behavioral scripts into categories like heroics, betrayal, reconciliation, grooming, and seduction.

Modest Modularity of Mind
    Modularity of mind theory postulates innate, often local neural structures or modules which have specific functions conserved by evolution. This is not a return to phrenology, since modules can be widely distributed, and they can remain available for any multiple functions that are capable of integrating them. They are not mental organs. Certain classes of experience get referred to these locations and networks, where they are greeted with a certain neural readiness expressed in network structure. They are content-ready facilities of perception, in dedicated areas of sensation and environmental interaction. They function as unconscious, instinctual processes. Certain parts of the brain have predispositions to learn certain kinds of things in certain ways. Modules may be in part computationally autonomous, and functionally dedicated to specific kinds of problems, but they are capable of serving multiple functions, even newly invented and overlain cultural functions, much as the insula, for example, will address disgust in both hygienic and moral presentations (Sapolsky, 2017). This multitasking is called neural reuse (Anderson). The more complex modules, which have a discernible sequence of operation and are dedicated to a primary problem-solving cognitive task, are referred to here as native heuristics. These operate prior to learning and despite attempts at conscious intervention. Jerry Fodor (1983) has proposed that modules are inferential, like our higher neural functions, are specialized to specific inputs, are encapsulated or relatively autonomous, process data in specific pathways, are quick in production of simple outputs, and are specific in what other parts of the brain they inform. These modules combine or recombine with others and other parts of the brain via established white matter interconnections. Any non-native or novel connections between them would be functions of neural reuse or cultural software. By these overlays, evolved heuristics often provide supportive substrates for later cultural learning. John Searle (1997) refers to these as “the background abilities” that enable our perceptual and linguistic interpretation. For example, rudimentary perceptions and concepts, classes and classifications, relationships, etc. will also anchor lexemes acquired to access them. Some of this is preverbal experience we are known to share with other animals.
    Noam Chomsky’s idea of a “language acquisition device” is perhaps a premature and overly enthusiastic exemplar of modularity theory, given the long time that such circuitry would require to evolve and stabilize, yet it’s also possible that such an evolutionary process is already underway by now, given the tremendous adaptive advantage that language confers. But such theories of modularity at the more global levels of the brain, referred to as massive modularity, are more problematic, particularly as they apply to task specificity. Many theorists have backed away from these grander ideas of modularity and are speaking now of modest modularity. There appears to be no domain-general module for the likes of rational thought, or for volitional self-management. Such processes can be better accounted for with metaphors for neural software that exploits pre-existing connectivity with potential for neural reuse. Perhaps even a few of Gould’s spandrels may be recruited, and theories about neuroplasticity allow for plenty of this. Jaak Panksepp (2010) suggests looking at the “developmental interactions among ancient special-purpose circuits and more recent general-purpose brain mechanisms” to provide alternative accounts for what looks like massive modularity. This is plausible so far. However, he also asserts, incorrectly, that the general neocortex is born largely tabula rasa, and that all functions, including vision, are “programmed into equipotential brain tissues that initially resemble Random Access Memory.” This equipotentiality ignores both contextual effects and white-matter thoroughfares, and the versatility he cites can also be explained by neural reuse capabilities. Panksepp’s important work with lower-level brain functions, their evolved (though not-so-autonomous) modularity, and their bottom-up influences on cognition, is not diminished by this objection, but his argument for a neocortical tabula rasa fails.



2.3 - Accommodating Domain

Accommodation and Assimilation, Constructivism and Bricolage,
Apperceptive Mass and Inertia, Memory and its Plasticity,
Schemas and Scripts, Analogy and Modeling, Cognitive Reappraisal

“A great many people think they are thinking when they are really rearranging their prejudices.” William James

Accommodation and Assimilation
    This is the domain of the mind as accumulated and integrated contents, composed of the sum of experience to date. This domain is home to Bacon’s Idols of the Cave (Idola Specus), “for everyone has (besides vagaries of human nature in general) his own special cave or den which scatters and discolours the light of nature.” Individual differences in life’s education lead to individual preferences and biases in cognition and affect. We’re invested in the minds that we’ve assembled, and we usually hate to write off any of our efforts as bad investments. Mental self-modification can be a slow and often painful process. But ignoring potential improvements to our database is the etymological root of ignorance. With good reason, we can usually rely on perceptions that we deem constancies. In fact, it seems a primary goal to recognize or form cognitive invariants. Predictability is important to survival. But many consistencies become cognitive ossifications that must be broken up from time to time. If a cognitive structure is unable to accommodate verifiable new information, then the structure must either shut itself off to something that’s likely true, or must itself change to accommodate the new. Certainty will only be an asset when it prevents us from accommodating a falsehood.
    For Piaget, individuals learn and construct new knowledge by processes of assimilation and accommodation. Assimilation is the more straightforward of the two. We begin by using our native heuristics and simpler processes, and gradually add learned heuristics and algorithms to our repertoire, building a base of remembered material that’s gradually assimilated into an edifice of experiential learning or memory. Much of the time, however, new experience or knowledge doesn’t just slip frictionlessly into this growing edifice. Either the information is modified or rejected entirely, or else the edifice has to adjust to accommodate the new. This happens wherever the disobedient world violates our preconceptions, expectations, or demands for how the world should be. The analogy of the mind as a constructed edifice also suggests the idea that accommodation will mean a structural change, a remodeling, re-plumbing, or re-wiring. Thus, accommodating is a better word than integration. It implies that both the input and the database may undergo some alteration. Such prospects are frequently met with resistance, even some whining. It’s a harder kind of learning that often requires some demolition work, maybe of stuff we’d acquired at great cost, or stuff we’d grown altogether too fond of. We may have to learn to do without some of that stuff if it’s real self-improvement we want. But maintaining our motivation to keep ourselves learning also demands some confidence in our ability to do this correctly, and admitting our errors can undermine this confidence.

Constructivism and Bricolage
    The general rule for learning seems to be first come, first served. We learn the first thing that convinces us. Once we have that associated into place, it needs to be defended. The more elaborated the associations, the more urgently it needs defending. It’s only reluctantly that we unlearn a thing we once held to be true. While this gives us a sort of ratchet effect that tends to increase the body of our knowledge, it can be just as effective in lending this ratchet effect to our ignorance.
    The brain undergoes most of its physical reconstruction during childhood. Although the human neocortex is far from being a blank slate, it develops with plenty of flexibility. Infants are born with more neurons and potential connections than they will ever use, and although their brains are smaller, they have as many or slightly more potential connections than they will have as adults. These brains are subject to more attrition than growth as they develop, and attrition is accomplished on a use-it-or-lose-it basis. The young synapses are initially overconnected and undergo their most substantial pruning, of both dendrites and axons, between early childhood and the onset of puberty. Brains in the gifted develop a little differently, with both the overconnection peak and the onset of pruning shifted slightly later, followed by quicker pruning. Their neural connections wind up somewhat leaner, but connect more efficiently (Genc, 2018). Bruce Hood (2009) speculates, “It turns out that the overproduction and subsequent culling of connections may be a cunning strategy to shape the brain to its environment. A massive connectivity means that the brain is wired up for every potential pattern of activation that it may encounter from experience. But remember, only neurons that fire together wire together. When neurons are not reciprocally activated, nature prunes their connections through inactivity.” This recommends a broad educational exposure early in life, across a wide range of interests and activities, to keep associated neural possibilities alive. It also highlights the tragedy of childhood adversity.
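    Hood’s “fire together, wire together” rule and its use-it-or-lose-it pruning can be caricatured in a few lines of Python (a toy illustration with arbitrary numbers, not a biological model): connections strengthen with co-activation, weaken when idle, and are culled once they fall below a survival floor.

```python
import random

# Toy Hebbian strengthening with use-it-or-lose-it pruning.
# Units 0-2 fire together often; the rest are rarely co-active,
# so their connections decay until they are pruned away.
random.seed(1)

n = 6
weights = {(i, j): 0.5 for i in range(n) for j in range(i + 1, n)}

RATE, DECAY, FLOOR = 0.1, 0.02, 0.05

for step in range(200):
    active = {i for i in range(n) if random.random() < (0.6 if i < 3 else 0.1)}
    for pair in list(weights):
        i, j = pair
        if i in active and j in active:
            weights[pair] = min(1.0, weights[pair] + RATE)  # wire together
        else:
            weights[pair] -= DECAY                          # idle: weaken
        if weights[pair] < FLOOR:
            del weights[pair]                               # pruned away

print(f"{len(weights)} of {n * (n - 1) // 2} initial connections survive")
```

The overconnected start and the culling of unused links are the point here; the arithmetic is whimsy.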
    In Piaget’s notion of constructivism (distinguished from constructionism) learning is an interaction between an individual and an environment that relies on what the learner has already experienced and integrated. Education, therefore, should be individualized or tailored to the subject who is learning, instead of to the subject that is being taught. We can optimize the process by understanding students’ background knowledge in order to help them acquire new information more effectively. This is not to say that there are no innate processes, or generalized and roughly predictable stages of development. Piaget allowed for the genetic precursors to his epistemology, so it isn’t an epistemological relativism. Nor does he dismiss the utility of rote learning of structured material where this is needed or advised. The idea is scalable, not only downward to specific areas of learning, such as models and extended analogies, but upwards to include how societies and cultures learn.
    The edification of the brain in evolution, of the mind in personal growth, and of human culture in general, is comparable to a bricolage. This analogy, developed at some length first by Lévi-Strauss (The Savage Mind) and then by Jack Balkin (2003), derives from the French verb bricoler, referring to a handyman’s DIY construction or repair using only tools or materials at hand, opportunistically adapting, like MacGyvering bombs out of Bisquick. The bricoleur grabs what’s on hand and makes do. Lévi-Strauss contrasted it with engineering, which plans the job ahead and has the materials delivered. For Balkin, where the analogy ends is where the bricoleur’s project becomes the basis of what he has to work with in the future. This recalls path dependency. Minds are developed by way of a bootstrap tinkering, reaching selectively where able, for whatever experiential learning comes within our reach and finds its way into memory. The bricoleur is the natural mind, the engineer is the STEM guy. Since our minds are constructed from the environment we develop within, we are subject to effects like “garbage in, garbage out” and “you are what you eat.” We are well-advised to actively process ideas before they can lodge in our minds, and to remember that viruses and parasites can infect hosts that are able to accommodate them. A cognitive immune function like vigilance can spare us much effort of unlearning later on.
    Path dependency is a consequence of this bricolage, and it occurs on the levels of individual theories, individual minds, family dynamics, social dynamics, and culture as a whole. As systems develop, they get slotted or entrained into channels that were established early in their development, often by random or arbitrary choices, decisions, or oversights. The example of railway gauge dimensions being driven by Roman chariot design, and ultimately by the width of the horse’s ass, is a common one, though apparently apocryphal. The mile, a thousand left steps of a Roman army on the march, is a pretty useless metric compared to the kilometer, but extensive industries and economics are woven together in it. You can’t just type an ampersand when coding HTML. QWERTY is just freaking stupid, but there it is, and will likely remain.
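    The ampersand case is literal: HTML long ago reserved the “&” character to introduce entities, so the escape sequence &amp; has to stand in for it, and decades of parsers have since locked the convention in. Python’s standard library carries the fossil:

```python
import html

# "&" was reserved for entities early in HTML's history; every
# conforming tool since has had to escape it. Path dependency in action.
print(html.escape("fish & chips"))   # -> fish &amp; chips
```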
    The inertia of our cognitive mass is resistance to change, or conservatism. There are reasons and justifications for conservatism in human culture. It’s especially important in STEM, where papers are peer-reviewed and experiments are expected to replicate. Science and peer review make trouble for innovators, at least until the pressure for a change becomes irresistible and leads to a discontinuity. It gets a bit sillier out in the softer areas of academia. But we do stand on the shoulders of ancestral giants, and this is the cultural literacy that even visionaries and creative geniuses have to begin with. It’s frustrating when you aren’t allowed more than one original thought in a paper, though, and everything else has to be second-hand and cited.
    We resist changing our minds. “If a fact comes in that doesn’t fit into your frame, you’ll either not notice it, or ignore it, or ridicule it, or be puzzled by it—or attack it if it’s threatening” (George Lakoff).  Jack Balkin points out the chief threat, “The process of understanding is invasive in the deepest way, for it offers the possibility that we will become different from what we are now.” It takes courage for us to challenge our beliefs. But sometimes a little sense of indignation towards whoever sold us an inferior idea, or a new humility felt within ourselves from having bought it, can be a good substitute for courage. Emotions in response to pressures for change need not be defensive. Constructive response is proactive. And we can simply be motivated to become better and wiser people.

Apperceptive Mass and Inertia
    The term apperception, used by Descartes, Leibniz, Kant, and Spencer, refers to the mental contents that we bring to an event, a new experience, or a thing to be learned. In psychology, it’s “the process by which new experience is assimilated to and transformed by the residuum of past experience of an individual to form a new whole.” Perception is transformed by the contents already in mind. Johann Herbart expanded the term into apperceptive mass, which is suggestive, by analogy, of inertia or resistance to change. The term cognitive inertia may also be used where the resistance of beliefs to change is concerned, although the phenomenon also contributes to such desiderata as perseverance and trust. “Apperception is that process by which an aggregate or mass of presentations becomes systematized (apperceptions-system) by the accretion of new elements, either sense-given or product of the inner workings of the mind” (Herbart). The aggregated mass of a person’s previous experience may be used in understanding the new. Predispositions, prior knowledge, tacit or underlying assumptions, explicit beliefs, trigger words, painful and pleasant associations, prejudices and biases, and habitual frames of reference all play dynamic roles in learning. Applied to education, this suggests that teachers acquaint themselves with what students already have in mind before offering new material. Robert Burton borrows the term “hidden layer” from neural network theory: “It is the interface between incoming sensory data and a final perception, the anatomic crossroad where nature and nurture intersect.” And where intuition meets deliberation.
    Where there is any doubt whether a discrepancy should be solved to favor the personal status quo, whether the intuited feeling that something is wrong is right, our cognitive inertia will likely favor the known, even where known incorrectly. This is called cognitive bias. New input is filtered through our cognitive biases, and particularly for consistency with self-schemas. We may need much more than a hint of a fishy smell to get us investigating. Vigilance needs to be learned as a habit, especially where things seem to be going wrong by steps and degrees, as the rise of German Nazism and Zimbardo’s Stanford Prison Experiment taught us. We need to learn to use our new forebrain to tag our doubts as we feel them and remember to investigate. We still, however, require our biases, preconceptions, and prejudices to run both our native and learned heuristics.
    Our chief concern at present isn’t so much with how the mind as presently constituted adapts to new experience, but with how and why it fails to do so, how learning to date can mangle any new information that threatens to make the mind perform work to correct itself, or refuse to see anything but what it wants to see. We’re concerned here with the persistence of perceptions once perceived and ideas once accepted, and the role of associated feelings and emotions in that persistence. Nearly every experience that we have made our own carries some level of affective valence or charge. These affective charges can even determine whether we attend to any new information at all. Since what we perceive is largely a function of our previous experiences and the conclusions we’ve drawn from them, it will serve us to have a sense of our limitations here. The implications are social and cultural as well, since these accumulations are unique to individuals but the forms of communication about them are general. Communication is only possible on some degree of common ground. Being socially effective requires incorporating knowledge from other perspectives into our own. We are too enslaved to the inertia of what we already think is knowledge. Thomas Jefferson offered: “Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong.”
    Satisficing, introduced by Herbert A. Simon (1956), is a heuristic in the native domain that has extended effects in the accommodating as well. It’s the termination of further inquiry or problem-solving behavior when a sense of sufficiency for the task has been reached, or an acceptability threshold is met. It’s where learning becomes having learned. This is every bit as much affect as thought. We get a feeling of knowing enough, at least for now, and it’s this feeling, rather than the knowledge, that puts the brakes on further inquiry. We will also feel that to press forward will only mean diminishing returns for our efforts. At this point, we need to grow dissatisfied with what we know before resuming investigation. This is a stop-search command that has both merits and hazards, depending on how pressing the needs are. We nearly always have our constraints, of limited time, of limited knowledge, of finite computational ability. All but the simplest and most formal forms of our rationality are bounded, leaving us reliant upon our imperfect induction and inference. Simon says (!), “decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world.” Errors occur in being satisfied with sufficiency too soon, as we quit investigating and squat or hunker down in our beliefs. When this process becomes conscious, we look explicitly for necessity and sufficiency. Is this datum necessary to our picture of things, and is it sufficient to address all of the problems that its absence poses?
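    Simon’s contrast is easy to render as search (a sketch with made-up utilities and an arbitrary threshold): the optimizer must pay to examine every option, while the satisficer stops at the first option that clears its aspiration level.

```python
# Satisficing vs. optimizing over hypothetical options (numbers made up).
options = [4, 7, 6, 9, 5, 8, 10, 3]    # utilities, encountered in order
ASPIRATION = 8                         # "good enough for the task at hand"

def optimize(opts):
    """Find the best option, at the cost of examining all of them."""
    return max(opts), len(opts)

def satisfice(opts, threshold):
    """Stop at the first option that clears the aspiration threshold."""
    for examined, value in enumerate(opts, start=1):
        if value >= threshold:         # the feeling of knowing enough
            return value, examined
    return None, len(opts)             # never satisfied: keep searching,
                                       # or lower the aspiration level

print(optimize(options))               # (10, 8): optimal, at full cost
print(satisfice(options, ASPIRATION))  # (9, 4): good enough, half the cost
```

The hazard named above is visible in the numbers: the satisficer stops while a better option still sits unexamined, and nothing but renewed dissatisfaction will send it back to look.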

Memory and its Plasticity
    Mnemosyne, the goddess of memory, wasn’t one of the Olympians, but she was the mother of all nine of the Muses. This is an interesting correlation of memory with creativity. Memory isn’t always the same thing as faithful recording. We remember what we assess or judge at the time to be of further use to us. It’s selective and self-serving. And it’s plastic. Each time we recall a memory it picks up new associations, new meanings, new feelings, and sometimes re-confabulations. It’s seldom perfectly faithful or complete, but processed to suit present and anticipated future needs. Our bodies of memories continually evolve, and they even undergo modification during sleep. Memory is faulty. Eventually, we are told, most prisoners come to believe their old not-guilty pleas. Memory is a library of sorts and has encoding, storage, and retrieval. It develops a kind of catalog, to which language has made a useful addition. Remembered experiences are usually given multiple associations, which act like handles for retrieval. The handles might be lexemes, feelings, smells, conditioned stimuli, or other instances sharing a category. Remembering is the revisiting of stored representations of experience. Memories are multi-dimensional or multi-modal, integrating associations from diverse areas of the brain, sensations, perceptions, semantic representations, lexemic tags, feelings and emotions, and piano lessons. Autoassociative memory will retrieve a piece of data upon presentation of only partial information. Heteroassociative memory, when given a pattern, will return a different pattern with similar or common attributes, and from different places in the brain.
    Short-term, primary, or active memory, a parietal lobe function, holds a small amount of new information in mind, pending disposition. It’s only held there for a couple of seconds, since patterns seem to be retained as functions of neurotransmitter depletion and replenishment (Mongillo, 2008). Quantities of information held there may be increased by chunking, something like a cognitive data compression mechanism. And the duration of short-term holds may be increased by adding repetitions. Formations of associations during this period facilitate the transfer to long-term memory. Short-term memory mediates attention, interprets language, integrates sensory information, and references present affective states. Blackouts don’t mean that a person was unconscious during an unremembered event, only that something prevented the conversion from short- to long-term memory, such as drunkenness or dissociation.
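    Chunking as cognitive data compression has a direct analog in code (an illustration; the 7 ± 2 capacity figure is Miller’s, and the phone-number grouping is invented): regrouping raw items into familiar units lets the same content fit into fewer short-term slots.

```python
# Chunking: the same content, held as 11 raw items or 3 familiar chunks.
digits = list("18005551234")           # 11 separate digits
chunks = ["1-800", "555", "1234"]      # 3 chunks of a familiar phone pattern

CAPACITY = 7                           # Miller's "seven, plus or minus two"

for items in (digits, chunks):
    verdict = "fits" if len(items) <= CAPACITY else "overflows"
    print(f"{len(items):2d} items -> {verdict}: {items}")
```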
    The idea of working memory, a PFC function, carries the obvious analogy to RAM, random-access memory, and this is woven into the origins of the concept. Working memory compares present experience, short-term memory, and dynamic recall of material already held in long-term memory, together with affective (feeling and emotion) and effective (motor and mirror) mental processes and associations. Several component processes have been identified here: the phonological loop (speech processing), the visual-spatial sketchpad, short-term memory, the episodic buffer (explicit memory access), retrieval structures, and the central executive function (direction of attention to task relevance).
    Daniel Dennett (1991) develops a theory of working attention called the multiple drafts model of consciousness, attempting to found consciousness on a strictly material, non-Cartesian basis. “All varieties of perception—indeed all varieties of thought or mental activity—are accomplished in the brain by parallel, multitrack processes of interpretation and elaboration of sensory inputs. Information entering the nervous system is under continuous editorial revision.” Bernard Baars (2002) proposes a global workspace theory (GWT) similar to working memory, analogized as a theater of consciousness with a spotlight of selective attention, with most of the theater in darkness. There is no homunculus in there to act as audience or director. Memory-prediction framework is a theory proposed by Jeff Hawkins in On Intelligence (2004). It suggests that interactions of the mammalian neocortex, the hippocampi, and the thalamus match bottom-up sensory and affective inputs to stored memory patterns in ways that predict the future. Mismatches, dissonance, and novel stimuli demand recruitment of increasingly higher-order cognitive functions. But other processes must also be possible, since intelligent birds, cephalopods, and mantas don’t have mammalian cortices.
    Long-term memory is seen in a number of subcategories, beginning with a division between explicit and implicit memory. Explicit or declarative memory (busiest in the hippocampus and temporal lobes) encodes complex memories with cognitive maps. Its job is re-cognition, bringing the memory into the present moment to be re-cognized, along with whatever relevant associations it may be related or connected to.
    Semantic memory is concerned with facts, ideas, concepts, meanings, or general knowledge. In a semantic network, each node is to be interpreted as representing a specific percept, concept, word, or feature. That is, each node acts much like a symbol, connoting related memories. From Wiki, “Links come in many different types, each one standing for a particular relationship that can hold between any two nodes. Processing in a semantic network often takes the form of spreading activation.” The features and the associations of these memories are like their handles, which might be on top, such as an association to a more general or abstract category, or below, where a node collects other associations to similar memories. Whenever two items are held and attended simultaneously, the association between them grows stronger. A chunk is a collection of familiar associations that will act as a coherent group when retrieved. Items of vocabulary are semantic memories, accessed by sound, sight of word or sign, or the feel of braille, and associated with any number of other memories. Not all chunks contain lexemes, but all lexemes are parts of chunks.
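    Spreading activation lends itself to a toy implementation (the graph, decay rate, and threshold below are invented for illustration): activating one node leaks decaying activation into its neighbors, leaving associated memories partially primed for retrieval.

```python
# Toy spreading activation over a small semantic network (links made up).
network = {
    "dog":    ["animal", "pet", "bark", "Rover"],
    "animal": ["dog", "cat"],
    "pet":    ["dog", "cat"],
    "cat":    ["animal", "pet", "meow"],
    "bark":   ["dog"],
    "meow":   ["cat"],
    "Rover":  ["dog"],
}

def spread(source, decay=0.5, threshold=0.1):
    """Pump decaying activation outward from a primed node."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        passed = activation[node] * decay
        if passed < threshold:
            continue                         # too faint to spread further
        for neighbor in network[node]:
            if neighbor not in activation:   # first visit sets the level
                activation[neighbor] = passed
                frontier.append(neighbor)
    return activation

# Priming "dog" leaves "cat" partly active via "animal" and "pet".
print(sorted(spread("dog").items(), key=lambda kv: -kv[1]))
```

This is the mechanical picture behind priming: after “dog,” the nodes for “cat” or “Rover” need less additional activation to come to mind.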
    Episodic memory (busiest in the medial temporal lobes, anterior thalamic nucleus, mammillary body, fornix, and prefrontal cortex) holds our situated experiences, the specific events and objects that our attention has entertained. These memories always entail the perspective of an observer and a context for the experience. They represent short slices of time as part of the context, slices which themselves fit into larger contexts, like a particular phase we were going through. These are normally sorted in order of their occurrence. They will contain summary records of sensory, perceptual, conceptual, and affective processing oriented in a spatiotemporal context. Recall has many of the characteristics of the initial experience, although it’s rarely as vivid, and rarely entirely faithful in its detail. Episodic memory allows travel backward in time, but at the same time helps us to imagine how the outcomes of present decisions might be expected to feel. Episodic memory can preserve specific occurrences, particularly in the case of flashbulb memory, which registers experiences with strong affect, trauma, or other extreme forms of salience. But it can also generalize typical experiences, averages from a number of similar examples, like what it’s like to swim laps. You remember that well, but you will seldom remember swimming each lap. Episodic memories get stronger with frequent recollection, and they can be altered, particularly in their affective content, by recalling them in different emotional states than the ones they were first experienced in.
    Autobiographical memory is our collection of episodic memories specific to our own personal history, our self-schemas, and behavioral scripts. It’s reconstructive, dynamic, and will confabulate in both self-serving and self-destructive ways. It incorporates or recruits semantic memories, and may adjust them as needed to fit a current narrative. The first few years are nearly always a blank, perhaps because our self-schemas and autobiography are still forming. Reminiscence bump is the tendency in autobiographical memory to recall personal events from adolescence through early adulthood with more detail and vividness than from other periods in life. The age range seems to correlate well with the higher rate of development in the prefrontal cortex, when we are improving our ability to make choices, and defining who we are and who we want to be. This memory holds what’s called a working self, a set of active personal goals and self-images organized into goal hierarchies. These personal goals and self-images work together to modify cognition and the resulting behavior to get ‘er done according to the plan, to stay on track. We also carry thematic memory content in our autobiographical memories, with strong affective components. We are fierce, or compassionate, or lonely, or courageous, or spiritual, or sweet scowlerly types.
    Fuzzy-trace theory posits dual and independent verbatim and gist memory processes. This distinction between precise, literal verbatim memory and meaning-based, intuitive gist accounts for memory paradoxes including some dissociations between true and false memory, false memories outlasting true memories, and developmental increases in false memory. Generalizations and stereotyping can also be thought of as gist memories.
    Implicit or non-declarative memory allows us to perform tasks without conscious effort or participation, without self-consciousness, or comparing the moment to prior experience. Procedural memories, as with learned motor skills like playing musical instruments or snooker, doing dance steps, pole-vaulting, or typing, are stored in pathways in the cerebellum. We also have other minimally or not conscious functions that operate more globally, though with deep roots in the limbic system. These include classical and operant conditioning, grammatical assumptions, reactions to priming, and trigger stimuli and words. Category-level knowledge lets us know without thinking that a Great Dane and a chihuahua are pretty much the same thing. Clever that. Emotional conditioning links preconscious stimuli and perceptions to our emotional responses, and this can get particularly intense when the amygdala gets involved.
    Memory plasticity refers to the alteration of specific memories over time, including, but not limited to, leveling and sharpening of detail, according to repeated uses, exaggerations, retellings, and reformulations. Confabulation is a common example, which gets intensified with repetition. Confabulation is defined as the production of fabricated, distorted, or misinterpreted memories, without the conscious intention to deceive. It may involve embellishment to fit valued schemas and scripts. Memory might be contaminated with details filled in after the fact. Individuals may be confident about their recollections, despite contradictory evidence. Corroboration with others can also lead to confabulation, as we see with false memory syndrome. Constructive memory will add or discard features, infill, integrate, extrapolate, abbreviate, organize, and reconstruct in creative ways, often simply to settle on the thing that feels best. Cryptomnesia is the opposite of confabulation, where an old memory is mistaken for imagination, and something old reemerges in the guise of a new idea. This might on occasion lead to accusations of plagiarism, or at least of crappy note-keeping.
    When memories are retrieved, they get reconsolidated and reintegrated. When they are put back into storage, they will carry with them some of the new experiences we had during their recollection. Memory is not a library where materials are replaced unaltered or undamaged. Of particular importance here is that new feelings and emotions can be added to older recollections, altering the associations with the original feelings and emotions. This process is used in deconditioning. Instead of just hammering ourselves with the same old re-sentiments or resentments and making memories even more unpleasant, we can add such affective experience as understanding, patience, or forgiveness, and take some of the damaging emotional charge out of the memory before putting it back. This is a big part of why elucidogens are effective in treating PTSD: it’s easy to attach very strong and positive affect to a memory being re-examined, and at the same time more difficult to amplify a negative one.

Schemas and Scripts
    The term schema (pl. schemata or schemas) in common use “describes a pattern of thought or behavior that organizes categories of information and the relationships among them. It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information” (Wiki). In simple terms, it’s a coherent set of ideas that makes up a larger, more complicated idea. Schemas will reduce the cognitive load of both working memory and memory storage and retrieval by a process known as chunking, collecting more elemental units and inter-associating them so they hang together. Processes like the priming effect or the availability heuristic will put entire schemas on subliminal standby for use by our working memory. Grouping can be by class membership, common attribute, underlying relationship, similarity, concurrence, parts of wholes, similarity of affect, etc. At higher levels of abstraction in less native domains, they may be quite complex. But even on the native level, we can have such fact-and-concept organizations as superordinate and subordinate categories, like animal, pet, dog, and Rover, and importantly, these levels may exist before we assign these names. The associated lexemes or name tags are assigned as additional parts of the schemas. Some categories can be functions of native heuristics and even sensorimotor perception. Schemas capture connections and similarities, tying memories together to be recalled together, eventually to answer to their common name, which we might identify here as a concept, one step more abstract than a sensory or conceptual metaphor. Self-schemas, used a-plenty here, are the complicated ideas we have about ourselves.
    Hiroko Nishida, in Cultural Schema Theory (1999), describes eight types of schemas in our social interaction: 1) Fact-and-concept schemas: clusters of general information about facts (who, what, when, where, why, and how); 2) Person schemas: knowledge about character or personality types (as with Myers–Briggs Type Indicators); 3) Self schemas: self-image and self-concept structures; 4) Role schemas: knowledge of social roles and what might be expected of them (underpinned and supported by social role archetyping, assuming a careful, stipulative definition of this term); 5) Context schemas: knowledge of the right settings, frames, and scales for different kinds of things and events; 6) Procedure schemas: knowledge of scripts permitting causal inference and prediction (underpinned and supported by behavioral archetyping); 7) Strategy schemas: knowledge of heuristic, game, puzzle, forensic, algorithmic, and problem-solving strategies; and 8) Emotion schemas: knowledge of what feeling tends to go with what stimulus or behavior. But:
    Schemas are developed out of experience, and so emerge by way of the sensorimotor and native domains that provide us with their content. Nishida uses the term schema at its very broadest: “Memory representation or neural circuits created in the brain as a result of information processing are assumed to be schemas” (1999). Here, the term is used more narrowly. Functionally, we can draw a stronger line between schemas (Nishida’s fact-and-concept, person, self, role, and context schemas) and scripts (Nishida’s procedure and strategy schemas). Scripts tend to engage different parts of the brain than schemas and unfold in a more temporal dimension, drawing more on narrative functions, autobiographical memory, and procedural memory. Nishida’s emotion schemas won’t be considered a separate category of schemas here, since all schemas and scripts are stored in memory with affective associations, most especially self and social schemas; none of them exist in an affective vacuum. Emotions will, however, still be categorized. Self-schemas are central enough to what and who we are to warrant a domain all their own, the personal, which will be developed at some length two chapters below. This is where accommodation is the most difficult, emotion is the most challenging, and unconscious overreaction is the most prevalent. Both procedure and strategy schemas should be further subdivided into situational scripts and social scripts, which have distinct emotional accompaniments. These are discussed later, in the chapters on situational and social domains. In some fields that lean towards computational theory, we see schemas and scripts, respectively, referred to as memory organization packets (MOPs) and thematic organization points (TOPs) (Schank, 1982). The latter of these is more pertinent to affective association and personal relevance. Finally, I would also add a ninth type of schema to Nishida’s list: Interpretive schemas: extended metaphors, analogies and models, non-linear maps or conceptual superimpositions, along with their lexicons, that are overlaid on a territory.
    Scripts are generic or generalized sequences of expected behaviors with fungible components or actors. They will be treated here as distinct from schemas, even though they are normally regarded as a subset. Scripts differ from schemas in a similar way to how semantic memory differs from the procedural, or semantics differs from syntax, and they rely more on implicit or procedural memory than explicit and semantic, although the semantic provides the dramatis personae. Scripts are organized in both time and space, and they predict expected unfoldings and transformations. They are mental templates of how things are expected to go, helping us not only to anticipate outcomes, but to infer backwards to investigate how we might have got here. They assist us with causal inference. In the native domains, they help us out with the narrative heuristic. In the less native domains, they help with the storytelling, or they may be referred to as procedures or protocols. Scripts are the opposite of impromptu behavior, or spontaneity, and they make a useful example of the Chinese word wéi, the kind of acting or doing that appears in the concept of wúwéi, or not doing. Not doing is thought a good thing to do. Scripts are a kind of acting that a performer does, where performing can be considered etymologically as acting through-form. The analogy of theatrical scripts might be extended further, to include counterparts for actors, props, settings, sequence of events, lighting, frame, and stage direction. Scripts can be used to facilitate recall of the precursor events that led up to the present, templates for recognition of elements in the present, and predictors of process and procedural outcomes. Scripts for individual roles are role schemas set in four dimensions. Eric Berne developed his Transactional Analysis using only three fundamental role scripts: parent (exteropsyche), adult (neopsyche), and child (archaeopsyche). Scripts often develop out of both behavioral and social role archetypes (as these are understood in Chapter 3.2, Evolved Heuristics and Processes, subset in the Social Domain).
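    The computational rendering of a script is a generic event sequence with fungible role slots, in the spirit of Schank’s restaurant script (the standard example in that literature; the code and slot names here are hypothetical). The same structure supports prediction forward and causal inference backward.

```python
# Sketch of a script: a generic scene sequence with a fungible actor.
RESTAURANT_SCRIPT = ["enter", "be seated", "order", "eat", "pay", "leave"]

def instantiate(script, roles):
    """Bind a particular actor into the generic scene sequence."""
    return [f"{roles['actor']}: {scene}" for scene in script]

def predict_next(script, scene):
    """Forward inference: what do we expect to happen next?"""
    i = script.index(scene)
    return script[i + 1] if i + 1 < len(script) else None

def infer_prior(script, scene):
    """Backward inference: what must already have happened?"""
    return script[:script.index(scene)]

print(instantiate(RESTAURANT_SCRIPT, {"actor": "Alice"})[:2])
print(predict_next(RESTAURANT_SCRIPT, "eat"))    # -> 'pay'
print(infer_prior(RESTAURANT_SCRIPT, "order"))   # -> ['enter', 'be seated']
```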
    Jean Piaget, who introduced the term schema in 1923, saw information integrated into these cognitive structures by assimilating and accommodating processes. Assimilation is the relatively unconflicted construction, or just the straightforward adoption, of schemas and scripts. Accommodation creates new, restrained, or remodeled schemas and scripts to accept otherwise incompatible information, which assumes that the easier assimilation process has failed. Schema Therapy, developed by Jeffrey Young out of Cognitive Behavioral Therapy (CBT), explicitly seeks to treat maladaptive behavior by restructuring maladaptive schemas (and scripts).
    Being structures assembled over time, with effort, and oft with our blood, sweat, and tears, both schemas and scripts are resistant to change, although they do grow. As in systems theory, repeated information over time confers organization, coherence, a sense of consistency, and reliability, and a little bit of variety or diversity confers a little bit of resilience. In aggregate, they constitute our apperceptive mass, and thus have apperceptive inertia. They would much rather not be contradicted. Once a schema or script has been assimilated, or modified by accommodation, compatible information is readily incorporated into that window on the world. New information will normally be adjusted in perception to minimize conflict with and alterations to schemas and scripts. Adjustments can be severe, and woe betide any new information that’s cognitively dissonant. Against such assaults we have a defensive arsenal of anticognitives, especially including, but not limited to, cognitive biases.

Analogy and Modeling
    Analogies and models are interpretive schemas, maps or superimpositions overlaid on a territory belonging to a different schema or reality. For our purposes here, we might regard metaphors as simple analogies, or analogies with fewer moving parts. The mental process of recognizing and developing metaphors, analogies, and models begins in the native domain, with the similarity and pattern-recognition heuristics, and these are underpinned by the perceptions in the sensorimotor domain enabling sensory and conceptual metaphors. Extended analogies and models develop in this accommodating domain as experience and learning accumulate. Analogies may be drawn from both qualitative and structural or formal similarities. Here we also begin to make connections between our narrative tales and the other dimensions of reality, enriching the meaning of legend, myth, and fable. Many of our fables and parables, such as “The Blind Men and the Elephant,” “The Emperor’s New Clothes,” “Goldilocks and the Three Bears,” or “Brer Rabbit and the Tar Baby,” have multiple applications to attitude maintenance in practical life. The humorous teaching stories of Sufism, Daoism, and Zen are explicitly told to facilitate a deeper understanding of life on levels other than the narrative in which they are told.
    Nested metaphors, analogies, and models fit inside each other, such that their corresponding parts resonate with each other. This is fundamental to both correlative thought and magical thinking, but it can also apply to any cognitive map that purports to represent a corresponding reality. Ordinary road maps are examples, nested with the terrain they are made to represent. Inferring the missing pieces in an incomplete model is one of the important functions of the nested analogy. We seek the occupant of the corresponding part of the nested analog and translate this into the terms of the original. This works for both static or synchronic pairings (of schemas) and dynamic or diachronic pairings (of scripts). The dynamic analogies can be used for both causal inference and prediction. A calendar is a map that’s nested with our real life days. When we see our weekends and holy days correspond with the days of our real lives, we can make inferences related to the potential uses we may have for those days. Nesting analogies might also expose more pieces and properties in common than those first seen.
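    Inferring the missing piece from a nested analogy can be sketched as mapping completion (the solar-system/atom pairing is the textbook case; the data structures here are made up): translate the question back through the correspondence, consult the relations known in the familiar domain, and read off the answer.

```python
# Toy inference from a nested analogy: relations known in the source
# domain (the solar system) fill holes in the target domain (the atom).
source = {("planet", "sun"): "orbits", ("sun", "planet"): "attracts"}
correspondence = {"sun": "nucleus", "planet": "electron"}   # the nesting

def infer(target_pair):
    """Translate a target-domain question back through the analogy."""
    inverse = {v: k for k, v in correspondence.items()}
    source_pair = tuple(inverse[x] for x in target_pair)
    return source.get(source_pair)   # None where the analogy is silent

print(infer(("electron", "nucleus")))   # -> orbits
print(infer(("nucleus", "electron")))   # -> attracts
```

The hazard is the one discussed below: the analogy will cheerfully answer questions about the atom that only the solar system entitles it to answer.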
    Scientific theories, and especially those expressed with mathematics, are linguistic models that purport to resonate or synchronize with some specific aspect of reality. In science, these are subject to rigorous testing for reliability in inference and prediction. Where our analogies and models fall short with these rigorous tests or proofs, they are still perfectly legitimate heuristics for generating testable and falsifiable hypotheses. The scientific method specifies the rigor that distinguishes the two. Science uses nested analogies wherever it assembles standard models, and the utility of this heuristic is shown by such discoveries as new elements and subatomic particles from holes in theoretical models. Of course, the caveat here is that this sets up expectations, and we all know that humans can see what we want to see. Gonna wait and see if dark matter and energy are anything more than holes in our model, or placeholder names for the discrepancies between our measurements and our expectations. We will tend to interpret the new in terms of the old. Because of our ability to relate to the world through sensory metaphor, very few experiences can be regarded as completely new in a qualitative sense. Beyond infancy at least, we nearly always have an accumulated database of remembered experience to which we can relate the novel, even if the novel has nothing in common with the known. The expectations given by our models are no exception.
    Extended analogies that structure our perceptions of the social and cultural worlds can be a mixed blessing. We can have useful and informative ones, like the marketplace of ideas, or the invisible hand, and counterproductive and deceptive ones, like the wars on poverty, drugs, and terror. Even with the good ones, however, problems arise when we mistake the map for the terrain, get stuck in the abstract and conceptual side of things, and lose touch with the reality that the map is supposed to represent. Here we find the confusion in 1752, when England switched from the Julian to the Gregorian calendar, requiring the date to be advanced eleven days overnight. Riots reportedly ensued among folks who believed that eleven real days had been stolen from their lives. The use of an arbitrary or random construction as one of a pair of nested analogies happens frequently in popular mystical pseudoscience, where such random sequences as alphabets and calendars are alleged to correspond with reality in gematria and numerology. Probably a majority of astrologers don’t understand that the chart is a cross section of the sky looking south, or that the wheel has slipped by roughly 30 degrees since it was last adjusted.
    Analogies are complex, structured comparisons between items in different categories. Moving between different scales, we can liken this to fractal self-similarity. They can be enormously useful as heuristic devices, despite their potential for error. The recent development of meme theory, which likens the most atomic units of transmissible culture to genes, has led to some useful speculation about the propagation, selection, and evolution of culture. Still, it’s important not to take this as factual. One widely-accepted analogy that may have equal parts of utility and delusion is the comparison of the human brain to the computer. The end-state of such a nesting is assumed by many to be the awakening of sufficiently sophisticated computers into consciousness, perhaps soon to be followed by the digital transfer of a living being’s mind into a computer network. This makes for interesting science fiction if you can grant the conceit, but this is being greeted with increasing suspicion by those who study the wetware of the human brain with its embodied cognition and bubbling cauldrons of neurochemicals.
    From an evolutionary standpoint, the main points of developing orderly mental representations of how the world works concern making predictions (or knowing what to expect) and making good choices when options present themselves. In the 19th century, Hermann von Helmholtz proposed that predictive processing was the primary function of the brain. But the process works both ways, as it alters what we see to conform to our predictions or expectations. Cognitive failures and their recognition are needed to keep this dynamic in balance. We rely heavily on our representations of the world’s causal structures and we will experience anxiety when errors or discrepancies arise. This kind of cognitive dissonance demands a response. Either we’ll have to alter our internal representations or else we’ll have to modify that naughty, importunate, disobedient input. Unfortunately, elaborate cultural infrastructure, assembled to make us feel better about our failures, allows us the second option all too frequently, and so we lose the salubrious habit of continually refining our models. We’ve discussed the heuristics selected for when predictions and choices had to be made in a hurry and there wasn’t time for ponderous cogitation. In this accommodating domain, we have more time to form expectations and apply preconceived measurements and values to weigh our choices. We wouldn’t have so many cognitive biases if they never served any useful or adaptive function, but they do get in the way of the learning that underpins good analogizing and modeling, and we come to see little more than what we expect or want to see. Our predictions become self-fulfilling prophecies. Our choices are rationalized into looking like better moves than they are. Placebos work just often enough to remain in our medicine chests, especially the red, extra-strength, name-brand placebos.
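    The loop just described has a standard minimal formalization, error-driven updating (a sketch of the logic with arbitrary numbers, not a claim about neural machinery): the model’s prediction is nudged toward each observation in proportion to the prediction error, and the size of that error is the signal that the model, rather than the input, needs revising.

```python
# Minimal error-driven updating: prediction moves toward observation
# in proportion to the prediction error.
LEARNING_RATE = 0.3
prediction = 0.0                         # the model's current expectation

observations = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # the world changes

for observed in observations:
    error = observed - prediction        # the discrepancy demands a response
    prediction += LEARNING_RATE * error  # option one: revise the model
    # (option two, the dysfunctional one: keep the prediction and
    # "modify that naughty, importunate, disobedient input" instead)
    print(f"observed {observed:.0f} -> prediction now {prediction:.2f}")
```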
    The questions we ask of life frame the answers life gives us. Contextual schemas form a good working pair with interpretive schemas, and having a variety, toolbox, or repertoire of these on hand is never a bad idea. Where interpretive schemas like analogies and models, maps, and their lexicons will provide a reference point of view, contextual schemas provide frames in both space and time that give us options in weighing broader effects and longer-term consequences of our choices, or else a closer look at the details of our decisions. Choice in the relative scale of things gives us choices in how we value them, and this is an important key to the power we have to revalue our values themselves. Of course, we can use frames to deceive ourselves too: first place is second-to-last in a two-man race. The glass is twice as big as it needs to be at the moment. Disproportion is one of denial’s great tricks. An alcoholic might look ahead with an unbearable anxiety to the discomfort of withdrawal, but except in extreme cases involving DTs, that discomfort won’t really be any greater than the next scheduled hangover or two. It may require some work to correct an erroneous thought, and adjust some of the thinking it’s connected to, but that will likely be a lot less work than continuing to defend the error and the mistakes that connect back to it. Too few of us seem to have learned this.
    Infilling is a species of inference used in both contextual and interpretive schemas. Interpolation is a common form, and its most common stimulus is the problem of how to insert the right missing or absent data into narratives. It has roots in the native domain, where it’s seen in confabulated connections making sense of optical illusions, and in stringing together meaningful dream sequences. In the accommodating domain it’s the process of confabulating missing pieces in our memory, or supplying the fabricated data that we still need to complete our puzzles. Scientific hypotheses will often fall into this category, whether they are destined to survive or not. We will conjure up phlogiston to account for fire, or cosmological constants to make our equations work.

Cognitive Reappraisal
    Master Yoda reminds us, “You must unlearn what you have learned.” But thinking is hard work if you’re not having fun. It may not burn many kilowatt-hours, but it does take a proportionate toll on our time and metabolism. Most people seem to prefer doing to thinking. They may even pay money to get out of thinking, and certainly they expect higher pay if a task requires them to think. This domain is most heavily burdened by the incorrect things we have already learned, and their resistance to being unlearned, particularly when they are cemented by illusory credibility, or our own credulity. This powers our cognitive biases. Of course, had we been adequately warned or trained earlier in life, we might have practiced more vigilance towards the things we admitted or committed into our memories. We might find that, if constructing a more error-free mind were one of life’s objectives, it might well have been worth greater effort to maintain a skeptical approach and vet our facts more thoroughly. Unlearning means relearning, overlearning, replacement learning, and then “letting that sink in.” New behaviors must be gradually ingrained. We don’t just replace the old with the new, like deleting one file and inserting a new one. The mind will be stubborn. It will keep trying the old ways of thinking and comparing the old with the new, until the new thing has proven itself and the old thing is slowly forgotten. We naturally allow a strategy or script to fail a few times, just to be sure. We might drop a newer, successful strategy on purpose once or twice, just to give an old one another chance. Unless a lesson is traumatic, we resist one-trial learning. The new thing now has to be used more frequently than the old, with a stronger sense of personal relevance. This is yet another reason why elucidogens are so effective in relearning: you can simultaneously have a strong affective repudiation of an old idea and a strong affirmation of its replacement. The Buddhists call that samvega. If we consider cognition as another form of behavior, we can use the term extinction to describe the unlearning process, the gist being that operant conditioning that has been secured by reinforcement will gradually stop occurring as reinforcement is withdrawn. Much of the research here has centered on the deconditioning of fear and habitual or addictive behavior. The constant in such research is that the process will take time. It isn’t done by executive order from high in the prefrontal cortex, by snap decision, or by a conscious act of will. All of those special and affectionate associations we had with dragging on that cigarette or downing that glass of wine need time to unravel more completely. But they are there to keep reminding us until that process is done.
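    The gradualism of extinction is captured by the standard Rescorla-Wagner learning model (sketched here with arbitrary parameters, as illustration rather than as a claim about the underlying biology): the same error-driven rule that builds associative strength under reinforcement dismantles it, trial by trial, once reinforcement is withdrawn.

```python
# Rescorla-Wagner sketch: acquisition, then extinction (numbers arbitrary).
# V is the associative strength of the cue; lam is the reinforcement
# actually received on that trial.
ALPHA_BETA = 0.3                   # combined salience / learning rate
V = 0.0

trials = [1.0] * 10 + [0.0] * 10   # 10 reinforced, then 10 unreinforced

for t, lam in enumerate(trials, start=1):
    V += ALPHA_BETA * (lam - V)    # one rule drives learning and unlearning
    phase = "acquisition" if lam else "extinction"
    print(f"trial {t:2d} ({phase}): V = {V:.3f}")
```

Note what the trace shows: no trial sets V to zero by executive order; the association fades only through repeated unreinforced trials, which is the point made above about time.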
    Cognitive reappraisal names one therapeutic technique that seeks to alter the affective charge attached to an idea or belief. This is harder to do with a conservative mindset, and we require some reason to want to change, such as having a belief that leads to undeniably maladaptive behavior. In effect, we look at things in a different light, in a different frame or context, or from a different angle, and explore having different feelings about them than we had before. This is re-evaluation leading to revaluation, but in itself it falls a level short of Nietzsche’s revaluation of values, which asks if the values we are using to reassess things are even worth having. Altering the emotional charge of a memory, concept, or belief is more akin to the processes of sublimation than to those of suppression or repression. We first have to own our emotional associations, and then we exchange them for something better. Those who believe that it’s inauthentic to manage or control our emotions, or do other than let them be, may have a hard time with this philosophically, but they wouldn’t have much of a future in philosophy or psychology anyway. The term cognitive reappraisal is too useful for a single brand of therapy to keep to itself. It says a lot about how we rethink things, and particularly about how we need to recruit our affect and emotional reward systems into the changing of our minds, in order to overcome the inertia of the fixed idea and regard the reconfiguration of neural networks as something worth our while. Unlearning is neurologically expensive, and neuroplasticity is demanding.
    Changing our minds amounts to making a decision to prefer one thing over another. But this isn’t a simple cognitive task like overtyping. The new choice may need to work its way into lower levels of the brain where consciousness doesn’t go. Robert Sapolsky, in Behave, explains some of the interactions between the ventromedial and dorsolateral portions of our prefrontal cortex, where these decisions are frequently made. As with other decisions, options are presented to awareness, but not just as ideas. They are given as though they were “emotional memories of possible futures.” We don’t just run cost-benefit analyses with our cold cognition. The limbic system runs internal simulations of affect, and reports these to our awareness. We examine these options in terms of how they make us feel, how valuable and relevant they feel to us. Usually the default choice will present itself as the easiest and most comfortable. The dorsolateral PFC has its own processes, though. It inserts distances or degrees of abstraction. Sapolsky says that this helps us to “do the harder thing when it’s the right thing to do.” By this dual process, the more abstract alternative thought is held up for affective evaluation as well, but it isn’t entirely dependent on the limbic system for the needed hotness of the cognition.
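    To make the dual process concrete, here is a cartoon in code (my own sketch, not Sapolsky’s model; all values are invented): each option carries a hot, felt valuation and a cooler, abstracted one, and the weight given to abstraction decides whether the harder thing can win.

        # A cartoon of dual valuation: "felt" stands in for the limbic
        # simulation, "abstract" for the dorsolateral PFC's appraisal.
        options = {
            "default (the easy thing)": {"felt": 0.8, "abstract": 0.2},
            "the harder, right thing":  {"felt": 0.3, "abstract": 0.9},
        }
        ABSTRACT_WEIGHT = 0.6  # how much of a vote abstraction gets (made up)

        def value(opt):
            return (1 - ABSTRACT_WEIGHT) * opt["felt"] + ABSTRACT_WEIGHT * opt["abstract"]

        choice = max(options, key=lambda name: value(options[name]))
        print(choice)  # with enough weight on abstraction, the harder thing wins
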
    Fred Previc (2009) has proposed that supralimbic dopaminergic reward systems coevolved with the higher cognitive functions in intelligent animals. Reason alone has no edge over the monsters from the id down below, who have already decided “to do the easy thing.” Evolution had use for a reward system to make the hard thing more entertaining than reason alone could manage. Previc suggests this allows for sublimation of impulsive mesolimbic drives, and for their control by our rational intellect, by making the act of thinking itself more pleasant. A few millennia before, Buddha suggested terminology for such “beautiful functions of mind” that appreciate the pleasure of skillful mental functioning. This fits with Previc’s reward system for such functions as executive intelligence, agency, or will. A few of Buddha’s: kayalahuta, lightness, buoyancy, agility of the mind; kayakammannata, readiness, adaptability, wieldiness, efficiency of the mind; kayapagunnata, proficiency, competence, vigor, or fitness of the mind. He names 25 of these as higher pleasures worth enjoying. These are listed in the Abhidhamma Pitaka (not in the Suttas) as Sobhana Cetasikas. Epicurus, the high-standard hedonist, might have toasted that observation with some healthful drink. So there is some joy in reason after all, provided we know how to find it.
    Transformative learning theory describes “the social process of construing and appropriating a new or revised interpretation of the meaning of one’s experience as a guide to action” (Jack Mezirow, 1994). As this is normally presented, it shares an over-reliance on critical reflection and rationality with most of the other critical thinking programs. Edward W. Taylor (2001) does the theory a kindness by offering missing pieces of the puzzle, so that it might accord better with the realities of the brain: “Transformative learning as explained by Mezirow in the field of adult education has been criticized as a process that is overly dependent on critical reflection, such that it minimizes the role of feelings and overlooks transformation through the unconscious development of thoughts and actions. This paper further substantiates these concerns by exploring the emotional nature of rationality and unconscious ways of knowing (implicit memory) from the field of neurobiology and psychology and offers a physiological explanation of the interdependent relationship of emotion and reason and the role of implicit memory in transformative learning theory. Recent research not only provides support that emotions can affect the processes of reason, but more importantly, emotions have been found to be indispensable for rationality to occur. Furthermore, brain research brings to light new insights about a form of long-term memory that has long been overlooked, that of implicit memory, which receives, stores, and recovers outside the conscious awareness of the individual. From implicit memory emerge habits, attitudes and preferences inaccessible to conscious recollection, but these are nonetheless shaped by former events, influence our present behavior, and are an essential part of who we are.” Implicit memory here includes classical and operant conditioning, and important processes such as priming. Exploration and resolution of feelings associated with memories and present cognitive states is vital to cognitive transformation, because this is how decisions regarding transformation are voted on in the skull. The questions come down to meaning and relevance, which have affective valences that drive our attention. It’s therefore irrational to try to strip emotion and passion from reason. What we want to do instead is to weigh these correctly and keep them in their place.
     Specific anticognitives that function underneath the accommodating domain can be found itemized in Chapter 3.2 (Native Heuristics), Chapter 3.4 (Cognitive Biases), and Chapter 3.7 (Logical Fallacies).



2.4 - Situational Domain

Problems as Puzzles, Cognitive Development, Problems and Emotions,

Attitude of Approach, Sense of Agency, Processes and Heuristic Tools

Problems as Puzzles
    The situational domain concerns problem-solving tasks that don’t have people in them. It includes those procedure and strategy scripts that are not involved in social interaction, including such implicit procedural memories as using tools or musical instruments. Problem-solving here includes heuristics and algorithms, and approaches that solve investigative problems as well, but predominantly linguistic or mathematical behaviors that stay in their own little worlds belong in their own domain. So many more of our problems are personal or social that these also demand their own domains. Such problems recruit more parts of the brain and entail a greater involvement of affect, or hotter cognition, but as we will see, situational intelligence is not the same as cold cognition. The word problem itself is going to mean something a little different here: the term should be understood in terms of challenges rather than difficulties. Give a good mathematician or puzzle master a new problem and they’ll get all gleeful inside and rub their hands. Problems here are able to elicit and encourage solutions. And even a situational stressor can move us towards a stronger sense of the real, or a more pleasing vividness.
    In this domain, it’s largely just you alone with the world, though with a bit of a fuzzy area in our socializing with non-human relations. As organisms, we’re situated or contextualized in a (peri)personal space, and surrounded by others and things, within a specific span of time, with problems to solve or tasks to perform. But this is only one of the kinds of situations in this domain. We also live situated in extrapersonal spaces, in much larger frames, outside of direct interaction. In the brain’s default mode, we also are situated in our memories, thoughts, and feelings. In our cogitating mode we are situated in abstracted realities, schemas and scripts, analogies and models. The purely linguistic and mathematical problems we solve tend to occur within their own tautological worlds, and these are given their own domain, even though they are instruments put to use in the others. But here in the situational domain, we still set up game boards in our minds, problem spaces, or theaters to act out scenarios. Whether we’re thinking inside or outside the box, we still have the box to start with.
    This is the domain where our ignorance and delusion are apt to find the least justification. We have nobody to blame but ourselves when we see only what we wish to see. It’s somewhat more obvious when a failure is our own fault. From birth, we’re driven to explore and cognitively map our environment and develop the skills needed to move successfully through it. We’re born to turn over rocks and see what may live under there. In general, to whatever extent we develop inaccurate databases, we will diminish our own opportunities for success. There are some real exceptions. An oversimplified understanding can play the averages and count on our stereotypes being accurate somewhat more often than not, and this can be a parsimonious use of energy that, on average, can outweigh the disadvantages of error. Deceiving ourselves at least a little about our own competence can often give us the confidence we need to overcome the consequences of our own incompetence. These aren’t perfect solutions, especially as life advances. Electing to remain needlessly dim has long-term consequences. Had the energy that was spent tormenting the smart kids in school been spent on study instead, then “would you like fries with that?” or “welcome to Walmart” might not have become so important a part of an adult’s vocal repertoire.
    Complications arise where decisions have missed their mark. Ill-conceived solutions sometimes create more problems than they solve. Uncoordinated efforts lead to unexpected interactions. Planning for the worst-case scenario usually leads to a huge waste of resources. Compounded safety factors or margins multiply each other into ridiculous solutions using many times the needed resources. It’s a learned skill to avoid these types of errors, putting better solutions generally out of the reach of government agencies. We also have a learned persistence bias that reminds us “If at first you don’t succeed, try, try again.” But we don’t seem to have the companion advice to take a breath and try to figure out where we might have gone wrong before trying and trying again.
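    The compounding of safety margins is simple arithmetic, and a toy calculation (my example, with invented numbers) shows how quickly it gets ridiculous: five subsystems, each padded by a modest factor of 1.5, multiply into an overall overdesign of more than seven times the needed resources.

        # Compounded safety factors: each layer looks modest, the product is not.
        factors = [1.5] * 5       # five subsystems, each overdesigned by 50%
        overall = 1.0
        for f in factors:
            overall *= f
        print(f"combined overdesign: {overall:.2f}x")   # about 7.59x
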
    Although proposed in a different context, Arthur C. Clarke’s distinction between failures of imagination and failures of nerve might be applied to dichotomize our pre-failures in this domain along the cognitive-affect axis. We may, unnecessarily but culpably, limit either our access to the information needed to solve a problem or the attitude we need to approach it successfully. In science, however, a failure tends to be the same as a finding if we approach it correctly, and negative findings can be just as valuable as confirmations. “The greatest teacher, failure is,” says Yoda. To avoid life in fear of that is to fail life itself. But we can also fail at life just by performing spectacularly stupid stunts.

Cognitive Development
    It’s with our basic survival drives that we find cognition first pressed into service. This is the very bottom tier of Maslow’s need pyramid, excluding sex. Early in life we have these new beings finding themselves in numerous new situations, and it isn’t the business of the situations to figure things out. On this primary and homeostatic level, successful solutions of some degree are required, meaning mandatory. The safety and security needs on the next level up are also problematic. Childhood adversity and varying degrees of cognitive and physical impairment can be the consequences of both bad situations and bad choices here.
    Piaget’s concrete operational phase of development and mode of cognition is mostly centered in this domain. External operations can be internalized, played out in various simulations and scenarios in the mind, with a sort of vicarious trial and error, but the building blocks of thought are still largely sensory and conceptual metaphors, without much benefit of abstraction. His formal operational stage occurs here as well, using the schemas and scripts, analogies and models from the accommodating domain, but this also spans other domains, especially the cultural and linguistic.
    In 1995, Howard Gardner added an eighth intelligence to his collection: naturalistic, our evolved, ecological, and holistic way of knowing the world. This would include such ancient skills as reading nature and clouds, orienting under both sunny and nighttime skies, basic classification of flora and fauna, health assessment and disease recognition, foraging, testing unknown foods, natural medicines, tracking, hunting strategy, predator avoidance, hygiene, sanitation, shelter seeking or building, fire making, materials identification, tool making, weather prediction, and clothing ourselves. It also incorporates a sense of unity with nature, and when we get it, a sense of connectedness with our other relations, mitakuye oyasin, as the Sioux say. Some of these will come to us naturally, but we are born to learn even more from our teachers. Obviously, there are cultural elements to these lessons as well, and some acquisition of lore is in order. We continue to learn detailed procedural things from our environment in the situational domain. There are also sensorimotor elements, as with the ability to recognize dietary acidity, alkalinity, salinity, sugar, and fat with our tastebuds. Gardner’s other kinds of intelligences don’t seem to integrate well or clearly with this domain, though it might be said to include bits of his visual-spatial, verbal-linguistic, and bodily-kinesthetic intelligences (at least in terms of implicit memory and behavioral scripts).
    Research into the benefits of the cognitive training exercises “as advertised on TV” has shown mixed results, with improvements made in tasks similar to those given as practices, but little change in overall cognitive performance that has to recruit more global functions and unrelated neural networks. On the whole, the advice of “use it or lose it” persists, and it’s still regarded as wisdom, particularly with regard to later life and the aging brain. Optimal mental stimulation and cognitive exercise, within any limits that the brain itself might signal, are unquestionably at their most important in childhood, while the young brain is first being mapped, and unused neural or synaptic connections are being dropped or re-tasked. Our working memory seems to benefit from regular exercise as well. Our prefrontal cortices continue their most significant development through our mid-twenties, so culturally, we might be regarding ourselves as grownups just a little too soon. It really isn’t a time to quit learning, although many seem to do just that. New experiences are processed in a different part of the forebrain than the been-theres and done-thats (frontopolar vs dorsal prefrontal cortices, Kentaro Miyamoto, 2017). We ought to get both of those tanks topped off as well as we can, and learn how not to just keep repeating our errors, and how to better “self-evaluate our own ignorance,” before we set out to run or ruin the world.
    There’s an acknowledged dichotomy in problem-solving strategy between modularity (distinct from modular mind) and flexibility. These are negatively correlated. Modularity has an array of known solutions to specific problems, the collected work of specialists. Flexibility is more apt to move outside the problem itself, to access interdisciplinary resources, or think laterally. They each have their strengths, with modularity favored on simpler tasks and flexibility on the complex or multidimensional. Getting outside the box or high above the problem brings to mind the quote attributed to Einstein, “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.” In military hierarchies, the specialists are on the bottom and the general handles the general or overall picture from above. The general must understand all of the foes; the specialist, mainly his own. This metaphor applies also to executive functions in the mind. Modularity vs flexibility isn’t the only useful dichotomy in developed thinking styles within this domain. Evolution provides us with diversification and selection, which by analogy means expanding the components and permutations of a problem to get a closer look at options, and then applying judgmental criteria to highlight successes and cull failures. This is also known as divergent and convergent thinking. There are also useful and similar dichotomies between analysis and synthesis, and between eclecticism and syncretism.
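    The diversification-and-selection (divergent-convergent) cycle is easy to caricature in code. A minimal sketch, with an invented scoring criterion standing in for judgment: expand the permutations of a problem’s components, then cull everything but the winner.

        # Divergent: enumerate arrangements. Convergent: score and cull.
        from itertools import permutations

        components = ["A", "B", "C", "D"]

        def score(arrangement):
            # a stand-in judgmental criterion: "A" early and "D" late is better
            return arrangement.index("D") - arrangement.index("A")

        candidates = list(permutations(components))   # divergent expansion
        best = max(candidates, key=score)             # convergent selection
        print(best, score(best))
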
    Overthinking can be a big problem in this domain, especially a problem of diminishing returns or wasted effort. This includes confusing the map with the terrain, or getting lost in the map and forgetting about the terrain. Unless we are seeking to develop perfect abstractions, these only need to be good enough to solve real-world problems. Thinking here is not an end in itself, and thought is a better servant than master, as a tour of the great bulk of our academia might attest. We might also recall Jonathan Swift’s Laputans, who needed to be whacked with bladders from time to time to remind them of functionality in the real world. Since the goal in this domain is to actually get problems solved, instead of just fooling ourselves or anyone else, or baffling with bullshit, pragmatism plays an important role here in getting ‘er done. We are, however, able to adjust our criteria for determining success according to variable standards, from close enough for government work, to workmanlike, to near perfection. Further, where we are free of moral constraints, we are also freer to choose from alternate means to our ends.

Problems and Emotions
    We don’t make purely rational decisions. Without affect, we don’t sense the salience or relevance we need to even pay attention to a task. Without the social dimension, there is far less of ethical judgment or moral sentiment in this domain. Feelings here are personal, and more related to anticipation, or ongoing questions of progress, success, or failure, or subsequent reflections upon success or failure, and how that makes you feel. It might include some moral sentiments, such as how you feel about cheating at solitaire, or how often you touch yourself down there, and whether you’re OK with having imaginary beings watching you do that from Heaven. There can be difficult feelings and emotions that arise here. My personal least favorite problems involve the repair of plumbing systems more than three decades old (hakuna matata for new systems). There’s usually cursing involved, and throwing things, and some bleeding, and sometimes crying, and multiple trips in a day to the hardware store to exchange mispurchased items. It’s almost worth calling a plumber. Of course, finding yourself standing between a mama bear and her cub is a situational problem in this sense as well, all full of emotions. Fear of death is a big one, or falling, or other fears of letting go. Frustrated, vigilant, anxious, superstitious, insecure, threatened, and other stressful states emerge in places here. There can be unwelcome feelings of stupidity, fatigue, boredom, clumsiness, inadequacy, or impotence. Blame-shifting will often occur here, especially on a nature-nurture or attribution axis, and other self-serving excuses for failure.
    Stress is the most general term for what the thoughtless disobedience of the universe does to us in the situational domain. At its most fundamental, it’s a reaction to upset or deprivation in any one of our homeostatic requirements or more important needs. At our higher cognitive levels, it subsumes cognitive dissonance, frustration, fear, vigilance, anxiety, disappointment, superstition, resistance, loss of confidence, insecurity, and uncertainty. The predominant response to stress is the coping strategy. An extensive list of these can be found in Chapter 3.5, Coping Strategies, and those found most often in the situational domain are listed under that heading. But there are also healthy forms of stress, and coping strategies that will leave us improved. A loss of predictability or sense of control is often sought out on purpose. We pay more money for the E-ticket ride. We pay to jump out of airplanes. Working out, play, and sports are intentionally stressful. These are marked by challenge, engagement, and flirting with overstimulation, often in the hope of getting into a zone where our activities are supported by one’s whole adapted being. Stress, whether voluntary or not, along with help from glucocorticoids, will heighten our senses, and help us to “get out of our heads” by overriding the prefrontal cortex. Bad stress, on the other hand, ages us, and also makes it more difficult to override resentments, traumatic memories, and phobias.
    Uncertainty is going to be a given in life, reduced mainly by the limitations and restrictions we place on our own experience, or the protective walls we build around ourselves to stabilize our worlds. Our word for paradise derives from a walled-in garden, without any windows to let in bad news. Western theologians, in making their deity omnipresent, made him immovable; in making him omniscient, made him unable to learn. Buddha took the opposite approach, insisting that it’s on us to adapt to the hard facts of existence: its impermanence, its incompleteness, and its illusion of lasting identities. Those who want to master all they survey must lock themselves into small boxes to survey as little as possible. Our need to solve problems and make decisions requires us to narrow our world or its options, and to reduce uncertainties to high-probability desired outcomes. As vulnerable and mortal fleshbags, it’s right that we should be averse to unnecessary or whimsical risk and make some preparations against the unforeseen. But when we develop an excess of uncertainty avoidance behavior, this becomes counterproductive. Harsh regulations on behavior, reliance on absolute truths, belief that there is only one way to live, obsessive risk aversion, and design for worst-case scenarios may hold some uncertainty at bay for a while, but any opponent declaring war on change, diversity, and ambiguity is doomed to eventual failure.
    Other subjective elements come into play when we’re called on to value choices in approach-approach, approach-avoidance, and avoidance-avoidance scenarios. These begin with primitive valences: wanting more and wanting less, liking and not liking, being drawn and being repelled, approaching and avoiding, and fighting versus taking flight. Our emotional commitments to identities, beliefs, and ideologies can also deny us access to practical ways of solving problems and dealing with the world because they are proscribed, taboo, haram, or not kosher. Sometimes we can allow relevant-but-less-than-positive feelings to remain in the background, or use what some have called constructive discontent. That is sometimes quite useful. Ultimately, it’s on us to do triage on this mess and decide what needs to be done before we can get ‘er done. Task-involvement describes precedence given to a problem at hand before personal or ego involvement, which is subject to hotter cognition and distraction, especially under threat of failure. And under a threat of failure, there is also self-handicapping, not giving a task your all because “if you’re going to fail, it’s better not to have tried your best” (Baumeister, 1991).
    Background emotions and moods can bring extraneous distractions to the situation from external sources. Unrelated states like anger, arousal, boredom, free-floating anxiety, existential anxiety, depression, discouragement, dread, fatigue, hunger, illness, imbalance, nausea, overload, stress, and tension can all play hell with decisions, making focus or partitioned awareness a good tool for the belt. We’re already making unconscious assessments of what aspects of the problem are irrelevant and what solutions are impossible, so we don’t need any further distractions.
    We will look first for salience, then personal relevance, and then value in our experience, and all of our problem-solving and decision-making involves some sort of evaluative thinking. Higher and lower opinions are raised and lowered by affect as much as by any cold, cognitive standard. We can make objective assessments where the only affective assent we may need to give is congratulatory, on finding the perfect answer. Outside of these, judgments are value judgments, the weighing of choices on merits. They can be normative, prescriptive, or proscriptive. The subjective components must be disallowed when looking at logic, so there needs to be enough of the objective in there to stand alone. Objective standards may well be one of the things we examine, but we approve of our solution or choices when they feel right, and when they feel preferable to any others. We may have some say in the processes by which we approve, and we may have thought much about it, but these are learned skills and circuitous routes to gaining control of our judgments. We can, with some difficulty, and to a limited extent, take charge of our ability to assign the value we seek to further. In fact, this may be our only path to free will, but it isn’t for everyone, it isn’t all of the time, and it takes some time to develop.

Attitude of Approach
    In the accommodating domain we looked briefly at the ability to have fun with our problems, or enjoy thinking them through, boosted by an abundance of dopamine in the prefrontal cortex, a sort of handicapping for the PFC that helps it to compete with the limbic system and put forth the extra effort to figure things out. That was in the context of rethinking the things that were already built into our constructed minds, but the utility of a kind of thinking that enjoys itself doesn’t end there. Fred Previc proposes a critical role for the expansion of dopaminergic systems in the evolution of human intelligence. Here, “dopamine is postulated to be the key neurotransmitter regulating six predominantly left-hemispheric cognitive skills critical to human language and thought: motor planning, working memory, cognitive flexibility, abstract reasoning, temporal analysis/sequencing, and generativity” (1999). It should be remembered, however, that dopamine doesn’t work alone, and in certain combinations with other neurochemicals, it can lead to feeling simply more instead of better. This neurochemical boost would have led to an outward focus, beyond peripersonal space and time, to a greater exploration of the unknown, to the growth of culture, and the invention of new things under the sun. In the situational domain, our cognitive processing is free to use any of Buddha’s 25 beautiful mental functions or sobhana cetasikas, all encouraging aids to purposeful intelligence. There can be a joy or a thrill in challenge, in a sense of the game afoot, with more to follow upon achievement or victory. While we will still lean strongly towards the rational in this domain, and this helps us to choose what we can take away from an experience, the memories that arouse at least some feeling or emotion will be better remembered than those that don’t.
    Confidence in our assessment of problems, and in our toolkits for solving them, is needed more than certainty. Certainty doesn’t always have to mean overconfidence, but it’s often equally unnecessary and counterproductive. It certainly doesn’t help with flexibility. We want to have a sense that we are in control, at least of the problem’s givens or the puzzle’s pieces. But certainty will tend to imagine these in rather fixed arrangements. People might fear approaching a problem that has too much randomness or disorder, but too many prior assumptions don’t help either. Closure is supposed to come at the end of the process, not partway through it. Insight often needs to dynamically reorganize the perceptual field or datasets as presented, and this has even been used as a definition of insight. Persistence is almost always a good thing to have because it implies that the problem may not yield immediately, helping us to adjust ourselves to a need for patience. Confidence gives us that, while certainty might give us impatience.
    It doesn’t always hurt to have some positive illusions about our abilities, especially where these contribute to confidence. This can be like a bootstrap psychophysics, or self-fulfilling prophecy, or some placebo cocaine, without the impairments, faulty assessments, and aggressive actions. But we should remain at least dimly aware of its limits and stay prepared to adjust to failure when this begins to cause trouble. Approaching problems with a confident, positive attitude tends to make the entire experience more salient, and both successes and lessons learned wind up getting better registered and better interconnected in memory. Another useful attitude is an opportunism with some of the characteristics of appetite or hunger, a sense of sport or the game afoot, an eye of the tiger. The chronicles of winners at the Olympic games are loaded with tips and tricks and anecdotes that illustrate them. When you can really be the ball, that ball’s gonna get hit. Curiosity might be described as an appetite or hunger for new experience. The first thing an intelligent animal will do in a new environment, as soon as normal emotional and cautionary inhibitions are satisfied, is explore and map the territory’s resources and hazards, so that later it can navigate through these quickly, with whatever purpose it has in moving so quickly. The initial exploratory behavior may appear biologically expensive and wasteful, since much of this data may never be used, but its value has been well established by the survival of the species that use it. We are collecting affordances for possible future uses.

Sense of Agency
    The wrong approach to take in situational domain problems is nowhere more evident than in the fear of mathematics in students. So many will lean or hang back from problems instead of leaning forward into them, lacking all eye-of-the-tiger courage. But, just like in downhill skiing, when you hang back, you take the weight off your edges. You literally and metaphorically lose your edge, where almost all of your control is to be found. Subsequent failures compound and make the problem even worse, so that it might be best to go all the way back to the point where control was lost, run through the lessons again, and start leaning forward into the problem.
    Most of neuroscience is slowly coming around to Buddha’s point of view that the self is more or less illusory, or at best, a temporary composition of shifting components that’s capable of maintaining itself as a subjective sense of continuity. While we are living organisms, we are also “thoughts without a thinker.” Nevertheless, even Buddha spoke in the first person, and scientists who find themselves in mortal danger will still feel that they have something essential, precious, and possibly even sacred to protect. In the situational domain, we rely on our sense of self to act as an executive, making decisions, a conscious will. This is not to say that we all have free will, or that any of us have it all of the time, which is a subject for later discussion. This executive construct makes its home largely in the prefrontal cortex, but it isn’t that little homunculus peering out at the world through our eyeballs. It’s important to successful problem solving that this executive sense, or an internal locus of control, maintains a level of confidence in order to be an effective agent. It doesn’t need all the flattery that we heap upon it, though, or to be told that it’s some immortal spark of the divine. It just needs good reasons to make good choices. This sense of agency can be compromised by a number of mental formulations regarding its role in the grander schemes. It can find itself referring to itself in the passive voice. It can adopt a victim mentality, or a disease mentality, such that it can’t help itself. It can believe itself to be little more than the product of its environment, with most of its outcomes already predetermined. It can be made to feel tiny, alone, anonymous, and impotent. It’s repeatedly told to let go, let it be, and accept the things it cannot change. Many in recovery imagine themselves under the control of an inanimate liquid that’s “cunning, baffling, and powerful.” Many, and perhaps most, can find some relief from these unwanted feelings in fairly complete submission and obedience. While a great deal of what we are is in fact a product of our environment or nurture, particularly if we’ve suffered childhood adversity, it’s still up to our illusion of executive function to get a grip, suck it up, and play the hand we’ve been dealt. It’s what problem solvers do.

Processes and Heuristic Tools
    These are some of the many general mental processes involved in the situational domain: abstract representation (patterns over patterns); associative clusters connecting perception, affect, idea & lexeme; behavioral mores and rules; causal inference; cognitive biases; cognitive flexibility or shifting; coping strategies; decision making; depersonalization or lightening up; desenrascanço (Portuguese for artful disentanglement); improvising solutions; destigmatizing failure; domain-specific knowledge, expertise, and skill sets; executive function (internal locus of control); experiment; evaluation; exploratory behavior in new environments; generativity; goals and objectives; holistic or systems thinking; humor (as juxtaposition of referential matrices); inference (from premises to conclusion); inhibition (abeyance of automatic responses); learning strategies (including learning of learning strategies); logistics (coordination of resources and tools to be deployed); mnemonic devices; multidimensional or non-linear thinking; one-dimensional or linear thinking; orientation; perspective taking; procedural memory and scripts; puzzle assembly; research strategies; reward prediction; situational awareness (intelligence and sitreps); spatial comprehension; strategic scripts and flow charts; tactics; temporal comprehension; and tool use and instrumentality (see how corvids do).
    And these are some of the specific heuristic modalities: abstracting to higher levels and solving in theory; algorithms; analogy, metaphor, and modeling; Bayesian thinking (remembering priors, undoing assumptions, and incremental updating; sketched in code after this list); bell curve studies (including finding the extremes and exceptions to norms); brainstorming, shotgunning, and free association; changing the lexicon and discourse; considering contrary cases; convergent thinking (towards your one final answer); correlative thought; divergent thinking (exploring multiple possible solutions); dividing to conquer (reductive analysis); experiment and scientific method; extrapolation; falsification or disproving; formulae and equations; genetic analysis (causal inference or analysis of root causes); interdisciplinarity; interpolation; juxtaposition of puzzle pieces in unexpected ways; lateral thinking (oblique and unexpected approaches); means-end procedure; mental contrasting (seeing the outcome and the work); methodical inquiry; perspective shifting; quantification and measurement; questioning assumptions and premises; questioning adequacy of evidence; questioning selection of heuristics and algorithms; recognition (finding similarities to problems already solved); recombination of components and elements; reframing; representational flexibility; rescaling; schemas and scripts; skeptical inquiry; switching to nearby models with known solutions; switching to simpler models with known solutions; systems analysis or holistic thought; prediction from precedent; prediction from statistical inference; research; thinking outside the box; time horizons and deep time perspective; trial and error (both real and virtual or vicarious); and troubleshooting.
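    Of these, Bayesian thinking is the most readily shown in code. A minimal sketch, assuming an invented diagnostic example (a rare condition, an imperfect test): the prior is remembered, and each new piece of evidence updates it incrementally rather than replacing it.

        # Incremental Bayesian updating on three positive test results.
        prior = 0.01           # P(condition) before any evidence
        sensitivity = 0.9      # P(positive | condition)
        false_positive = 0.1   # P(positive | no condition)

        def update(p, p_e_given_h, p_e_given_not_h):
            """One Bayes step: return P(hypothesis | evidence)."""
            numerator = p_e_given_h * p
            return numerator / (numerator + p_e_given_not_h * (1 - p))

        posterior = prior
        for n in range(3):
            posterior = update(posterior, sensitivity, false_positive)
            print(f"after positive test {n + 1}: {posterior:.3f}")
        # climbs gradually: 0.083, 0.450, 0.880
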
   The specific anticognitives relevant to the Situational Domain are listed and discussed in Chapter 3.4, Cognitive Biases, Chapter 3.5, Coping Strategies, and Chapter 3.7, Logical Fallacies.



2.5 - Emotional Domain

Affect and Emotion, Reason and Emotion, Setpoints and Treadmills,

Hydraulics and Other Fallacies, Emotional Self-management,
Cognitive Behavioral Therapies, Reappraising and Revaluing Values,
Resentment and Neuroplasticity, Classifying Emotions

“Ultimately, happiness comes down to choosing between the discomfort of becoming aware of your mental afflictions and the discomfort of being ruled by them.” Yongey Mingyur Rinpoche

Affect and Emotion
    Emotion will be used in two senses in this domain discussion. The context should clarify which. The domain name will use it in its broadest sense to include all affect, whether this is emotion in the narrower sense, or feeling, sentiment, mood, and even temperament, in a generally ascending order of duration. In a narrower sense, emotions are affective reactions to subliminal triggers concerned with our various levels of needs, and as the word implies, they are evolved responses that encourage us to perform the motion that will satisfy the need. Motion or movement generally refers to taking action or advancing some behavior. But here, we are also going to consider cognitive activity, and especially anticognitive activity, as a form of behavior in this sense. Emotion, then, can move us to meddle and mess with our own minds in both salubrious and maladaptive ways. Feelings are more subjective, or simply felt, more afferent, less efferent, and when they lead to behavior, they may be re-tagged as emotions, which makes it challenging to separate the two with classification. Moods are diffuse affective states that generally last for longer durations than emotions and are also usually less intense. Moods are less specific, but may still be triggered by some particular stimulus or event. Temperaments are very long-term affective positions that many will claim are inherited affective set points, independent of learning, values, and attitudes toward life. People don’t change? People can change?
    All affective states, taken together, can be thought of as another class of sensations, alongside exteroception and interoception, sensations that monitor internal states of the nervous system, and especially its chemistry. This is the brain tasting its own molecular soup. Like the other senses, these sensations combine in and with other mental functions into affective perceptions. Based on similarities to previous experience, emotional perceptions may develop into propositional attitudes as they emerge for conscious review, as we try to make sense of them, but they will often fail at these comparisons on further review in the PFC, and will often get dismissed as incorrect propositions, to be told, as it were, to simmer down. Emotions can be regarded as experiences in their own right, with the whole or primary point of their being beginning and ending in their arising, although it’s not correct to separate them from their triggers. Sometimes this might be called wallowing, sometimes savoring or relishing. Anticipation can be as exciting as an event, and you still get your dopamine, whether the event happens or not.
    This is the cognitive domain that’s referred to as emotional intelligence. Andrew Colman’s Dictionary of Psychology defines emotional intelligence as “the capability of individuals to recognize their own, and other people’s emotions, to discern between different feelings and label them appropriately, to use emotional information to guide thinking and behavior, and to manage and/or adjust emotions to adapt to environments or achieve one’s goal(s).” But with regard to anticognitives, it includes our stupid feelings and how they encourage our maladaptive behavior. An intelligence is an ability to learn, infer, or understand from experience, to retain important lessons learned, and to respond successfully or adaptively to new experiences. It consists of a lot more than reason, while the continued neglect of processes other than reason in such practical areas as critical thinking may actually work to diminish our overall intelligence. In “Critical Thinking and Emotional Intelligence,” Linda Elder (1996) writes, “To engage in high quality reasoning, one must have not only the cognitive ability to do so, but the drive to do so as well. One must feel the importance of doing so, and thus be driven to acquire command of the art of high quality reasoning… . Thus the affective dimension, comprised of feelings and volition, is a necessary condition and component of high quality reasoning and problem solving. Every ‘defect’ in emotion and drive creates a ‘defect’ in thought and reason. Intelligence on this view, then, presupposes and requires command of the affective dimension of mind. In short, the truly intelligent person is not a disembodied intellect functioning in an emotional wasteland, but a deeply committed, mindful person, full of passion and high values, engaged in effective reasoning, sound judgment, and wise conduct.”
    Emotional literacy in this domain, according to Daniel Goleman (1995), will encompass: “1) the ability of immediate self-awareness, recognizing a feeling as it happens, understanding the causes of feelings and being able to separate feelings from actions; 2) the ability to manage the sometimes obstreperous nature of emotions, which involves more effective anger management and tolerance for frustration; 3) the productive utilization of emotions which involves marshaling emotions in the service of focusing attention, self-motivation, delayed gratification and more self-control; 4) the ability to empathize, reading emotions of others, reflecting their needs and wants by taking another’s perspective and through active listening; and 5) the ability to handle relationships, the skill in managing emotions in others.” But here, two more important abilities are proposed: 6) the ability to manage the emotions associated with our memories, or initiate top-down transformations to implicit memory; and 7) the ability to manage the emotions in our present awareness by taking charge of our capacity to reassign personal relevance and value. Neither of these abilities is commonly seen, and neither is easy to develop. They have, however, been explicitly practiced by Theravada Buddhists and others for ages.
    William James observed, “the perception of bodily changes, as they occur, is the emotion.” When we react to something we go through stages, usually involving emotional responses in the limbic brain. Eventually, the reactions may become conscious, when we can begin to evaluate or assess them in the forebrain for their accuracy and usefulness, and begin to deal with the issue linguistically as well. But the earlier, triggered, preconscious reactions are still in play. Hume asserted that reason is and ought to be the slave of the passions, and so placed emotions in an epistemically and volitionally prior position to reason, at the heart of our character and central to what drives us. But emotions are infamous for motivating our irrationality. Steven Novella (2012) writes, “Emotions are involuntary and subconscious. We don’t choose to feel angry; we just feel angry and then invent a reason to explain why we feel angry—with varying degrees of insight. In addition, explanations we invent for our feelings and behavior are typically highly self-serving.” But importantly here, although “we don’t choose to feel angry,” we are still able to choose to not feel angry once anger has begun to emerge into awareness, and we can gradually learn to alter our initial response to a stimulus that at present triggers anger. We can do this without suppression or repression. In the West, Aristotle may have been the first to write of anger management: “Anybody can become angry - that is easy, but to be angry with the right person and to the right degree and at the right time and for the right purpose, and in the right way - that is not within everybody’s power and is not easy” (Nicomachean Ethics).
    Feelings and emotions arise unbidden out of the subconscious, in response to sensations, perceptions, thoughts, other feelings, and recalled memories. The emotions themselves are subjective interpretations of our neurochemical responses to triggers. They are preserved in evolution for helping us allocate our attention by their insistence on being felt, frequently triggering stereotypical behavioral responses more quickly than our rational thinking could ever hope to manage. The memories summoned into working memory, or put on standby as only potentially relevant, can still bring their associated affect along with them, even if only distantly related to the task at hand. Sometimes this will account for a general anxiety. None of the stimuli or sources need to become conscious before the affect is strong enough to be felt. Emotions are more commonly associated with intuition than thought. Although most intuition is a mixture of affect, conditioning, memory, and learning, intuition still won’t be fully accounted for without a prominent place for emotion. The salience network of the brain is on perpetual watch for personal relevance, and the first signals it gets from sensation and perception aren’t thoughts: they’re emotional reactions arising out of conditioning or prior experience. Even in our most dispassionate states, consciousness and emotion are not separable, and affect comes first.
    Damasio (2000) asserts, “the biological machinery underlying emotion is not dependent on consciousness.” He also reminds us: “a significant part of body input actually does not travel by nerves but by way of the bloodstream.” In terms of always being first in line for consideration, it’s like emotion’s last name is Aardvark. Then we tend to decide what we’re feeling after the body and the older brains have prompted us from below. The prompt is supposed to tell us if we’re doing well or poorly, and whether we’re threatened or safe. And whether we want to approach that thing, back away slowly, or run like hell. Hedonic motivation is more than approaching pleasure and avoiding pain in our sense receptors. It’s also about approaching good feelings and avoiding the bad. The computational theory of mind, which proposes to one day replicate mind with hardware and software and patterns of electrical charge, has yet to explain where emotion enters the picture, if at all, or even to discuss how emotion might be essential to sentience and consciousness. Perhaps there will be other processes that will tell an AI what to value and how to choose, but it doesn’t follow that this will require, entail, or generate self-awareness.

Reason and Emotion
    Reason alone will want to see things consistently and fairly. Rationality is due proportion, things in their proper ratios, rationed according to their merit and truth. Emotion plays hell with that and has an inflated sense of both scale and permanence. Emotional moods and more fleeting states can often be seen or exposed in the use of linguistic absolutes and hyperbole, the words always, never, nothing, and completely. They are frequently seen in black-and-white thinking or false dilemma as well. The first person singular will be used more frequently too, or the second person in accusations. A minor inconvenience becomes a catastrophe. The heartless or thoughtless thing that you did nearly killed me. In an argument, that thing you do on occasion becomes that thing that you always do. The occasional omission is now that thing that you never do. Emotional disproportion is there to drive us with its hyperbole, because precision measurement alone lacks motive force. It’s just too boring to elicit big actions and reactions. Emotional states may also bring up any related resentments, even from years ago, and add these to the fire or argument, even things long thought forgiven and forgotten. Emotions might hijack our reason entirely in such self-perpetuating states as hysteria, mania, or lust, or they may simply lock us into cognitive and behavioral ruts and loops. Hey, the heart wants what it wants. Who am I to change that? It may be that the most useful function for reason is inhibitory, the power to just say no, by presenting to awareness another picture of the future that looks to be less unpleasant than the option currently being entertained. But a reason to resist an impulse or defer gratification is usually going to need an emotional charge of its own. Reason can act as an immune function, like a vaccine, a dead version of the threat to practice our defensive skills on.
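    The absolutes are concrete enough to count. A toy screen (my own illustration, nothing more diagnostic than that) flags the four absolutist words named above as a rough marker of emotional disproportion in a stretch of text:

        # Crude absolutist-language counter, using only the words named above.
        ABSOLUTES = {"always", "never", "nothing", "completely"}

        def absolutist_ratio(text: str) -> float:
            words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
            hits = sum(1 for w in words if w in ABSOLUTES)
            return hits / max(len(words), 1)

        print(absolutist_ratio("You never listen. You always do this. Nothing helps."))
        # 3 of 9 words are absolutes: a ratio of about 0.33
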
    Jung wrote, “Where wisdom reigns, there is no conflict between thinking and feeling.” We might also suggest that the resolution of conflicts between thinking and feeling is one of the characteristics of wisdom, and one of the pathways to wisdom. The Stoics thought of the emotions as judgments about the value of things. Rather, they are more like expressions of the values that we (as our organisms) have given to things, often with little or no conscious effort. Psychologists are beginning to recognize the existence of emotion schemas, constructs that we develop about what emotions mean, and patterns of emotion, stored in long-term memory, that guide our evaluations. These arise largely out of interactions in and between the intrapersonal and social domains. But as said earlier, emotions don’t constitute schemas, though they form at least a small part of all schemas. What these psychologists are describing are schemas for the understanding and interpretation of emotions. But at least this recognition is encouraging: psychologists aren’t always the quickest ones to notice things about the psyche.
    There is a common assumption that our feelings are the closest we get to honesty or authenticity, and that any attempts to manage them are therefore inauthentic, somehow a departure from the path of the heart. But emotions aren’t necessarily authentic at all, or even more truly our own than our thoughts. We are able to pretend to the point of conviction, as actors might with their methods. We can come to believe our own pretense, and dance or smile our way into happiness. Because life is an adaptive process, when we “fake it ’til we make it” we might even claim that what began as artifice has become authentic. The fact is, an authentic person is a whole person, and is entitled to use the whole brain, thoughts and all, and even use cognition to inhibit or guide affect where circumstances might recommend such a course.
    The function of feelings can duplicate, parallel, and reinforce many of the functions of rational thought, especially in matters of comparison, evaluation, and choice. Decisions we make between options are driven by our values, and values are fundamental to our commitments. In most cases, they can’t really be separated. We require a sense of salience or relevance even to become aware of and pay attention to our options. We require emotions like curiosity to drive us to investigate our environment. We count on feelings to give us thumbs up and down on our methods and rates of progress. Emotions are fundamental to just about all of our decision making. We will use them in an anticipatory way when we have choices to examine. We try to estimate how we will feel after we implement this choice or that, with this largely based on similarities to past experience, and how those felt. We rely on this even with memories ravaged by time, confabulations, and biases that favor homeostasis for settled beliefs.
    Background moods and emotions were discussed briefly in the situational domain as able to bring distractions into the present context from external sources. The ability to distance ourselves from these, to sequester or partition our tasks from unrelated distraction in any cognitive task is important in any domain. Even before memories arise, and sometimes even before stimuli are present, we may be reacting to something imagined as possible, a rekindling of some traumatic memory, or a vague anticipation that more stress is on the way. We may even have anxieties that something may come along and create anxiety. Longer-term temperaments, on scales like timidity-to-boldness, or cheerfulness-to-melancholy, might be more resistant to change, but they are somewhat more amenable to sequestration, given the time we have to adapt to them. They do have significant roles to play in the long-term values we hold and the long-term goals that we set, and these in turn have effects on present states of mind and cognitive abilities.

Setpoints and Treadmills
    Making the pursuit of happiness itself a goal is one of our sillier ideas. It’s like saying that the most important part of the journey is the speedometer reading. Robertson Davies has suggested, “Happiness is always a by-product. It is probably a matter of temperament, and for anything I know it may be glandular. But it is not something that can be demanded from life, and if you are not happy you had better stop worrying about it and see what treasures you can pluck from your own brand of unhappiness.” It’s a good thing that we can learn to do things that leave us satisfied, pleased, or happy so much of the time, but those feelings are not the point of it all. They are merely signs that we might be on the right track, which is nice to know.
    The intensity, and sometimes the overall quality, of our default or resting emotional state can be something approximating a homeostatic constant. This is sometimes called an emotional set point, and it has aspects of a general temperament. We will tend to return to this level of affect in spite of major positive or negative life changes, although repeated boons or blows over time can perhaps set upward or downward trends in motion. There is a sort of self-governing action here, as good fortune has us expecting still more, increasing the power of disappointment as the odds get more even. When we habituate to a new state and its intensity returns to normal, this seems to reset us in order to better experience the full range of responses to the next thing. When we are moving, say in a car or in an elevator, we don’t really feel the motion unless we’re accelerating or decelerating (acceleration also refers to turns and sideways bumps). Our affections of pleasure and happiness can be problematically similar to our sense of acceleration: we will tend to forget them when we remain in a balanced state and attend to them best when things are changing. Maintaining a valuable steady-state emotion like gratitude can be a real challenge.
    Love is now known to be just a clever trick that life plays on us, to render imperceptible the unpleasant inconsistencies between another person’s true character and our own hopes and expectations. It must be maintained until we are bonded and it’s too late to get away. That charming little laugh becomes a hyena-like cackle. There can also be some mistaking of the bonding effects of oxytocin for various forms of love and lust. Or else we might find ourselves more attracted to someone we’ve met in a high-intensity situation, since we can confuse anxiety with arousal. That chemistry tends to wear off, returning us either to true love, which is far more ordinary than we had hoped, or back to being single.
    We are wired to keep seeking improvement, not homeostasis. This bodes ill for maintaining pleasure and happiness in steadier and more sustainable states. This phenomenon is also called hedonic adaptation: we get used to the pleasant things, and until we can learn to control our subjective states, we are left with having to combat this unexpected boredom by adding endless new variations and amplifications to our experiences. Further, we are somewhat more sensitive to a loss than to a gain: when our precious thing gets lost or stolen we will usually have stronger negative feelings than we had positive feelings when we acquired the precious thing in the first place. This means the game is rigged in favor of our dissatisfaction, as our expectations adapt primarily upward. This is sometimes called the hedonic treadmill. We have a similar problem in economics. Rational people might understand rationally that sustained growth in a finite system is unsustainable, yet any decline in the positive rate of growth is called a recession, or even a depression. The best models we know for true sustainability are natural climax ecosystems, which maintain a dynamic equilibrium where the quantity of living equals the quantity of dying. In finite systems, anything short of this must by definition, and pursuant to our analog of the 2nd Law, collapse. It requires reason for us to embrace our feelings and emotions with this understanding. It seems that we need to consciously cultivate our senses of appreciation, satisfaction, gratitude, and even reverence, in order to successfully manage a more steady-state, equilibrated, sustainable life and livelihood. One supposes that another expression for this is emotional maturity.
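    A minimal sketch of these two claims together, as a toy model with invented constants: felt affect decays back toward a set point each step (hedonic adaptation), and losses are weighted about twice as heavily as equal gains (loss aversion).

        # Toy set-point dynamics: a windfall fades, an equal loss cuts deeper.
        BASELINE, DECAY, LOSS_WEIGHT = 0.0, 0.5, 2.0

        def step(mood: float, event: float = 0.0) -> float:
            impact = event if event >= 0 else LOSS_WEIGHT * event
            mood += impact
            return mood + DECAY * (BASELINE - mood)  # drift back to set point

        mood = BASELINE
        for event in [1.0, 0, 0, -1.0, 0, 0]:
            mood = step(mood, event)
            print(f"{mood:+.2f}")
        # +0.50, +0.25, +0.12, -0.94, -0.47, -0.23
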

Hydraulic and Other Fallacies
    There are a few misconceptions about what emotions are that arise from a weak conceptual analogy to liquids, and to the hydraulic phenomena this analogy entails. The first might be called the beaker theory of affect, which suggests that we have only a finite amount of an emotion, such as love, to distribute among those around us. Spouses, of course, will demand all of “that” kind of love. Ninety-nine and a half just won’t do. Another such metaphor is more familiar in its hydraulics. Emotions are like volcanic forces that arise from the furnaces down in the id. When they find no release, they build up pressure and will eventually find other ways to come out, sometimes in explosive eruptions. Or emotions that we suppress, or stuff down deep inside into anaerobic environments, also build up pressure as they fester and ferment, and these need to be vented from time to time by a process called catharsis. We have Freud to thank for some of this. Too few have objected to this: Hey, wait a minute. Where is this stuff held in storage? Isn’t emotion a process instead, such that these high-pressured hydraulic forces are actually created on the fly?
    Some other metaphors have confused us from time to time. Emotions are the springs of action, so we are encouraged to stay the right amount of wound up to keep on ticking. Hot cognitions can be burned or seared into memory. We might say that emotional arousal will increase the likelihood of memory consolidation during the retention stage, but consolidation may not be the best word, since, being a process, it isn’t that solid. Neither is our memory an equivalent to data storage, since our sensory and linguistic experiences are interwoven in memory with the felt affective states. Daniel Goleman writes, “Cognitive scientists have been seduced by the computer model of the mind, forgetting that, in reality, the brain’s wetware is awash in a messy, pulsating puddle of neurochemicals, nothing like the sanitized, orderly silicon that has spawned the guiding metaphor for mind.”
    Finally, emotions can reify experiences and cement them into memory in ways that make them seem like physical or metaphysical realities, and will validate any extraneous stuff that this might entail. A large number of our experienced psychic states have been turned into deities and planes of existence in the formation of religious doctrine. God is infinite love, despite His followers’ hatred of the infidels. Emotional intelligence is also knowing when not to construct a maladaptive emotion out of a bad metaphor. Some metaphors can still be used: they just shouldn’t be mistaken for schematic diagrams of how things work in reality. Emotions that get buried by suppression or repression can still do a lot of subliminal stuff “down in there,” but it isn’t because they are circulating bad or toxic juju or eating away at our insides.

Emotional Self-Management
    Although self-management and skill in deferring gratification will correlate strongly with generalized intelligence, the relationship between the two isn’t always causal or consistent. Smart people still do very dumb things for stupid emotional reasons. Affective self-management shows itself as a need in cases where personal reactivity and defensive overreaction lead to maladaptive behaviors. Triggers are pulled and traps are tripped hundreds of milliseconds before they come to our attention. Desensitizing the triggers and altering the traps isn’t a thing that’s done simply, with top-down executive commands from the PFC. It’s done by indirect and roundabout exercises that will alter the affect that we associate with the memories that act as their triggers. These are learned techniques, and they can also be taught.
    It’s difficult, if not impossible, to alter our evolved affective needs and emotional reactions and responses. But what we do with them after they arise can affect how we respond to them when they arise in the future. We can also alter the ways in which they accompany our memories. David Greenberg (2017) introduces a Mentalized Affectivity Scale (MAS) as a three-component structure underlying abilities of emotional regulation: “Identifying emotions (the ability to identify and describe emotions, examine them, reflect on the factors that influence them); Processing emotions (the ability to adjust, reappraise, modulate and distinguish complex emotions); and Expressing emotions (the tendency to express emotions outwardly or inwardly).” The regulation of emotions may modulate their amplitude up or down, or alter their quality, or discontinue them in ways that are not the same as repression and suppression. This last bit is important: emotional self-management isn’t the same as emotional repression or suppression. It may simply be a looking aside, a discontinuation, or a cessation of the production of neurochemical factors.
    Issues of emotional self-management don’t usually arise until feelings and emotions get us into some sort of trouble, such that we now have to call the policemen of reason, or the firemen of therapy. We tend to set up another false dichotomy here as well: if we turn against the bad feelings, we have to turn against the good ones as well, as though the alternative to feeling bad is the affective anesthesia of reason. But seriously, who doesn’t love Mr. Spock? This whole myth of the antipathy of head and heart is delusional, used mostly by folks who can’t use their heads all that well. People with difficulties in thinking will claim to be right-brained instead, unaware that right-brained thinking is even harder to do. Neither is holistic thinking the simple-minded, new age thing they believe it to be. Affective or emotional self-management merely gives us smarter emotions.
    People apply this false dichotomy between reason and the passions to their misunderstanding of Buddha, thinking that the opposite of suffering cannot be any form of pleasure. While, sadly, Buddha isn’t depicted in the Suttas as laughing, or even smiling all that much, he does speak at some length on the pleasures worth enjoying. Not the least of these are the four Brahmaviharas, abodes of Brahman, being metta (loving-kindness), karuna (compassion or helpfulness), mudita (supportive or sympathetic joy, sort of an expanded naches), and upekkha (equanimity). In addition to his 25 sobhana cetasikas, I would be quick to add four other pleasant states he praised elsewhere: khama (forgiveness), katannuta (gratitude or thankfulness), garava (reverence or deep respect), and khanti (patience). The entire second leg of his Eightfold Path, Samma Sankappa, or Right Intention, might be thought of as training in affective self-management. This is not about the elimination of feeling: it’s just the elimination of stupid feelings that keep us swamped in suffering. Suffering is optional with Right Intention. A big part of the exercises on this part of the path is the deliberate substitution (tadanga) of superior emotional states for inferior ones. Yes, you’re allowed to make such judgments, just not according to the Buddha of new age Facebook memes.
    It’s often imagined that enlightened, equilibrated, or wisdom states entail more calm or serenity than powerful feelings and emotions. Steven Sloman even writes, “As a rule, strong feelings about issues do not emerge from deep understanding” (Kolbert, 2017). But there may be a lost distinction between strong and powerful here. In physics, power is defined as the rate at which energy changes form to get work done. Strength or force is one of the ways to do this, but it implies expending energy against resistance or inertia. The alternative path to power is to expend less energy by finding a way around the resistance or inertia. Sometimes this is called sensitivity, sometimes adaptive intelligence or adaptive fitness. Further, some of our more exalted states may at times be felt as strongly as rage or indignation are felt, but they might only find outlets through grinning, dancing, laughter, or tears. And they are certainly known to hold the power to turn a misguided life around.
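    For those who want the physics spelled out, the standard definitions behind this distinction are:

\[
P \;=\; \frac{dE}{dt} \;=\; \frac{W}{t},
\qquad W \;=\; F \cdot d
\]

The same power P can be delivered by a large force F over a short path, or by a modest force over a longer, cleverer one: strength is only one of power’s ingredients.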

Cognitive Behavioral Therapies
    The most significant and successful attempts at affective self-management as a form of mental-health therapy are called Cognitive Behavioral Therapies (CBTs). These are a collection of therapeutic techniques that try to interpose cognitive schemata into the space between stimulus and response. In some places, the cognitive schemata are called evaluative beliefs, which implies, as it should, that more than a simple intellect is at work here. The world that we respond to with our behavior is not the world itself as sensed, but the one that comes to us pre-interpreted by a vast array of cognitive processes, including our sensory illusions, inaccurate memories, cognitive errors, dysfunctional emotions, and maladaptive behavioral scripts, most of them largely unconscious. The term belief also implies a degree of entrenchment or resistance to change. CBT is dually oriented to both cognition and behavior. Successful therapy will rely on our ability to alter or adapt the way our reality is constructed in our minds, thus altering the perceived world to which we respond. As these systems of therapy develop, the focus on fine-tuning our rational thinking will likely broaden further to incorporate new understanding of what thinking really is, particularly its sensory, affective, and generally messier and bloodier neurological and neurochemical components.
    The CBT therapeutic process is oriented to achieving specific goals, and this is normally transparent to the patients, meaning that they are told how the process works and are required to participate or collaborate consciously. With the exception of a specific doctor- or therapist-to-patient relationship as the basis for guidance, the similarity to Buddhist practice is obvious, and shows strong Buddhist influence. The specific procedures will vary, with CBT being considerably more streamlined and abbreviated. Skipping some of the steps that the Buddhists still take isn’t necessarily an improvement, however. One version identifies seven steps to the process: 1) identifying thoughts, feelings and behaviors; 2) understanding the links between thoughts, feelings and behaviors; 3) making changes in behaviors or acquiring skills; 4) making changes in thoughts; 5) challenging our thoughts; 6) distancing or defusing from thoughts; and then 7) practice. There are other versions. In addiction therapy, the goal is to construct new schemata that override the dysfunctional target schemata, displacing these with greater salience, relevance, value, immediacy, and, if possible, pleasantness, at least in the long term. This will be more effective if rational reconstructions can be sensitized or alerted to the powers of all four of our categories of anticognitives, all seemingly in place to reinforce and defend a long-standing and entrenched set of evaluative beliefs.
    That said, purely cognitive CBT approaches can be expected to fail in this area of self-management for the same reason that courses in critical thinking fail. There is more to the mind than thought. We can’t sidestep the dimension and power of hot cognition. The limbic system has a say in these things. We won’t change that just by fiddling around in the prefrontal lobes with more muscular top-down commands. The limbic system must be convinced of the worth of any change, and this is done with new or revisited experience, not with our reason and orders from above. The cognitive reappraisal needed for most therapeutic improvement needs to reassign affective valences to triggers and related memories. Employing perspective shifting, rethinking, renaming, abstraction, distraction, distancing, suggestion, and other forms of cognitive flexibility can all help keep us from intensifying our connection to painful thoughts and memories. But unless useful connections are made to the emotional responses held in implicit memory, the changes may be superficial. We need to re-feel as well as rethink, and add that to remembered experience and even to implicit memories. It isn’t sufficient simply to feel less or nothing, instead of feeling poorly, as this will not correct what has gone before.

Reappraising and Revaluing Values
    With affective self-management comes the power to assign reconsidered relevance, significance, value, or meanings to our experience, things of our own choosing. But first we need to learn how to distance ourselves or turn aside from our present feelings. Robert Sapolsky (2017) writes that avoiding temptation is “usually more effective if doing that better thing has become so automatic that it isn’t hard. And it’s often easiest to avoid temptation with distraction and reappraisal than willpower.” Nietzsche wrote, “Looking aside: let that be my sole negation.” Distraction here is another form of reappraisal, as it must consider the temptation to be irrelevant or unworthy of attention or pursuit. Evaluative reappraisal of our choices or goals is the first significant key we have to true and authentic agency. Willpower without the affect that’s involved in evaluation is little more than illusion and lip service. Even where the choice is a trivial matter, our salience network relies on a sense of relevance to direct any of our attention.
    In modern culture, we are fairly continuously bombarded with applied valuations. Wherever ideology, economics, politics, religion, or advertising are involved, this applied valuation is by design, and wholly external to the individual, who must be persuaded to adopt it. Even the most ordinary mind isn’t entirely passive or helpless here. We can learn that we don’t like things as much as we were promised we would, or that that product didn’t fill the big hole in our being that was created by its advertising. But most of us don’t seem to put repeated lessons together into a plan to resist such attempts at persuasion. We just run out and buy the next big thing or idea, or the new and improved version of the thing that just disappointed us. Having such a plan is a first assertion of dignity and sovereignty, but it has a great deal to overcome: cultural conditioning, carefully cultivated insecurities, and the needs for things that promise to quiet them. Still, it makes a beginning. We dangles a carrot in front of our nose, then we goes wherever the carrot goes. But it’s only the first step in true agency to say that a thing may be worth more or less than advertised.
    The second step is more challenging still. Nietzsche termed it revaluation of values, or transvaluation. This isn’t just taking charge of the values given to things and adjusting them up or down. It’s the power to say that this value itself is worthless, that even to claim that this value was an important part of existence was a fabrication, or a failure. This doesn’t just remeasure things, it messes with the measurements and metrics themselves. It’s outside-the-box evaluation, and it might describe things with entirely different lexicons, or inhabit an alternate universe of discourse. Affective self-management will be essential to overall self-control and its other components of self-awareness, self-restraint, self-efficacy, self-motivation, and self-directed behavior. The real key to this is in whatever capacity we can learn to choose what values we value, how we can choose to weigh one value against another, and ultimately, and this one takes time, to choose our feelings themselves according to their value. This part requires some practice and patience with progress. It requires reassessing memories and their emotions as and when and after we pull them up, and before we put them away again. We can do nothing with these when they’re filed away, and nothing as they arise unbidden as reactions to stimuli and triggers. This is one of the correct conclusions drawn from the infamous Libet experiment, which has also led to incorrect conclusions about agency.
    The neocortex gets involved with our social behavior, long-term planning, deferred gratification, behavioral inhibition, and deconditioning, and it’s also involved with rationalizing or justifying decisions already made by the more primitive parts of the brain. There are emotional rewards that accompany its successful operation, or its avoidance of maladaptive options, even though these emotions may be mild and much of the work is rational. But it just isn’t quick enough to change reactions or responses as they are emerging from below the conscious threshold. Changing what emerges next time requires changing what gets put away this time, in recursive loops, in and out of awareness, and extended over time. The erroneous Libet conclusions look at time periods only a few hundred milliseconds long, not the hours or years it might take to exercise true self-control and agency. Affective self-control is circuitous rather than top-down like rational control, and it involves altering memory through reiteration of recalled states in different and hopefully better affective states of mind. This takes dedicated time and effort, and emotional retraining wants overlearning so that healthier responses will be closer to the surface.
   Exposure therapy is used clinically to extinguish hyperreactive emotions in disorders like phobias, anxiety, and drug dependence. The extinction of fear is a major theme in reappraisal. Richard Brodie notes, “Because evolution favored safety, we have a lot more fear than we need.” A few fears are innate. Many can be the result of single-trial learning, retained as flashbulb memories. They can manifest as phobias, anxieties, and the many symptoms of PTSD. Fear extinction or unlearning is usually practiced through graduated, unreinforced exposure, a gradual decoupling of the stimulus from the response, and the memory from its associated emotion. A powerful positive experience may be better than many neutral ones, as the memory isn’t merely shifted into the background to be gradually forgotten. Vipassana bhavana, Zazen, and other forms of meditation, self-hypnosis, hypnotic age regression, and breathing exercises (including modified forms of the generally discredited rebirthing-breathwork) can be used to bring subliminal emotions to the surface for review. Repressed, suppressed, and dissociated memories may not always be reliable, but their emotional expressions won’t lie about themselves. Ideomotor responses can be readable cues to real-time progress if verbal reports aren’t possible. But we ought to be aware that some of these methods can add false or confabulated memories, particularly if prompts are given.

Resentment and Neuroplasticity
    Resentment is re-sentiment, to feel the same thing over and over again. A resentment is an affect that we hammer ourselves with repeatedly. Each time we experience it in that way, the affect attached to the memory strengthens its connections. Ill will cycles in loops and viscous [sic] circles, growing more rutted or entrenched with each cycle, while recall grows more asymmetric to favor the unpleasant, or a sense of loss. Memory isn’t just about conceptual frames and schemas. The associations that memories are tied to include the emotions that are felt at the time, as well as sensations, perceptions, similar experiences, and lexemic tags. Among these associations, it’s important to remember that an emotionally charged memory initially occurred in a specific context. If a specific resentment is to be effectively managed, it should be remembered first within this context, with a focus on that rather than the emotion itself. This helps us to recruit higher-order brain functions into the process.
    Remembering feelings and emotions initially follows what’s called the peak-end rule, the tendency to remember an experience by the average of how it felt at its most intense moment and at its end (the dismount, if that’s scored), while neglecting its duration and its average quality overall. However, the affect in memory is plastic as well. Re-sentiment is a cognitive loop that takes neuroplasticity in the wrong direction. And it’s extremely costly to the brain. When we revisit a memory, we add our current emotional or feeling tones to it before we put it away again. Memory is dynamic. When we bring up a bad experience, only to feel it the same way all over again, we add to its intensity with our re-sentiment. Back it goes into the subconscious, to continue eating at us, with sharper teeth or stronger jaws. But when we bring some traumatic memory up in a more elevated state, as one of new light, or tolerance, or understanding, or perspective shifting, or forgiveness, we alter its charge and the hold it has on us.
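    A minimal sketch of the rule, assuming signed moment-by-moment ratings where intensity is the absolute value:

    # Peak-end rule: remembered affect approximates the average of the most
    # intense moment and the final one, neglecting duration and the mean.
    def remembered_affect(samples):
        peak = max(samples, key=abs)    # most intense moment, good or bad
        end = samples[-1]               # the dismount
        return (peak + end) / 2

    day = [2, 1, -6, -1, 3]             # one rough trough, pleasant evening
    print(remembered_affect(day))       # (-6 + 3) / 2 = -1.5
    print(sum(day) / len(day))          # the lived average was only -0.2

One intense trough dominates the memory, even though the day as lived was nearly neutral.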
    We may also have deep, personal, emotional issues that we are told must be resolved before we can move on to perfecting ourselves. Some of these issues may promise to take years to resolve, and involve expensive therapies. Perhaps we must return to correct our childhoods. We have things repressed and suppressed down in there and need to release that pressure, perhaps with catharsis. This may be pursuant to the hydraulic fallacy. But in addition to the option of going back or deep within, knocking on doors in our old neighborhoods with apologies and forgivenesses, amends and amendments, and eventually resolving these back issues, we also have the option of using neuroplasticity to revalue these issues as unimportant relative to the other things we have to do in life, stripping them of some or most of their emotive force, and deciding to move on instead, spending our efforts on more productive and forward-looking personal projects. This may require us to proceed as imperfect beings, but we have still set down some of the burdens of our resentments.
    Forgiveness isn’t forgetting, it’s simply a group of ways to disconnect the affect from a memory and lighten the affective load. We can decide to either forgive the offenders or harbor a grudge toward them. With forgiveness, the nastiness of a memory can be almost fully overwritten, over time. This has been an explicit practice in Buddhism since the beginning, especially the practices of vipassana bhavana and Zazen meditation, where thoughts and memories are allowed to come and go in an atmosphere of equilibration and serenity, to be re-experienced, on purpose, in a higher frame of mind. We don’t try to put bad memories behind us, or run away from them, or deny the importance they have had for us. Emiliano Ricciardi (2013), commenting on the neuronal basis of reappraisal-driven forgiveness, notes, “Granting forgiveness was associated with activations in a brain network involved in theory of mind, empathy, and the regulation of affect through cognition, which comprised the precuneus, right inferior parietal regions, and the dorsolateral prefrontal cortex.” The dlPFC “is part of a cognitive circuit of top-down control that mediates the volitional suppression of negative affect.” This was in studying cognitive reappraisal, however, such as rethinking an aggressor’s motives, or finding nobler explanations, or reevaluating the experience as less detrimental. This study does credit “a benevolent, positive evaluation of meaning” as modifying emotional charges, suggesting that more positive affect needs to be recruited. It’s still safe to say that reason doesn’t rewire the limbic system. These researchers weren’t experimenting with more affective strategies like distancing, or washing the memory clean of resentments with strong, positive feelings during transcendent states or ayahuasca journeys.
    Affective reappraisal is why elucidogens are so effective in therapy. You dig stuff up and clean it before you put it back. I don’t think we’re going to properly address the therapeutic role of psychedelics until we look hard at their effect in reevaluating the emotional associations to specific traumatic memories. Memories recalled in these altered, positive states will pick up new affective associations made in that state, modifying the memory, and in highly pleasant or positive states, upgrading it. We can add forgiveness to the bitterness, understanding to the resentment, growth to the setback, so when it goes to sleep again, it goes back altered, and usually improved, just a bit less apt to gnaw at us subconsciously.

Classifying Emotions
    Lisa Feldman Barrett, in How Emotions are Made, writes, “Your brain’s most important job is not thinking or feeling or even seeing, but keeping your body alive and well so that you survive and thrive, and eventually reproduce. How is your brain to do this? Like a sophisticated fortune-teller, your brain constantly predicts. Its predictions ultimately become the emotions you experience and the expressions you perceive in other people.” And further, “Emotional intelligence, therefore, requires a brain that can use prediction to manufacture a large, flexible array of different emotions. If you’re in a tricky situation that has called for emotion in the past, your brain will oblige by constructing the emotion that works best. You will be more effective if your brain has many options to choose from.” This is an argument in support of a better articulation of emotional states that she calls emotional granularity. This might be too hastily dismissed by some as over-analyzing our sacred feelings. There are cases where naming things might steal their power or spirit. The name is, after all, just what a sorcerer needs to gain command of the demons. There really is something to this. A name is a handle in semantic or declarative memory: it’s another way to get a grip on a memory. Getting a grip can at times be an exact synonym for affective self-management.
    The most commonly recognized set of categories of emotion, as proposed by Paul Ekman, sorts them into six basic groups: happiness, sadness, anger, surprise, fear, and disgust. The basic taxonomy of emotions is by no means settled, and at best, these six can be said to be the six most recognizable states as seen from the outside, shown on the human face. As such, these should play a major role in social communication, and perhaps in contagion via mirror neuron networks. As established as this is, however, it’s still missing important readable states like interest, and it certainly doesn’t cover the whole territory of basic internal states like frustration and anxiety. Robert Plutchik later developed a wheel of emotions, with eight primary emotions, grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus disgust; and surprise versus anticipation. Each of these eight represents a scale of intensity with a midpoint, with joy between ecstasy and serenity, sadness between grief and pensiveness, anger between rage and annoyance, fear between terror and apprehension, trust between admiration and acceptance, disgust between loathing and boredom, surprise between amazement and distraction, and anticipation between vigilance and interest.
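    The wheel reduces naturally to a small lookup table. A sketch in Python, encoding only what is described above:

    # Plutchik's eight primaries: each an intensity scale with the primary
    # at the midpoint, paired with its opposite.
    PLUTCHIK = {
        # primary: (high intensity, low intensity, opposite)
        "joy":          ("ecstasy", "serenity", "sadness"),
        "sadness":      ("grief", "pensiveness", "joy"),
        "anger":        ("rage", "annoyance", "fear"),
        "fear":         ("terror", "apprehension", "anger"),
        "trust":        ("admiration", "acceptance", "disgust"),
        "disgust":      ("loathing", "boredom", "trust"),
        "surprise":     ("amazement", "distraction", "anticipation"),
        "anticipation": ("vigilance", "interest", "surprise"),
    }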
    Some have taken the so-called infant emotions, together with those arising in earliest childhood development, as forming the foundation and skeleton of the affective structures in adults. Present at birth, or in the first two months (per these constructions, or an amalgam thereof), are contentment, disgust, distress, interest, and frustration. We also have the startle reaction (Moro reflex) that anticipates fear. By six months, we have usually experienced anger, fear, happiness or joy, humor (or funniness), sadness, and surprise.
    Categories might be established around any number of dichotomies. Some emotions are prospective, like hope and fear, others retrospective, like shame and regret. Some concern mainly ourselves, like pride and insecurity, others are social, like anger and admiration. Some emotions can be domain general, like happiness and frustration, and others, domain specific, like indignation and lovingkindness.
    We have something of a linguistic habit of classifying emotions in opposite pairs, most notably love-hate, pleasure-pain, praise-blame, fame-disgrace, and gain-loss. But this makes almost no sense whatsoever neurologically or neurochemically. There is, however, a binary approach to each individual affective state, whose valences can at least be dichotomized. Being moved by an experience may be in any number of directions: to move away from that state, to move farther into it, to move to one or another dissonant side, to make the affect stay or go, to prolong it or shorten it, to strengthen or weaken it, to value it or dismiss it. Subjectively, these may be felt as pleasure-displeasure, approval-disapproval, and like-dislike. But love and hate are not opposites. Indifference or apathy is the opposite of both.
    Affective neuroscience approaches this study by way of the neurological and glandular underpinnings of affect, emotions, feelings, and moods. The discipline has grown into a much-needed counterweight and complement to cognitive neuroscience, which seems to want to avoid the messiness of our cognitive juices. We are still some distance away from integrating the two. The affective systems involved are still too complex for a comprehensive theory. Jaak Panksepp has noted evidence for seven primary-process affective networks in the brain’s subcortical regions: seeking, rage, fear, lust, care, grief, and play. Panksepp (1998) sees the core function of emotional systems in providing “efficient ways to guide and sustain behavior patterns, as well as to mediate certain types of learning.” We should also point out that there are emotional reactions sourced outside the brain as well, in the more distributed endocrine system. Affective neuroscience will seek to identify specific types of affect by their geographical pathways through the brain. It’s unknown how articulate this will become, or when it will be able to distinguish between guilt, embarrassment, and shame as precisely as language and introspection can. fMRI studies may one day give us reliable mapping of generalized areas and connectivities in the brain for specific emotions and states. Most studies to date just concern the locus of triggers and source regions for some states (such as the amygdala for fear, or the anterior insula for disgust), but the emotions themselves move quickly to span much larger neural networks.
    Hugo Lövheim’s 2012 cube of emotion visualizes eight basic emotions as expressions of the absence or presence of three neurochemicals: dopamine, noradrenaline, and serotonin. This set of eight includes shame or humiliation, contempt or disgust, fear or terror, enjoyment or joy, distress or anguish, surprise or startle, anger or rage, and interest or excitement. But this model doesn’t account for hormones like cortisol, peptides like beta-endorphin and oxytocin, or the brain’s many other neurotransmitters and neuromodulators.
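    Because the model is literally a cube, it reduces to a three-bit lookup. A sketch; the corner assignments follow the commonly cited table, but treat them as illustrative and check Lövheim (2012) before relying on them:

    # Lövheim's cube: presence (True) or absence (False) of
    # (serotonin, dopamine, noradrenaline) at each of eight corners.
    LOVHEIM = {
        (False, False, False): "shame/humiliation",
        (False, False, True):  "distress/anguish",
        (False, True,  False): "fear/terror",
        (False, True,  True):  "anger/rage",
        (True,  False, False): "contempt/disgust",
        (True,  False, True):  "surprise/startle",
        (True,  True,  False): "enjoyment/joy",
        (True,  True,  True):  "interest/excitement",
    }
    print(LOVHEIM[(True, True, True)])   # interest/excitement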
    The more recent approaches to emotional classification in psychology are multi-dimensional, but are usually simplified to only two axes. One system, proposed by Lisa Feldman Barrett, called the theory of constructed emotion, uses the axes of valence and arousal to describe a “core affect,” and locates emotions along one scale from low activation to high (intensity) and another from unpleasant to pleasant. We can even add a third axis to this, along a line running from subject passivity to assertiveness or dominance. This is like a black and white picture of a state, more of a measurement, with little to offer the poet in terms of color, intention, or subject matter. The most basic axis is the general hedonic tone, which the Buddha called vedana, the reaction to contact whereby one is moved to either approach or withdraw. This attraction vs aversion choice was likely the first decision that ever needed to be made by the first single-celled organisms.
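    As a data structure, such a core affect is just a point in two or three dimensions. A sketch, with the numeric ranges being our own assumptions for illustration, not canonical scales:

    # Dimensional core affect: valence, arousal, and the optional third
    # axis of dominance mentioned above. Ranges here are assumed.
    from dataclasses import dataclass

    @dataclass
    class CoreAffect:
        valence: float    # -1 unpleasant .. +1 pleasant
        arousal: float    #  0 deactivated .. 1 highly activated
        dominance: float  # -1 passive .. +1 assertive

    serenity = CoreAffect(valence=0.7, arousal=0.1, dominance=0.2)
    rage     = CoreAffect(valence=-0.8, arousal=0.95, dominance=0.8)

As the text says, this is a measurement rather than a portrait: two quite different states can land on neighboring points.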
    Sorting emotions by positive versus negative is usually straightforward, but it can sometimes be problematic. Where a surprise may be generally positive, some people don’t like it one tiny bit. Anticipation can be literally dreadful. The religious can mistake smugness for serenity. Some seem to enjoy anger or indignation for the juices these will get flowing. Humor can be spiteful in schadenfreude. Surrender can offer challenges as well, if we don’t take care to distinguish letting go of our burdens from letting go of our hopes for a victory. On the whole, these occasional ambiguities and ambivalences needn’t interfere with sorting pleasant from unpleasant, and we can always articulate the exceptions. Sadness, however, can present a special problem. In some cultures, sadness is a treasured and even sacred state of mind, as we see in the Japanese wabi-sabi, and the touching Portuguese word saudade. Winfried Menninghaus, with the Max Planck Institute, speaking of the use of sadness in art, writes, “negative feelings are particularly effective in capturing our attention, are experienced especially intensely, and enjoy privileged access to memory.” There is an aspect of this being a sort of advance training for the first-hand experience. We can sample feelings in movies that aren’t our own, with only limited suffering (except maybe trying to watch Ishtar). There is a value to affective texture in memory, or to having many flavors in the pot. We like richness and complexity of feeling, even if this means unpleasantness in varying doses. We like to integrate this complexity for the sense that we are sampling the whole spectrum of life. We needn’t get platitudinous and claim that balance requires us to be unhappy as much as we are otherwise. That is more a function of our skill at affective self-management. With sadness, some of the problem is also linguistic: some forms we seek, and some forms we seldom if ever do. The sorting done here therefore moves the poignancies that we most like to dwell on into a positive category of pathos.
    The above reductive attempts to organize emotions geometrically might have their merits, of sorts, and they do hold a fascination for us thinkers and thoughtful types. They also make it easier to conduct self-reported surveys of affective states. But we get the feeling that something important is missing, namely, the enormous variety and texture of the emotions and affective states that we have in real life, Barrett’s emotional granularity, the nearly infinite possible flavors of our neurochemical soups. We set the reductive attempts aside for a moment, without rejecting them, and explore another approach. I tried my own exercise in building a new set of categories here, inductively, or from the ground up. I began with a big random pile of hundreds of emotions and affective states, and one at a time, started sorting them into groups. When one came up that had no group to go into, another group was started. No geometry was used, and at no time was the number of groups counted. The groups simply expanded until everybody had a comfy home. (A sketch of this sorting procedure, for the programmatically inclined, follows the two lists below.) Eventually two kinds of piles began to appear: those states that encouraged us to approach something or go further in, and states that suggested we avoid something and get further extricated or gone. Within that was a strong suggestion of another potential sort, between how we feel about ourselves and how we feel about or among others. I didn’t take the exercise that far. The two groups of categories were these, with each category holding a cluster of states specifically named in Chapter 3.3:

Affect Suggesting Approach:
Affection-Affinity, Agency, Anticipation, Appreciation, Commitment, Confidence-Courage, Connectedness, Contentment, Desire, Empathy-Supportiveness, Engagement, Equanimity, Exuberance, Friendship, Happiness, Interest, Love, Pathos, Patience, Playfulness-Humor, Relief, Security-Trust, Surprise

Affect Suggesting Avoidance:
Anger, Anguish, Anxiety, Condescension, Confusion, Cruelty, Defensiveness, Depression, Disappointment, Disconnection, Disgust, Distress, Displeasure, Distrust, Envy, Exhaustion, Failure, Fear, Frustration, Hurt, Insecurity, Irritation, Loneliness, Loss, Mania, Masochism, Neediness, Reticence, Sadness, Shame, Shock, Surrender
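    The sorting procedure described above amounts to a simple one-pass, leader-style clustering. A minimal sketch in Python; the similarity test and any word list passed in are placeholders, not the author’s:

    # Sort items into groups: join the first group that fits, else found
    # a new one. No geometry, no preset number of groups.
    def sort_into_groups(items, belongs_with):
        """belongs_with(item, group) decides whether item fits a group."""
        groups = []
        for item in items:
            for group in groups:
                if belongs_with(item, group):
                    group.append(item)
                    break
            else:
                groups.append([item])   # no comfy home yet: start one
        return groups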

    The full exercise is given as Chapter 3.3, Emotions and Affective States. It’s included there in full because a conscientious and articulate look at these human conditions of ours, which are all common enough to have their own names, contributes to our affective self-management. It helps with our own emotional literacy to know the names of our feelings, because these give us greater access to them. Once again, when the sorcerer gets hold of the name of his demons, and spells their names correctly, he can make the damned things run errands for him. We can look at each of these individual states and remember when we felt it, or know that’s one we haven’t collected yet. We can weigh, in remembering, what value that feeling had for us at the time, or how it sent us off in some wrong direction. We can also ask how the pleasant feelings might mislead us, and how the unpleasant ones might actually help to redirect us.
    The challenging emotions pertinent to the situational domain have already been discussed in that chapter. These included anxiety, cognitive dissonance, disappointment, fear, frustration, insecurity, loss of confidence, resistance, stress, superstition, uncertainty, and vigilance. There were also feelings of clumsiness, failure, impotence, inadequacy, incompetence, and stupidity. The go-to maladaptive responses to these tend to be found listed among coping strategies and cognitive biases. They also tend to be explained or rationalized with defense mechanisms and played upon with logical fallacies.
   The specific anticognitives pertaining to the emotional domain are given in Chapter 3.4, Cognitive Biases; Chapter 3.5, Coping Strategies; Chapter 3.6, Defense Mechanisms; and Chapter 3.7, Logical Fallacies.



2.6 - Personal Domain

Intrapersonal Intelligence, Emergent Selfhood, Anatta or No Self,
The West Plays Catch Up, Self-Schemas and Scripts, Shifting Identity,
Ego Defense Made Easy, Integrity and Character

“Honest criticism is hard to take, particularly from a relative, a friend, an acquaintance, or a stranger.” Franklin P. Jones

Intrapersonal Intelligence
    In this domain we’re concerned with what we sense, perceive, feel, and think of ourselves. This is Gardner’s Intrapersonal Intelligence, having to do “with introspective and self-reflective capacities. This refers to having a deep understanding of the self; what one’s strengths or weaknesses are, what makes one unique, being able to predict one’s own reactions or emotions.” Of course, much of this domain is built up around the feedback that we get from activities in other domains, especially the social and cultural, to be discussed later. That means we’ll be talking about ego. In the intrapersonal domain, activities occur either within the hairy fleshbag that contains us, or the extensions that we’ve added in the things we own or believe in. In the flesh, we are just ourcells, even though the majority of our cells are in the three pounds of non-human microorganisms we carry around. This is the world of the self, whatever that means, and what that means is where we have to start. Gnothi seauton, know thyself, was one of the maxims found at the entry to Apollo’s temple at Delphi. We’re not sure where it dwelt before that. Per Socrates, the unexamined life is not worth living. By this we learn our strengths and weaknesses, and learn to anticipate our own reactions and emotions as a first step towards their management.
    There are diminishing returns to be had in maintaining a self, particularly in its micromanagement. Self-study will carry the hazard known as hyper-reflection. This might also be called psych-major’s disorder, wherein students might briefly catch each of the abnormal psych conditions they are studying as they look for the symptoms in their autobiographical memory. It’s the Barnum effect. Lives thoroughly lived in general will sample most of the symptoms at least once. Another important aspect of hyper-reflection is an excess of self-consciousness, sometimes called Humphrey’s Law and best described by the poem “The Centipede’s Dilemma,” by anonymous:
A centipede was happy quite, until a frog in fun said:
‘Pray tell which leg comes after which?’
This raised her mind to such a pitch,
She lay distracted in a ditch, considering how to run.
    There is much to be said, particularly by Daojia and Zen masters, about letting the self just be, like an unworked piece of wood, or like water that simply responds to its place, moving zìrán, just so of itself, or spontaneously. This is related to wúwéi, not doing or acting, especially when wéi is regarded as acting in the thespian sense, according to script, a per-formance, or acting through a form. Although this Zen way of being is called chūxīn, beginner’s mind, or original intention, it’s really the idealized end state of a great deal of practice. Much unlearning or de-conditioning precedes it.
    Our persona or personality is a mask or facade, the part of ourselves with which we approach the social world. This is still personal, descriptive of what’s within us, even if it’s superficial and barely within us. A mask has two sides, and the side that faces ourselves, including what we sense, perceive, feel, and think of ourselves, we can call the ego. It’s here that we react to the world reacting to our persona, with its flattery or censure. It’s here that we assess ourselves in comparison to others, beginning in mid-childhood and continuing far too long. It’s also here that we seek to maintain something analogous to a positive affective pressure, or an optimum amount of self-inflation that we call our self-esteem. Further in, and a bit less chained to the outer world, is the private self, with its numerous dimensions. The most important of these dimensions is the autobiographical memory that gives us our sense of continuity, and the raw material for the narratives we spin in telling ourselves who we are. The most vivid of these autobiographical memories will tend to be linked to our most exceptional emotional events. Looking back across this paragraph, note how necessary it is to use metaphor in describing the inner world. This is about all we can do without invoking neuroanatomy. The names that we have for our emotions represent a finer articulation of our inner world than the names that we have for the inner structures of that world, even though their overall taxonomy is still up for grabs. Various attempts have been made to chart the dimensions of the personality. Among the best known is the Five Factor Model or Big Five personality traits: 1) openness, a spectrum from curious to cautious; 2) conscientiousness, from diligent to careless; 3) extraversion, from outgoing to reserved; 4) agreeableness, from affable to nasty; and 5) neuroticism, from nervous to secure.
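    Encoded as data, the model is just five named spectra; a sketch, restating only the poles given above:

    # The Big Five as (one pole, the other pole) pairs.
    BIG_FIVE = {
        "openness":          ("curious", "cautious"),
        "conscientiousness": ("diligent", "careless"),
        "extraversion":      ("outgoing", "reserved"),
        "agreeableness":     ("affable", "nasty"),
        "neuroticism":       ("nervous", "secure"),
    }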

Emergent Selfhood
    Self can be regarded as a structure, regardless of its level of ontological reality, or the speed with which it changes, alters its components, or adapts to circumstance. I would have to go farther than Daniel Dennett, who asserts that “the self is simply a ‘centre of narrative gravity,’ an empirical account of the self,” and disagree with his dismissal of emergence as irrelevant to the development of self. It smacks of behaviorism and the denial of something irrefutably obvious, even though we can still agree that subjectivity isn’t a physical thing. Qualia are not required to be physical things in order to be regarded as real, any more than processes are. “Experiences and feelings have irreducibly subjective, non-physical qualities” (SEP). They are simply “the introspectively accessible, phenomenal aspects of our mental lives,” and unavailable to a second person except by description and inference. They are also the most naive parts of naive realism. Qualia do not entail a violation of Occam’s Razor, like Cartesian dualism does, by positing a metaphysical or religious dimension to house them. Qualia can be substantive without having to be substances. When our experience of self has any kind of agentic effect on the world around us, some version of reality must be accounted for. The moment we see an emergent property supervening or having effects on the forces out of which it arose, we owe its reality a second look. This is not the same as granting it a fundamental, substantial, or eternal existence, however. We can still have a temporary, sometimes useful autobiographical self: it just isn’t going to heaven or hell, or back into meat when it’s done.
    Some of us are ready to accept, until further notice, that there is no real or independent thing in consciousness, that it’s not an objective sort of thing. Or technically, it’s the opposite: a process, a verb instead of a noun, and in this sense, it belongs in a different world, perhaps a different dimension. And it may represent the whole of that world or dimension, without extending far forward or backward in time, and importantly, not existing at the beginning of all things, not a member of the founding forces of the universe. As such, it’s likely not transferable. Others are still arguing for pan-psychism, almost always these days using the word quantum, but without the accompanying math. We need not abandon emergence to appreciate Dennett’s “multiple drafts” model of consciousness, and its companion assertion that there exists no Cartesian theater wherein the plays are played out by dramatis personae as “a bundle of semi-independent agencies.” Except as mere qualia. We can still call the experience qualia and assign it a non-noun-like reality in proportion to any effect the experience may have in and upon our lives. It’s probably safe to say, though, that consciousness is not the self, but merely part of the experience or sensation of self. Antonio Damasio offers, “In parallel with engendering mental patterns for an object, the brain also engenders a sense of self in the act of knowing… . The presence of you is the feeling of what happens when your being is modified by the acts of apprehending something … . Consciousness … is the unified pattern that brings together the object and the self.”
    Our thoughts, stories, and emotional dramas lack the substance we tend to ascribe to them. They too are qualia. The fact that they are convincing to us is no proof of their existence, let alone their truth. By stripping them of this illusion of substance, we may regard them as light or massless enough to be shifted around, according to the work that we want them to do. The sense of self is also a thought, a story, an emotional drama, subject to refinement for any number of practical purposes, provided the subconscious or the limbic system can be convinced, or at least convinced that maintaining a white-knuckle grip on our insubstantial psychic possessions is unnecessary. Per Philip Gerrans, “the self itself does not exist as a persistent entity, but is a fundamental cognitive strategy.”

Anatta or No Self
    From Greece to Asia, from around 25 centuries ago, Vedic philosophers were teaching that the true inner self was the Atta (Pali) or Atman (Sanskrit). This was a spark of the divine that lived within us, separated from the greater divinity as if in a playful game of hide and seek, with the purpose of personal evolution and eventual reunion with the divine, known as Brahman. The Atta would keep reincarnating (etymologically “going back into meat”) until this wisdom was gained. The body, the meat, was only the vehicle for the spirit. This duality was to persist and permeate Western cultural belief to the present and beyond, with one big, early exception, and some doubtful Greeks as well.
    Buddha objected. In describing the first of his Four Noble Truths (poorly understood as “existence is suffering”) he said that our existence is dukkha, better understood as unsatisfactoriness, or frustration, than as suffering. Any living and sentient thing, regarding itself as independent of the larger reality, was faced with a lifetime’s succession of needs and wants that would never be fully or lastingly satisfied. He elaborated this into a triptych, adding two more “marks of existence.” The second mark is anicca: impermanence or transitoriness is the rule, echoing Heraclitus with his panta rhei, all is in flux. Third, he proclaimed anatta, that there is no such thing as this atta, this undying, reincarnating self. Existence overall is impersonal. Resistance to this idea, and the squirrelly mental machinations people use to get around it, is seen even in some literal-minded Theravada Buddhists. Nobody healthy wants to die. But the Buddha did NOT teach reincarnation, he taught rebirth. There is a temporary, conditioned process that we can regard as a self. Self is a sort of place where five skeins or threads, component factors or dimensions of our existence, get tangled up for a while. A self emerges or arises out of this tangle of prior conditions. These components will provide the necessary conditions for this new spirit-seeming or soul-like process to emerge into our awareness, just as heat, oxygen and fuel provide the necessary conditions for flames to exist. Self is an emergent, temporary, and ever-shifting construct, a subjective sense of reality. It dis-integrates when its components disentangle.
     There is a real I, but it isn’t real in the sense that this body is real, nor in the sense in which the religious soul is postulated. The self disappears at death, although the not-self strands may continue. When a new self re-arises or re-emerges, it might even pick up “memories” of lives that went before. For the skeptic, this might simply be from accounts left in the culture, the lasting effects of previous kamma or karma, properly understood as non-moralistic cause and effect. But morally speaking, this does at least mean that any good we are able to do lives on, as does the bad, and this has more meaning for the grateful than it does for the ingrates. What appears to be a continuing self is a compounded thing, or much more correctly, a process. It has a conventional and experienced reality, but not a fundamental one. There is no doer apart from things getting done, no thinker thinking. What we sentient beings sense, perceive, feel, or think of as such an identity is a process that emerges out of the interplay of the component processes that condition or form us. There is no spiritual or mental homunculus that dwells in the human brain or heart, operating the meat machine. Yet we seem to betray our illusions a little every time we say “my spirit” or “my soul.” If this spirit or soul is who we really are, then why are we making our inmost being an extraneous possession like this? Shouldn’t the first person be the spirit itself? Or are we admitting that we are living our lives at some distance from our real nature? If this were a mere trap of language, why have we not rebelled against this and created a popular grammatical form for the real me and you?
    Rebirth is explained in a traditional way by Peter Santina in The Tree of Enlightenment: “When we light one candle from another candle, no substance or soul travels from one to the other, even though the first is the cause of the second; when one billiard ball strikes another, there is a continuity - the energy and direction of the first ball is imparted to the second. The first ball is the cause of the second billiard ball moving in a particular direction and at a particular speed, but it is not the same ball.” The new flame is now the same process and uses the same kind of fuel, and oxygen from the same room, and heat from the old flame. There is still a continuity there, in the actions of the transference, in the starting of the fire, and in the manufacture of the candles. Reincarnation will often be used (or abused) to rationalize the injustices of mortal life, why bad things happen to good people, or good things to bad, or why events in life appear random when somebody is trying to tell you instead that there are rules that ought to be followed. But the fact that all things will ultimately have causes does not mean that all things happen for reasons, or are unfolding according to some law or plan. It’s perhaps a lot more sane to admit that not everything happens to us by means of some moral law. Good or ethical behavior increases our odds of living a better life, this we can see, but, like Zhuangzi said, “perfect sincerity offers no guarantee.” Since rebirth happens, and at least life itself will keep returning, we do the future beings a big favor by practicing good karma and leaving a better world instead of a diminished or ruined one.
    The self isn’t precisely an illusion in Buddhism, as it is in the Maya and Samsara concepts of Hinduism. It’s a convention. It’s not completely unreal, it just isn’t what we’d like to think it is, and it certainly isn’t going to last. It’s a sense of something real, but it’s distorted. This conventional self can’t exist without any of its components, particularly the body. Neither is the world itself an illusion. The world of Samsara is as real as Nibbana, and not just a bad dream. Nibbana and Samsara ultimately refer to the same world, the real world, just experienced differently. What’s unreal is the world as we think, feel, and perceive it to be. If you have tried to imagine a world that’s stripped of our organic sensations like sight and touch, perhaps as a vast, moving field of full-spectrum energy, in varying densities, streaming through time, always changing, with countless nodes or pockets of self-organizing energy feeding on energy gradients, you likely have at least a closer picture of reality than the one our senses give us, even though the best you can do is still laden with sensory and cognitive metaphors.
    We hold beliefs about what we are, and the nature of the world that we live in, that turn us into whining and ineffective participants, obsessing on this or that, throwing our lives away for things we are only told that we need. Yet we are also able to hold views that include a self that sits near the center of our world and is able to correct most of these difficulties. The Buddha referred to himself in the first person. He recognized that the sentient beings who came to him were people, who had boundaries. Self is formed from our experiences in the world. Humans are genetically evolved to make and use these sorts of constructs. They have uses, and these allowed our progenitors to survive and breed our ancestors. But a self doesn’t come into the world in order to collect experiences. It’s the experiences that give rise to the self. As Dogen put it, “To carry the self forward and realize the ten thousand dharmas is delusion. That the ten thousand dharmas advance and realize the self is enlightenment” (Little-d dhamma or dharma refers to any object that can be grasped by the mind, including beings). Buddha “found that, when the inner world is studied closely, all that can be found is a constantly changing flow and what is taken for an intrinsic self or soul is just the sum of certain factors of the mind that are all impermanent and in constant flux. He also found that attachment to any of these impermanent factors inevitably leads to [unsatisfactoriness], so the way to internal freedom and happiness that the Buddha advocated was to learn to accept and live in the face of impermanence without clinging to anything” (Fredrik Falkenstrom, 2003).

The West Plays Catch Up
       Much of the effort spent in a human life is investment in the continuity and integrity of our perception of a fundamental self. There are investments in finding it, keeping it going, keeping it the same, keeping it protected from challenging information, keeping it from feeling wrong or ashamed, and maintaining its sense of sovereignty or independence. Now Buddha suggests that it may not be desirable at all for us to protect this fundamental self from change, or even from eventual dissolution, especially dissolution into wiser or more awakened ways of seeing things. The fundamental self is little more than a mental image formed in a stream of mental experience on attending a stream of physical experience. It’s one that costs a great deal of energy to maintain. If we were to recognize our sense of being a fundamental self as no more than a constructed mental image, perhaps given to us by millions of years of evolution to perform specific cognitive tasks, and admittedly useful in addressing many of our various physical and social needs, we could still make use of it in conventional ways to perform whatever functions it does best. Also, to recognize it as a construct would help set us free to do some useful re-construction. We could then free ourselves from being its slave or servant, and begin to adopt new notions of who we really are that lead us into less trouble. We could then begin to get over ourselves.
    Playing catch up, David Hume asserted that there could be no self without a perception, just as there is no consciousness without an object. Nietzsche pointed out some of the linguistic contributions to the sense of self in his final notes, compiled as The Will to Power: “We set up a word at the point at which our ignorance begins, at which we can see no further, e.g. the words ‘I’ ‘do’ ‘suffer’… Psychological history of the concept ‘subject.’ The body, the thing, the whole construed by the eye, awaken the distinction between a deed and a doer; the doer, the cause of the deed, conceived ever more subtly, finally left behind the ‘subject.’ Our bad habit of taking a mnemonic, an abbreviated formula, to be an entity, finally as a cause, e.g., saying of lightning ‘it flashes.’ Or the little word ‘I.’ To make a kind of perspective in seeing the cause of seeing: this was what happened in the invention of the subject, the I.” William James, at about the same time, distinguished between the material self, the social self, and the spiritual self.
    Given the above, it isn’t really surprising that the less religious forms of Buddhism have found something of a home among a growing number of scientists, and particularly neuroscientists, despite the system having derived from introspection and self-report. You’ve gotta admit that Buddha was one helluva introspectator. There are a number of variations on just what the five threads are, the skeins or skandhas that tangle for a while to realize the self. The Buddha’s were rupa, vedana, sanna, sankhara, and vinnana: material form, affective response, perception, mental formations, and awareness. Ulric Neisser (1988) sees five “functionally complementary dimensions: ecological [the embodied person interacting with the environment], intersubjective [the being in interpersonal relationships and communication in the social world], conceptual [the being as nurtured by culture and language, with developed social schemas and scripts], private [the sense of self structured by personal schemas and scripts, with their operation, maintenance, and defense], and temporally extended [the being in time, sensed as continuous, accumulating, growing, and predicting].” I would allocate these to the sensorimotor and situational; native and social; native, cultural and linguistic; personal; and accommodating domains respectively, but with some of the temporally extended bleeding into the personal domain in functions of autobiographical memory.
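    The allocation just described might be tabulated as follows (my summary, using this book’s domain names):

        Neisser’s dimension       Domains here
        ecological                sensorimotor, situational
        intersubjective           native, social
        conceptual                native, cultural, linguistic
        private                   personal
        temporally extended       accommodating, bleeding into personal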
    Overlaying Neisser’s five dimensions with these ten domains suggests the usefulness of nested analogies in examining and fleshing out theories. For example, the missing metacognitive dimension opens a new self-knowledge category of attention and agency, self as what’s in the moving spotlight and also any management capacity we might have in the use of that spotlight. We have some element of choice in which parts of current neural activity we attend to and experience as self. This choice is far from absolute. In metacognition, we have some agency in cropping, framing and lighting ourselves. There are adaptive functions in this. We can push our own buttons, override our impulses, consider alternative behavior patterns, and suspend acceptance of prematurely drawn conclusions.
    Shaun Gallagher (2013) sees self “constituted by a number of characteristic features or aspects that may include minimal embodied [sensorimotor], minimal experiential [sensorimotor], affective [emotional], intersubjective [social], psychological/cognitive [personal], narrative [personal, linguistic], extended [different idea from Neisser, what we own or identify with], and situated [situational, cultural] aspects.” He also refers to a multitude of variations curated by Galen Strawson (1999), among which are a cognitive self, a conceptual self, a contextualized self, a dialogic self, an emergent self, an empirical self, an existential self, a fictional self, a material self, a physical self, a representational self, a semiotic self, and a verbal self. Some of this points out the difficulties that polysemy presents to taxonomy, but the scope and range of these resolved and aggregated meanings suggest a territory no more extensive than that covered in the ten domains being developed here.
    Neil Levy (2001) reports that George Ainslie (2001), in Breakdown of Will, “suggests we see selves as consisting of nothing more than constellations of interests competing for control of behavior; to the extent to which self-unification strategies succeed, some interests get swamped beneath others. The unified agent is then able to act on their own preferences and values, without fearing that their plans will be short-circuited when the opportunity for some more immediate reward presents itself. Unification is something we achieve in the process of intrapersonal bargaining, cajoling and coercing, through which each sub-agent attempts to secure the goals it seeks. When we begin the process of shaping ourselves, we do not seek coherence or agent-hood; there is no ‘I’ coherent enough to seek these things. As the process continues, however, a self comes into being; it is built out of a coalition of sub-personal mechanisms and processes, but it has sufficient unity to pursue goals, to present a single face to the world, and to think of itself as a single being. It is now minimally unified. But unification typically does not cease at this point. From this point on, it – we – may continue more or less consciously to work on itself, shaping itself in the light of its values and ideals. Sub-agential mechanisms build the self that will then continue to shape itself.” And as for the forces of coherence, “Ainslie argues that our responses to the threat of our own inconsistency determine the basic fabric of human culture.” We don’t want to let go of what we’ve assembled.

Self-Schemas and Scripts
    From Wikipedia, “The self-schema refers to a long-lasting and stable set of memories that summarize a person’s beliefs, experiences, and generalizations about the self, in specific behavioral domains.” The scripts are the procedures and protocols specified or suggested for these generalizations. The internal cognitive structures of the private self, our personal or self-schemas and scripts, are our cheat sheets and handy reference guides to life, interpretive templates that we apply to our perceptions to sort them into the proper places in the larger schemas of ourselves, and whatever protocols we apply to guide our behavioral responses to life’s various challenges. Transactional analysis offers an extremely oversimplified reduction of our complexities to the ego states of inner parent, adult, and inner child. Freud gave us the id, ego, and superego. Harry Stack Sullivan characterized three selves as the good me, the bad me, and the not me. Schema boundaries like these are lexemic, and carry the hazard of reification, of being turned into entities instead of figures of speech. We should only use them if we can remember this.
    We inherit an ability to recognize schemas and scripts, especially the social role and behavioral archetypes (as these are cautiously defined herein). But these schemas and scripts still need to accumulate mnemonic content since no content is inherited. Information about ourselves seems to be prioritized in attention and memory. We are especially ready to confirm expectations about ourselves and defend ourselves against challenges. Schemas and scripts are generally self-perpetuating, and they pal around with stereotypes for extra security. But is this all we have to hang onto? It almost takes a dedicated self-schema of “perpetual student” and a script of “heroic learning” to override these static ideas of self and maintain ourselves in dynamic places of learning and personal growth.
    Self-schemas make pictures out of our memorable moments like we make constellations out of stars, connecting our brightest dots, but perhaps paying too little attention to the ordinary or dimmer stars and those gaps where no light shines. We connect these dots into a picture of ourselves. We will then confabulate or interpolate what might be missing between the points. Not surprisingly, the self-relevant dots are memories associated with more affect than the others: significant losses, painful experiences, and trauma on the less pleasant side, and, on the brighter side, the stuff of curricula vitae, formative moments, milestones, statuses gained, peak experiences, trials passed, personal bests, high scores, promises, goals, and major gains. We have a lot of material to build these with, besides our external identifications, beliefs and ideologies. Objectives for self-improvement will often target these internal markers and milestones over more objective criteria. It isn’t surprising that so few of us will regard ourselves as average or ordinary, except for those suffering from depression or low self-esteem, who are busy marking their disappointments instead.
    Our scripts have a stronger reliance on autobiographical and procedural memory. As with the constellations of schemas, we draw on investments of time and energy, like music practice, sports, vocational training, or schooling. We can build identities around the things we’ve trained for, or the behaviors and strategies we’re skilled at. Of course, our familial and social roles and relationships usually come with elaborate rules of etiquette and protocols that we’re also expected to adopt. We will often simply describe ourselves and others using the names of our skills or vocations, which will also imply the behavior that others might expect from us. The implicit nature of procedural memory in training a skill makes it easier to circumvent self-consciousness, doubts, and any anxieties related to self-confidence and worth. This makes occupational therapy a fairly reliable place to start building self-esteem and a sense of competence, which can then spread to other areas of life.
    Self-schemas and scripts, even though physically structured by neural nets, remain little more than cognitive maps and models, as ifs and what ifs. They’re not the realities they purport to represent. Parts of them are no more than confabulations, inserted between our shiny or painful spots. No matter how much we love them or rely on them, they really don’t need to be taken with life-or-death seriousness. The affect we attach to them might have us convinced otherwise, but it’s possible to train ourselves to remember their lack of real substance when emotions emerge in response to threats to the veracity of their claims, and to respond cognitively to inhibit those reactions, never mind their big head start. Unfortunately, what makes these models attractive often differs from what makes them true, and the former has more weight. Being convinced of something contributes nothing towards making it true.
    Self-schemas and scripts are our portrait painters, our autobiographers and historians. They are charged with maintaining our sense of integrity, moral character, consistency, continuity, and self-worth. Unfortunately, they will tell us shameless lies whenever it suits them, and they will keep secrets out of the light and away from fresh air, safe from these two great disinfectants. Self is both deceiver and deceived. The errors and delusions built into these will often go unseen, becoming cognitive vulnerabilities, and even setting us up for psychological problems that can practice themselves into psychological disorders and neuroses. We can be the last to know. It might not be necessary to spring for a therapist or sign up for self-help workshops to uncover these vulnerabilities, but it usually helps to find a reliable perspective from outside of ourselves. Even the old hermits in classical Asian paintings needed an occasional friend to guffaw with. The famously introspective Buddha was even heard to say “The whole of a holy life is association with good and noble friends, with noble practices and with noble ways of living” (SN 45.2).
    Self might be imagined as a vote or straw poll, taken among the numerous factors that constitute it, an ever-shifting coalition of smaller, wannabe selves. The wannabe selves are like squeaky wheels, all trying to get the attention of attention, to grab the light. These are metaphors, not neural modules. The smaller their number, the less complicated people will call you. A fully integrated or unified self might be regarded as ideal, as we are spared the trouble of changing masks and faces as we move from one situation to another. And in fact we get the word integrity from this, from being a single person. But such a condition will usually require finding just the right niche to occupy. This is captured in the Japanese notion of ikigai, literally ‘reason for being,’ where we might combine what we love with what we’re good at, what the world needs, and what we can be paid for. But this is rare, and we’re normally challenged to minimize our compromises and the size of our wardrobe of costumes and masks, unless we’re slaves to fashion. In fact, more often than not, we will be pressured to internalize mutually incompatible schemas and scripts, leaving us partitioned, self-contradictory, hypocritical, and schizoid. This forces us into even more circumstances where we are asked to admit to error, or solve some bit of cognitive dissonance, which most of us seem ill-equipped to do. And it doesn’t help that our reactions to cognitive dissonance are strongest when it comes to our self-schemas and scripts.
    On one level, self-schemas and scripts might be respectively regarded as synchronic (same time) and diachronic (over time) versions of our identity. Schemas rely more straightforwardly on semantic memory, while scripts have stronger involvement in autobiographical and procedural memory. Schemas may be open or closed, as for example a perennial student vs. an economics major. Scripts may also be open or closed, as seen in the difference between an explorer and a tourist. Stereotypy is the behavior of a closed script, and improvisation of an open one. “I am a scientist” would be a closed identity, with aspects of both schema and script, constraining one to the scientific method. An open identity might say instead that “I am an investigator” and still claim, “I employ the scientific method as one of my primary tools.”
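    These contrasts can be laid out as a small matrix (my arrangement of the same examples):

                      Open                  Closed
        Schema        perennial student     economics major
        Script        explorer              tourist
        Behavior      improvisation         stereotypy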
    There are down-the-road difficulties in regarding the construction of our self-schemas and scripts as investments, into which we have poured our lives. But they do in fact constitute our most personally relevant investments. Their reliability is vital to our self-assurance and self-esteem, and yet we frequently misunderstand their practical functions, take them as the very definition of who and what we are, defend them from any necessary modifications, resist resilience or adaptability, and thereby compromise our own success in life, and thus our self-assurance. Things get twisted up and around. Both our knowledge and our ignorance can be driven by our anxieties, insecurities, and fears here. We give up large chunks of our potential experience in the world just to avoid feeling anxious over threats to these cognitive structures. It isn’t surprising that so many of our high- and low-tech military strategies apply here as metaphors, like fortresses, walls, armor, burned bridges, darts, traps, knives, slings and arrows. Injuries and threats to our skin are used a lot too: being burned, frozen out, stabbed in the back, collecting scars, and going numb. It’s all about the boundaries here, what Laozi called our place of death. The more we have to defend, the more we need to be defensive: fear for the boundary comes with the boundary. While permeability has its challenges too, at least it doesn’t turn us to stone.
    There are aspects of self-hypnosis to our acts of self-definition, especially in statements that we repeat out loud. “I am A and not-B” has different effects on us than saying “I resonate with A but not so much with B.” Korzybski’s General Semanticists and E-Prime practitioners have strong opinions about even using the verb “to be” in the first place. Weak linguistic relativity (more on this later) correctly suggests that saying “I Am” helps to create perceptual constraints of doubtful value. We also encounter some peculiar epistemic models when we use the possessive, as when we say “my mind.” That makes me wonder who I am, if not my mind.
    We each develop our own criteria for acceptable self-schemas and scripts. We are of course subject to some self-serving biases, like the Lake Wobegon effect, wherewith we’re all more above average than we are, smarter, more attractive, more competent, and better drivers. Whenever we ought to be vigilantly catching ourselves cheating, we can prove to be exceptionally creative. We will play our games right up to the edge where the strength of our rationalizations meets the limits of our self-esteem, and stop just where the anxiety of cognitive dissonance starts to surface. Self-schemas and scripts don’t necessarily incorporate any features of objective self-awareness. The eyeball goes unseen by the eye in search of the eye.
    Attribution refers to the causal factors we assign to our behaviors, or to those we see in others. We play with causal attributions to our benefit, such that we tend to take more credit for success than we may deserve, and lay more blame for failure on social and environmental factors than is warranted. We use a difficult upbringing as our excuse or justification for bad behavior, which Steven Pinker calls “confusing explanation with exculpation.” We like to think of ourselves as effective and competent agents, so that failure often entails a shift to the passive voice, a victim’s position, or a disease mentality. The tree was in the wrong place and wrecked my car. The alcoholic is undone by a liquid that’s somehow “cunning, baffling, and powerful.” The criminal is not guilty by reason of temporary insanity. But most crime, sans a brain defect, can be rationalized by some temporary insanity arising out of social influence. Naturally, we credit the good we do to our own good character. Sometimes we’ll need to affirm that we have some aesthetic taste, or absent that, that we are at least consistent with current fashions, and this binds us to following these external influences, and buying shit we don’t need. Even more often than this, we like to think of ourselves as moral, or absent that, morally justified. Well, it’s not that we just like to do this: we insist. And if we have to be hypocrites about it to avoid that nasty cognitive dissonance, so be it. Of course the whole nature-nurture debate is a false dichotomy, and becoming more so the more that we learn of epigenetics. And the agency of metacognition often goes completely ignored, except where it gets a nod as conscience or character.
    The trend in modern psychology is steadily towards greater attribution to environmental factors in personal development. It’s going too far, as usual, although no real objection can be made to blaming bad effects on such forces as childhood adversity. As argued elsewhere, a significant portion of the data contradicting or modifying these conclusions is being prematurely dismissed as non-normative. The environmental attribution ultimately will only leave us with what Hannah Arendt called “the banality of evil,” that there isn’t really anything exceptional about evil, it’s just a kind of stupidity that’s taken too far and we don’t know how to stop, and this is driven by an ignorant culture. Zimbardo’s Lucifer Effect looks at this problem as well. But there usually seems to be that disobedient ten percent or so of experimental subjects who show more independence in agency, conscience, character, and spirit. The ten percent is set aside as non-representative of the norm, instead of investigated. We need science to start looking into these exceptions as well, particularly when they challenge the conclusion that human beings are largely helpless when it comes to self-determination. It may be true that humans largely are, but the exceptions might hold some secrets and useful workarounds. What is it about those who won’t claim the Nuremberg defense, “Befehl ist Befehl, orders are orders”? Both personal moral disengagement and deferral to society as the superior moral agent have led this society as a whole down some pretty dark paths, and one-way, dead-end streets. Instead of excusing ourselves by pointing to environmental influences, we need to champion that kid who asks “Why is the Emperor naked?” What is it about those who practice Satyagraha? Sure, it’s non-normative, and downright exceptional even. We do hope to not be alone on a soapbox on a corner somewhere. Culture desperately needs these guiding lights.

Shifting Identity
    All four sets of our major anticognitive functions have dastardly deeds to do in this personal domain, but none are so villainous or self-defeating as defense mechanisms. Insecurities about who and what we are are companions of doubtful value from the beginning of life. Our parents are normally rank amateurs at meeting the needs we need to have met, and we usually also have siblings maneuvering for their attention. The other kids are playing their own deadly games, jockeying for social position and connection. We might have to go to Sunday school and get threatened there with everlasting hell for not having imaginary friends. The advertisers start in on us as soon as we’re old enough for cartoons, telling us what we would consume and play with if we were the good kind of children. It’s really no wonder that we start grasping for things that seem to promise a place to make a stand, some sort of reliable foothold, or holdfast, something we can be confident in. Unfortunately, we seldom stop there. Securities need backup securities, and insurance policies are needed for those. It’s still a good thing that we keep adding more and more elaborate schemas and scripts: this is what it is to live and learn. Where we go wrong is in adding them to what we believe we are or must be. Once we do that, we can no longer submit them to questions and challenges. We can’t allow them to be criticized because this criticism becomes a criticism of what we are, an attack on our very person. Neither can we let go of them because this becomes a personal loss, a lessening of who we are. Alexander Pope penned this trivial truth: “A man should never be ashamed to own he has been in the wrong, which is but saying, in other words, that he is wiser today than he was yesterday.” Self-criticism may be shunned by the new age folk who pray for total, unconditional self-acceptance, but you can see, in real time, the clutter collecting in their minds.
    Contrary to what we like to imagine, set beliefs and identifications are not necessary to real security, or to a real identity. They are only necessary to our illusions of security and identity. Once again, there are differences between open and closed schemas and scripts. To say to yourself, “I really like what Buddhism has to offer” is a different thing from saying “I am a Buddhist.” The first leaves you a lot more free to question, examine, or reject. The second will be reluctant and perhaps afraid to let go of any questionable part of the Dhamma (and yes, Buddha actually did say some pretty dumb and silly things). This non-identification, of course, is an actual teaching of Buddha, but that doesn’t stop Buddhists from ignoring it anyway.
    It’s one thing to have all of the possessions we need, and quite another to enter a state of neediness should some thing thought to be needed go missing. It’s one thing to enjoy a handful of good friendships, and another to be less of a person if our friendships have proven unsatisfactory. Neediness itself isn’t necessary, and it can make life a lot more difficult with an accumulation of burdens that may need fierce and frequent defending. Insecurity only leads to a need for more security. We can be far happier by substituting an adaptive intelligence for accumulated knowledge, or an authenticity of feeling (assuming some level of self-management) for an emotional state that must be constantly maintained against disconfirming experience. Learning to be grateful for what we already have is one of the greatest tricks going, and it goes well with mostly wanting the things we’re least likely to lose. Permeable schemas and scripts have the advantage of allowing a continuity of identity to persist while parts of the system, even large parts, maybe even core parts, are being repaired or replaced.
    An illusion that consistency or persistence means remaining the same can present difficulties. If our identities and beliefs are seen as changeable, others might call us flip-floppers, wafflers, and vacillators. Yet remaining the same person is the opposite of personal growth. Being unable to grow because we are so fiercely defending what we have become so far should actually leave us less confident about our identity and its security, as well as our intelligence. It’s really a question of whether our faults are correctable or lasting. We ought to be seeing such static states and conditions as the real threats to personal security, self-confidence, and self-esteem. One of the great benefits of being a flip-flopper, waffler, or vacillator is in how this will allow us to embrace contradiction and paradox in ourselves, provided we can do this without hypocritical partitioning. “Do I contradict myself?” Walt Whitman asked. “Very well then I contradict myself (I am large, I contain multitudes).” This attitude can take a lot of the pain out of cognitive dissonance as well, and it gives us a better start on seeing different points of view in others. It’s got things to contribute to our sense of humor, too. It’s only up to our sense of integrity to make sense of it. Whitman did this simply by owning it.
    We certainly should be wary of identifying with and believing in things that can be easily taken away by forces outside our control. It’s one thing to simply deny evidence and arguments that challenge them. It’s another to have them physically ripped away. When we affix our egos to our great successes instead of our best efforts, we are just begging to think ourselves losers. We don’t really make ourselves physically larger by owning or identifying with things outside our skin, or with ideas that come from without. This is only an illusion. But if that thing you had faith in gets ripped away, that’s where you get your crisis of faith and dark night of the soul. The illusory self might in the end be all we can truly call our own, but is it its largeness or its smallness that makes it more vulnerable? Does it feel better to think ourselves larger or smaller, more dense or more diffuse, as a strategy of self-defense? We are only defending ideas here, thin air. It might be best to let go of the idea of vulnerability altogether, given the size and the power of the remainder of existence, and think instead in terms of adaptive intelligence that lets us avoid threats and even seize opportunities in what only seems like a threat to what doesn’t even really exist.
    We have a sense that our personal schemas and scripts are investments of time and energy that must be guarded against devaluation. It’s natural to try to maintain a positive assessment of our own good judgment, but cognitive dissonance can arise in some of its most disagreeable forms here. As is often the case, Twain said it best: “It’s easier to fool people than to convince them they have been fooled.” We have an evolved heuristic called “escalation of commitment,” sometimes just called stubbornness, that will rely on choices already made, because change demands both the effort of learning something new and the still-greater effort of unlearning, patching, or remodeling the old. We get especially whiny about having to unlearn. When used as justification in specious reasoning, this is called the sunk cost fallacy, another phrasing of which is throwing good money after bad. The ever-wise Anonymous advises, “Don’t cling to a mistake just because you spent a lot of time making it.” This is an expression of the effort heuristic as a cognitive bias, a one-dimensional mental correlation of the value of something with the amount of effort that was expended in acquiring it.
    Thomas Gilovich (2008) shares an observation by Robert Abelson: “The similarity between beliefs and possessions is captured in our language. First of all, a person is said to ‘have’ a belief, and this ownership connotation is maintained throughout a belief’s history, from the time it is ‘obtained’ to the time it is ‘discarded.’ We describe the formation of beliefs with numerous references to possession, as when we say that ‘I adopted the belief,’ ‘he inherited the view,’ ‘she acquired her conviction,’ or, if a potential belief is rejected, ‘I don’t buy that.’ When someone believes in something, we refer to the fact that ‘she holds a belief,’ or ‘he clings to his belief.’ When a belief is ‘given up,’ we state that ‘he lost his belief,’ ‘she abandoned her convictions,’ or ‘I disown my earlier stand.’”
    The most temporary and intense of our self-identifications will occur when we take our own emotions as the essence of who we really are. This can be reinforced linguistically, too, as by a kind of self-hypnosis. I’m angry, I’m sad, I’m afraid, I’m offended: these have the power to hijack our entire sense of identity, and our resultant behavior, until they pass, and they aren’t always willing to stop and entertain arguments for the resumption of personal dignity and self-control. Sometimes people need to follow these emotional hijackings up with the posting of bail bonds and subsequent arraignment. This should be taken as a sign. Counting to ten and taking slow, deep breaths can sometimes avoid this expense.
    In the event we aren’t doing much growing at present, or we find ourselves in need of a boost in confidence, we can entertain an internal illusion of our growth by thinking less of ourselves in the past. You might observe this in a recovering alcoholic who is relishing the horror of what he used to be by imagining all the nights he can’t remember clearly in the worst possible light. This might help a bit, but it’s still a trick. There are, of course, other reasons to pump up our sense of self, if only just a little bit, just enough to overcome anxiety or self-doubt, like liquid courage without the liquid. If this becomes a self-fulfilling prophecy and doesn’t hurt anyone, who wouldn’t give this a pass? We just ought to remember that what works is likely to get maintained, and when this sort of thing rigidifies, it can start to work against us.

Ego Defense Made Easy
    SNL’s Stuart Smalley’s daily affirmation was, “I’m good enough, I’m smart enough, and doggone it, people like me.” You just knew, of course, that these defenses just wouldn’t hold up against even the lightest of assaults. We have a sense that our developed personal schemas and scripts are brittle or fragile because threats to them make us feel anxious. The smallest bit of criticism, censure, or rejection might present itself as a threat of trauma to the ego. Even being called into question might give us a bout of stage fright. Despite the fact that life is full of these challenges, that they come and go with some frequency, and despite having shown ourselves resilient and even courageous time and again, we still fear the next time as though we’ve learned nothing about our own resilience. Finally, sometimes we’re afraid to see the strength of our self-esteem tested, so we will under-perform or self-handicap. This way, a poor performance can be explained simply by not having given it our all.
    There are useful labels to adopt with regard to ideology in general, and its specialties of politics and religion, that still maintain a maximum resilience, and the ability to change and grow. They represent open systems of learning that don’t really need defending. With ideology, the label is eclectic, the approach that allows you to pick and choose the best from a range of sources and perform your own synthesis. True believers will object to our doing this, believing that the ideology, exactly as packaged and marketed, is intended to function properly only as a whole. Recovering addicts in 12-step programs are warned against taking a “supermarket approach,” working only the steps they approve of. There is a saying in recovery that “my best thinking got me here.” Here means both to the bottom and into the recovery program. When your cognition has proven itself flawed, it’s often wise to question it, and give other ideas a chance. So there is something to be said for suspending some disbelief, giving a system a chance to work as a whole, and examining how the parts fit together. The combination of parts may result in some synergy in the process. But after this, there may be no need to keep all of the pieces. If labels are needed, these open types allow us to come and go, dip in, duck out, and suspend both belief and disbelief.
    The most resilient and defensible strategy in politics is simply to declare independence from partisan platforms and examine each plank thereof on its own merits. You remain free to vote as you please, except in some party primaries, although this may still leave you without a viable candidate whom you could stomach voting for. The summum bonum for political parties and organizations is found in their total membership rolls, and secondarily in campaign funds receivable. But to get to this, they will pre-package serious compromises to please the common denominators. They would rather have a million members speaking with one voice than be one group speaking with a million voices. The individual planks in the platform suffer and the best ideas seldom get heard. While any sane government would implement proportional representation, for this and other reasons, sane government remains largely an oxymoron.
    Some of the most destructive forms of self-defense in religious belief are being seen (in 2018) in the anti-intellectual and anti-science trends supported so stridently by evangelical Christians, particularly in the United States. This is a massive demonstration of the backfire effect (both a defense mechanism and a cognitive bias), a strong attitude polarization found in overreactions to polemical speech, and showing a natural affinity for false dilemma. When the “security” held in a world view is threatened, the true believer will simply dig in their heels, or double down on an error. America’s right of free speech has permitted a gradual increase in open criticism of Christian doctrine. This has led to more strident declarations of dogma and faith. This in turn has led both to the critics’ recruitment of logic and science and to increased attempts to use humiliation or shame as religious claims are reduced to absurdities by the latest scientific discoveries. Unfortunately, science has suffered a bit in the process, as its public image has come to resemble religion, with its preachers, believers, and proofs. Not to be outdone here, the evangelicals have turned ignorance into a point of pride, such that attempts to shame or embarrass them are now taken as compliments on the strength of their faith. The critics, of course, are seizing this moment as a rare opportunity to put some cracks in the theocratic walls without getting imprisoned, tortured, or burned at the stake for heresy. With religion, an open declaration of agnosticism will leave us more free to come and go, and explore what might be worthwhile in any and all denominations. This isn’t as uncomfortable to believers as the harsher declaration of atheism, which can still get you murdered in many countries, because god is love. You can still be both an atheist and a Buddhist, a Daoist, or a Confucian, however, if you absolutely have to tell people something less threatening than pagan or agnostic. Since those are called religions, they will mistakenly assume that there must be a god in there somewhere.
    A cognitive detachment from identifications, ideologies, and beliefs isn’t the same as non-involvement. In fact, this liberation means the freedom to explore involvement with fewer constraints. We have less loss, theft, and wear to worry about. You can suspend both belief and disbelief. You can still like things that others believe in. You can still “really really” like them. Most reputable scientists “really really like” the theory of evolution. They are being untrue to the scientific method, however, whenever they claim to believe in evolution. Neither do scientists prove things right or wrong, except in the old sense that prove means to test. Mathematicians construct proofs, but even those don’t necessarily transfer from the chalkboard to the great outdoors. De-identification can be regarded in a similar way, as a license to explore with fewer constraints. I am, on one level of reality, Caucasian and American. This is the identity of a lot of Americans, although far fewer of those are as far left and anti-authoritarian as I am. But I have no home in those boxes. I find it a bit embarrassing to be both a white man and American. I even find it embarrassing to be human these days, though being a hominin, a primate, and a mammal is still more or less OK. Distancing myself from those identities is actually a bit of a relief, not really a loss. I can shift scales, get cosmopolitan, and think globally, in ways that patriots cannot. I can see possibilities for diplomacy and peace in ways that patriots cannot. I’m proud to be a Terran. My world is 58 times as big as America, and 20 million times as old. And it won’t be going away soon, like all nations must.
    We only do harm to our self-esteem with our most common approaches to protecting our self-esteem. And this includes the latest fad of thinking that self-esteem is a right and should be unconditional and unearned. In this contemporary Western culture, the advocacy for unconditional self-love has become an epidemic of narcissism, and ironically, this is most prevalent in new age self-help programs built on revised versions of mystical traditions that sought to diminish the ego. Buddha didn’t say a tenth of the things that these fools quote him as saying. What he taught was “getting over yourself.”
    It’s work to build yourself into a being that’s both capable and worthy of respect. We make ourselves into victims by allowing advertising, propaganda, and proselytizing to manipulate our insecurities and fears, and then let other forces tell us who and what we are, and what we need and want. We tend to look for self-love in all the wrong places, and in things that actually harm us. Up front, it’s more work to examine the pieces that want to become a part of us before we let them inside and integrate them into our being. But in the long run, it’s easier, and we spend a lot less time and energy defending the errors we’ve made.
    Another approach to self-esteem is simply to let it go, what Castaneda’s Don Juan called “losing self-importance.” Pride is a two-edged thing, or else it should really be represented by two different words. There are problems with the words humility and modesty as well, particularly when they turn into a vainglorious self-effacement. The West is in love with self-love to the point of vainglory and narcissism. They say you have to love yourself before you can love others, but then they’re faced with trying to love the messes they’ve made of themselves, and they find themselves with no time for others. They even say you have to get yourself all fixed up first before you can go out and change the world. An alternative, not situated between these extremes, but somehow embracing the best parts of the bunch, is authenticity, just being who you are, generally OK with that, but ready to admit and correct error. Then if you need to feel better, you can go off and do the world some good. Maybe pack an extra sandwich before going to your self-help class, and give it to a homeless person on the way to becoming a better person.
    We still need to find an appropriate place for ourselves within our social and cultural contexts, and this will always entail some requirements that might look a lot like compromise. But if we look closely, we can see that we would never become much of a self without interacting in these contexts. We would be feral, speak in odd noises, and likely be unable to invent a stone tool. Ubuntu, an ethical philosophy out of Africa, takes this a step further and claims that we have no self or ena until this is built out of our life’s social and cultural interactions. I am because we are. I wouldn’t go so far as to say we are born with no nature, but we don’t take this very far without interaction, learning, and feedback. We need others and we need culture, in addition to needing to be what we can of ourselves. Being exceedingly idiosyncratic is hardly a surefire way to secure a lot of friendships. Nor will speaking to ourselves in our very own language win us a spot in cultural history. Real intelligence is adaptive, and adaptation is something different than compromise. To make something of the raw materials provided by society and culture, and to give what we can back in ways that can be accepted, doesn’t lessen us. We ought to be true to ourselves, but we needn’t be idiosyncratic in idiotic ways.
    Finally, sometimes the adjustment, correction, maintenance, and defense of our self-schemas and scripts becomes too much to handle, and we run and hide, escaping from purpose, commitment, and responsibility. A self becomes emotionally unpleasant, even intolerable to occupy. The cat sits in the midst of confusion and simply starts licking itself. Alcohol and other addictions are often the escape of first resort. Religious conversion is often a result of hitting this kind of bottom. Less-defined versions of fugue and fantasy can serve that purpose without the commitment. Suicide is somewhat more extreme, but has the advantage of permanence. We can simply change ourselves by destroying ourselves. Self-destructive behavior can give us the “freedom from,” but it makes the “freedom to” a lot more difficult to find.

Integrity and Character
    To what extent can we have some confidence in what we know without the support of belief? Can “really really liking” an idea be superior to believing in it? Can we have identity without identification? How far can we go being merely eclectic, independent, and agnostic? Are we really giving anything up, other than a false sense of security and an unearned self-esteem? Can the autodidact lay a legitimate claim to a higher education? We may in many cases be giving up public certification and all of the benefits that go with that, but that doesn’t make us imposters. It’s harder to get through some doors and past some gates without the officious membership card. One is reminded of the Scarecrow’s diploma, awarded by Oz, after he’d shown enough brains to get him that far and beyond. We might even go one step further than saying “I’m an eclectic, politically independent, religious agnostic” and simply say “I really really like eclecticism, independence, and agnosticism,” opening up our schemas and scripts all the way. I’m not sure whether going that far yields more than diminishing returns, but it’s a question to ponder. We do still need somewhere to make a stand. Perhaps we can just learn to enjoy saying, “I’m someone who doesn’t want to stop learning, so I’m going to be someone who prefers questions to answers.”
    Looking around at how human beings behave in large groups, and how they regard themselves, we might wonder, just what’s so frightening about being unique, an individual, someone independent in thought, or at least able to question what’s being taught and widely accepted. If it isn’t frightening, just what is it? We know how important it is to be a social animal. There were few greater terrors in our hunter-gatherer times than expulsion from the tribe. And we know how important culture is, to stand on the shoulders of giants and ancestors. But is being your own person really such a threat to that, or is that just what most people have been led to believe? And why have they been told that?
    The word integrity has the same root as integral and integer, being a whole thing, not fractional. A person with integrity can maintain a stronger sense of continuity in moving from context to context, without changing facades or summoning internally inconsistent aspects of character. The work of integrity is to minimize conflicts of interest within the self, or between the selves, and maximize internal compromise and diplomacy where gaps must be bridged. That we can ever be a single or perfectly integrated self is as much an illusion as being a self in the first place, but we remain able to create some coherence out of the sense and idea of coherence. We will always contain sub-selves pointed in different directions, adapted to specific kinds of problems, but we can improve on the degree to which these parts compromise each other, or the whole, by using an idea like integrity to integrate them. Integrity can also help us maintain a sense of dignity, or of having higher standards. There are things we might be sorely tempted to believe, simply because to do so would be tremendously gratifying or comforting, even if deep down we know them to be untrue. Cognitive dignity helps with the discipline here.
    Despite its thespian connotations, character is probably the word that best describes the ideal that allows us to thrive without all of the popular social and cultural prosthetics. Maybe the best version of this idea comes from the Chinese word dé 德, which is also translated virtue, morality, goodness, ethics, and kindness. It’s the moral force that comes from being true to yourself and to your path. This of course begs the question of what it means to be true, and then what the true itself is. The Zhouyi, or Book of Changes, charges yǒufú 有孚 or “be true” in 26 places, yet nowhere does it state what being true really means. But it does claim that being true is as good as being impressive, and that it’s as good as a bond. The appeal is to our nobility, and if we’re pretending to not know what it means to be true, we need to look a little deeper, and with a more rigorous honesty. To be true is simply to quit lying to ourselves. Being or holding true is also a perfectly legitimate linguistic translation of Gandhi’s term satyagraha. There are also a number of great lexemic contributions to the idea from the ancient Greeks, like areté, andreia, dikaiosynē, phronēsis, and sōphrosynē. Buddha could offer another list for building a self with the promise of well-being, starting with appamada, a combination of heedfulness, diligence, and caring. None of these will ever come as easily to us as salvation from praying to saviors. They are hard work, a lifetime of it. But you get what you pay for.
    The metacognitive domain, to be discussed at length later, is the primary locus of corrective measures we can take on our personal problems. This is the home of constructive feedback and therapy, and/or honest self-appraisal and mindfulness practice. Some of the available metacognitive practices seek a process called ego dissolution, a deconstruction of one or more of our self-schemas and scripts. Being resilient, humans will of course have to replace them with something, after the experience is over. If the experience is potent enough, we may see that the old model didn’t function as promised and we can rebuild with major changes.
    Specific anticognitives pertaining to the Personal Domain are listed and discussed in Chapter 3.4, Cognitive Biases, Chapter 3.5, Coping Strategies, Chapter 3.6, Defense Mechanisms, and Chapter 3.7, Logical Fallacies.



2.7 - Social Domain

Fellowship with Others, Social Emotions, Social Role and Behavioral
Archetyping, Sociobiology, Belonging and Behaving, Individuality,
Consensus and Diversity, Us Versus Those Others

“Know thyself” is a good saying, but not in all situations. In many it is better to say “know others.” Menander

Fellowship with Others
    This is the ancient domain of life around the fire, with the sun, moon, and stars overhead, and also that of chattering to each other in the trees. Much of the life to which humans have adapted is social, and largely at intimate family and tribal scales. These adaptations extend backward through deep time, even before the first primates. It isn’t surprising that our emotions most frequently emerge out of the dynamics of social interaction, or that the effects of those interactions are at the heart of so many self-schemas and scripts, in how we see ourselves through other eyes. As with most of the other social animals, including the manta rays, the complexity of social interaction and the size of social groups appear to be major evolutionary drivers of brain size, at least where unrelated to body mass. For many, the importance of social functions may even surpass their own importance as individuals. An enormous portion of our emotional lives consists of adaptations that guide us through the complexities of social living, and these arise ontologically prior to reason. This is what served us as morality before we had mores; it underlies our most basic moral sense and grew into social mores. The domain is pre-linguistic, although it does still encompass earlier forms of communication, including facial expression and gesture, as well as signals and other sounds.
    This is the domain of Gardner’s interpersonal intelligence, “characterized by … sensitivity to others’ moods, feelings, temperaments, motivations, and … ability to cooperate in order to work as part of a group.” But this “is often misunderstood as being extroverted or liking other people… . Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers.” Social intelligence or competence can be studied in several areas. The social skills identified by the Employment and Training Administration of the US Department of Labor include these: coordination (adjusting actions in relation to others’ actions), mentoring (teaching and helping others how to do something), negotiation (bringing others together and trying to reconcile differences), persuasion (motivating others to change their minds or behavior), service orientation (actively looking for ways to be involved compassionately and grow psycho-socially with people), and social perceptiveness (being aware of others’ reactions and responding to them with understanding). The last of these is an important part of theory of mind, but we should add the kind of savvy that can reliably predict another’s behavior. Respect for others (a willingness to look twice instead of judging too quickly) ought to be on that list, too. So should a sense of humor, of the non-schadenfreude variety.
    Face-to-face, we have a mix of ancient non-verbal, proxemic, cultural, and linguistic dimensions, in addition to the primate and hominin relations in the social domain. It makes sense to develop our models, and even morals, on an understanding of what we are, and that begins with mammals and primates. When we look at evolution, we should first look to earlier lifestyles to which we’ve had time to adapt. Anthropologist John Gowdy writes of a Eurocentric misreading of the natural man as the economic man: “this man is naturally acquisitive, competitive, rational, calculating, and forever looking for ways to improve his material well-being.” But these “deep” directives do not seem to fit hunter-gatherer peoples well, and strongly suggest a cultural source. On principle, neither a primatological nor an anthropological perspective should assert that an is is an ought. It might as easily suggest something to be overridden or overwritten, but with due regard for its evolved place within us. Generally speaking, factors behind these differences will be reserved to the cultural domain, while the more universal issues will be reserved for the social. To some extent, we can look to cultural universals for clues to the pre-cultural parts of ourselves. These can be clues, but not confirmations. In this domain are the original, unspoken, unwritten sides of the social contract.
    Lawrence Kohlberg tracks six stages of human moral development: on a pre-conventional level sit obedience and punishment, and self-interest or what’s-in-it-for-me; on a conventional level are interpersonal accord or social norms and law-and-order; and on a post-conventional level are human social contracts and principled ethics. In this domain we regard the third of the six, interpersonal accord or social norms. Jonathan Haidt (2003) refers to the social emotions as the moral emotions. He charts two important features of these, as disinterested elicitors (reactions to good or bad things happening to others) and prosocial action tendencies (effects of social encounters on ourselves that motivate some positive or negative response). Then he identifies four families of moral emotions: other-condemning (contempt, anger, disgust); other-praising emotions (gratitude, elevation, moral awe, being moved); other-suffering emotions (sympathy, compassion, empathy); and self-conscious emotions (shame, embarrassment, guilt). In speaking to the inherited nature of the moral emotions, Haidt clarifies that “Nature provides a first draft, which experience then revises… ‘Built-in’ does not mean unmalleable; it means ‘organized in advance of experience.’” And after the fact, “we do moral reasoning not to reconstruct the actual reasons why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment.” Six central themes, axes, dimensions, or “modules” of social morality have been delineated by Haidt and others, to wit: care-harm, fairness-cheating, loyalty-betrayal, authority-subversion, sanctity-degradation, and liberty-oppression.
    The social domain encompasses a number of Maslow’s needs, especially those for safety or security (physical, social, employment, and resources); belonging and love (friendship, family, community, and sexual intimacy); and esteem (supported confidence, reflected self-worth, sense of accomplishment, and respect of and for others). In addition, we can call out needs for (or wants of) refuge or safe harbor, connection, succorance or consolation, and being soothed or comforted. We also have needs or wants to shield ourselves from abandonment, separation, or rejection. Maya Angelou put her notion of our social needs more poetically with “the four questions we are constantly asking: Do you see me? Do you care that I’m here? Am I enough for you, or do you need me to be better in some way? Can I tell that I’m special to you by the way that you look at me?”

Social Emotions
    Some chart the social emotions biaxially. The Stereotype Content Model (Susan Fiske, 2006) uses the dimensions of warmth and competence as these are perceived in others. Others have referred to the warmth dimension in terms of the other being non-competitive, and the competence dimension in terms of the other’s status, but the elicited emotions are the same. When we see high warmth and high competence, we might feel pride or admiration for good people doing good. High warmth and low competence might elicit pity or compassion, as for the elderly or the disabled. Low warmth and high competence might elicit anger or indignation, as towards those jack-booted overlords or the ostentatious rich. Low warmth and low competence might elicit contempt or disgust, as towards a criminal element or moochers. Perceptions of low warmth or low competence might also be nothing more than our own bias or inaccuracy, however, and propagandists will try to train the masses into perceiving these qualities in targeted out-groups, scapegoats, and manufactured enemies.
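    The four cells might be sketched as a simple matrix (my arrangement of the model as just described, not Fiske’s own layout):

                         High competence         Low competence
        High warmth      pride, admiration       pity, compassion
        Low warmth       anger, indignation      contempt, disgust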
    We might distinguish between morality and ethics in a couple of ways. Morality tends to be more implicit and pre-reflective, even where explicitly learned while growing up. Ethics, as a branch of philosophy, implies a more reflective examination and explicit choice of behavioral values. Neuroscience might point to the differences in the vmPFC and dlPFC respectively. Moral quandaries activate the amygdala, vmPFC, and insula before the dlPFC even begins to ponder the problem at hand. We are finally beginning to learn that the vmPFC is the more potent of the two, as well as first in line for moral judgment, so exemplary moral living benefits greatly by keeping that old vmPFC cleaned of its fears and resentments. The Buddha had the idea that we humans are basically good or moral, even though part of this comes from an innate sense of shame and aversion to wrongdoing (called hiri and ottappa respectively) that are unrelated to cultural instruction. We have a normal, natural, and genuine way of knowing right from wrong called natural virtue (pakati-sila), and sometimes rendered “without-crisis morality.” Under normal circumstances, we will behave ourselves reasonably well. It’s when our complications lead into confusion and crisis that we increasingly stray, often increasing the confusion, exacerbating the crisis, and engaging in vicious cycles.
    Affect remains a far more potent force than reason in our moral judgments and subsequent behavior. One has only to observe the discrepancy between religious pre- and proscriptions and the behavior of religious adherents. The hypocrisy that we see in true believers isn’t surprising at all, considering that this morality has developed in ignorance (and repudiation) of what we are as biological organisms. What’s needed is a reconfiguration of moral reasoning that’s grounded in a better understanding of human nature and the ways we relate to each other emotionally. Penn Jillette offers: “The question I get asked by religious people all the time is, without God, what’s to stop me from raping all I want? And my answer is: I do rape all I want. And the amount I want is zero.” Life itself has evolved a couple of tricks to help us get along with each other, not least being oxytocin, what Sapolsky calls the hormone of parochial love. This helps us form more lasting bonds with those we come into contact with, before our familiarity or contempt has a chance to drive us apart. It doesn’t help us much with our xenophobia, though.
    The social emotions themselves aren’t a cause behind our twisted thinking. Rather, it’s the things we will do to feel the pleasant ones and to avoid the unpleasant ones, and this starts out early in life. We learn the cognitive and the behavioral, the social schemas and scripts, gradually but cumulatively, from the beginning. Much lip service is given to the importance of early childhood social development. But if human beings truly understood how important this was, all modern societies would be more fully structured around it. While it might go too far to require a license to parent, there might at least be rewards for parents willing to learn some things (for years) beforehand. Childhood adversity, whether this is simple poverty and malnutrition, or privation in equal access to culture, or physical and sexual abuse, is about the worst way to begin, and recovery often warrants social intervention and even denial of parental rights. Such adversity even causes the brain to play hurry-up in its growth, prune connections, accelerate pubertal development, and give up on its PFC, amygdala, and hippocampus development sooner than it should need to. Distress in adolescence may interfere with development of the anterior cingulate cortex, with consequences for impulse control, error detection, and other higher-level functions (Tyborowska, 2018). In lay terms, kids grow up too quickly, precluding more mature development, and they can get stuck with antisocial traits.
    Childhood ideological, religious, and political indoctrination is normally considered a parental right and social necessity, but if we really understood the long-term effects of impermeable beliefs and self-identifications on our chances for cognitive maturity, we would give this a much eviler, skunkier eye. The social integration of children with peers is also fraught with issues. Relational aggression, the use of friendship and status to manipulate others, the use of insults, humiliation, and isolation in jockeying for social position, is a real challenge to prevent, especially when children really need time to play among themselves, out of sight of hovering adults. The public school system offers little choice in the company children keep for much of the time, while homeschooling may deprive them too much of social learning. Another problem found in modern public education, that isn’t found in smaller tribes and frontier towns, is a lack of vertical or age diversity. Children are almost always kept with other kids their own age and so they lose the chance to teach the younger ones and learn from the older. This is a little better in larger households, but there the parents will often be too busy to give them much personal attention.

Social Role and Behavioral Archetyping
    As in the personal domain, human social perceptions and behaviors can be discussed in terms of schemas and scripts. It’s going to be difficult to fully isolate the cognitive functions and dysfunctions that operate in this domain from those in the cultural domain. Generally, the social dimensions are those we develop in living life among others without explicit training and rules. This would include natural family and community relationships and the unwritten habits and mores of human society in general. Obviously, society and culture coevolve. Joseph Henrich (2015a) has noted that “Comparative research from diverse societies shows that human social behavior varies immensely across a broad range of domains, including cooperation, fairness, trust, [in-group favoritism/cheating, costly] punishment, aggressiveness, morality, and competitiveness. Efforts to explain this global variation have increasingly pointed to the importance of packages of social norms, or institutions. This work suggests that institutions related to anonymous markets, moralizing religions, monogamous marriage, and complex kinship systems fundamentally shape human psychology and behavior.”
    Sometimes unfortunately, as it is for sustainable hunter-gatherer societies exposed to modern times, the transition from social to cultural norms can be one-way. It’s more than a small challenge to move back to the village from the city, even though many of us can testify to the wisdom of that. Dan Ariely (2009) writes, “When a social norm collides with a market norm, the social norm goes away for a long time. In other words, social relationships are not easy to reestablish. Once a social norm is trumped by a market norm, it will rarely return.” One obvious consequence is that life gets more convoluted. For example: “Money, as it turns out, is very often the most expensive way to motivate people. Social norms are not only cheaper, but often more effective as well.” The variations noted by Henrich at least ought to be examined sub specie culturae: what demands are these cultures making on us, and what allegiance do we owe there? We have elaborate cultural schemas and scripts that structure our social relationships, and even our responses to others within them. In describing them, images of mazes and rat races are often used.
    We begin our socialization with a prior domain, the native, and the innate knack we have for living with others. The term archetypes was used there, in a somewhat more rigorous sense than that used by Jung, which in its turn is grossly misunderstood by large numbers of his followers. What these do have in common with Jung’s is their being inherited, universal to the species, and arising out of unconscious processes. Archetyping is a function of the native domain that underpins the social. There is nothing metaphysical or new age about this: archetypes are simply areas of evolved cognitive predisposition and perceptual facility. They are genetically enabled processes. The genetics don’t contain or encode semantic or eidetic content, they only encode protein manufacture. Evolved social intelligence begins with recognition of social roles and behaviors. As we grow, archetypes evolve into stereotypes, which facilitate quicker and more automatic judgment in dynamic situations where more ponderous intellectual consideration might be a luxury. Much of this is all about predicting the behavior of others. But stereotypes and other forms of category-based inference have their shortcomings and are easily abused in the cultural domain.
    Social role archetyping is an inherent, inherited predisposition to recognize and classify certain typical personal, familial, and social roles. These archetypes can be inferred from the universal recognition of such roles across diverse human societies, and in many cases, by behavioral evidence of their occurrence in other primate species. They pre-exist the social, cultural, and linguistic domains. Some candidate examples are adoptee, adversary, ally, alpha, bully, caregiver, challenger, child, coward, cuckold, elder, explorer, father, fool, gossip, hero, infant, lover, mediator, mother, rebel, sage, sibling, spouse, suitor, thief, sucker, sycophant, and trickster.
    Subliminal perceptions of neurochemical cocktails and combinations of involvement of different parts of the brain might play a major role in their recognition. Again, we don’t inherit cognitive content, and archetypes are not encoded as concepts, ideas, or symbols, as many mistake them to be. They are merely a preparedness or readiness to perceive, and then build or organize perceptions around this as schemas, a priming to sort experience in certain ways or according to specific criteria. Social role archetypes underlie many of the schemas we encounter in understanding the social domain and any innate sense of social structure we inherit. Most archetypal social role categories run contrary to a literal sense of social equality, and are either concerned with diversity, functional specialization, or social ranking. Errors can be made in social archetyping as a result of personal exposure, particularly in childhood. For example, some families are such that bully and father are easily confused.
    Behavioral archetyping is a similar predisposition to recognize and classify certain typical dynamic situations found in social settings, stretching from causes to their consequences. Evolution has shaped the processing of our perceptual input to detect certain types of social behaviors. These pre-exist social, cultural, and linguistic domains. Some of the candidate examples are adulation, alliance, altruism, apology as contrition, apology as explanation, quid-pro-quo balance sheets, banishment, being watched, betrayal, bluff, boasting, censure, cheating, coalition formation, commiseration, competition, cooperation, crime, counsel, deception, dominance, exchange, fairness, feint, flattery, fraud, gossip, gratitude, grooming, hospitality, humor, incest taboo, influence, insult, intimidation, negotiation, nurture, obligation, persecution, praise, prank, reciprocity, reconciliation, refusal, rescue, retaliation, sacrifice, seduction, sport, stinginess, straying, submission, supportiveness, surrender, suspicion, theft, trade, treachery, trust, and xenophobia.
    As with social roles, subliminal perceptions of neurochemical cocktails and combinations of involvement of different parts of the brain may play a major part in the recognition of these archetypal behavioral patterns. This could also underlie any innate sense of morality we inherit, although much of our innate moral behavior can still be overridden by strong emotions in both children and adults with poorly developed PFCs. This is where we encounter Donald Brown’s human universal of “classification of behavioral propensities.” The inevitable and unlearned emotional reactions we have in encountering these behaviors form the inherited substratum of our moral sense. The behavioral archetypes underlie many of the scripts we encounter in understanding the social domain. These develop into two distinct kinds of behavioral scripts: the procedural scripts will allow us to anticipate the sequential unfolding of typical situations and predict their outcomes, and tactical or strategy scripts will consist of problem-solving and game-playing heuristics.

Sociobiology
    The biological studies of this domain are primatology and sociobiology. This has naturally met with much resistance from the human exceptionalists, but it’s a fair bet that at least most of the social behaviors that we have in common with other primates are evolved and innate in us as well. Slaves to black-or-white thinking will take this to mean genetic determinism, as if the only alternative were for us to have no nature at all. Evolutionary forces involved in an individual’s survival into adulthood and reproductive selection ought not be overlooked, but multilevel trait and group selection, with a place for adaptive societies and altruism, forms significant parts of current sociobiology. The commonly seen social science aspects are undertaken by anthropology and sociology. Many in these fields try to avoid primatology, to their detriment. In theory, maladaptive processes are gradually selected out of our heritable traits, but natural selection has been interfered with, and to some extent overridden, by human culture and civilization. Consequently, some of the best information about our original nature can only be gleaned from studying our so-called primitive cultures. The biological view can also be instructive of moral nature without invoking is-ought assumptions. The work of primatologist Frans de Waal is full of good examples.
    Much of the work of Western game theorists and economists is actually discovering something more Western, and more cultural, and more typical of white undergraduates who sign up to be studied or experimented upon, than representative of humanity in general. Greed is not a given, and non-zero-sum games occur both in nature and in uncivilized societies. “I-cut-you-choose” remains the best strategy for peers. However, we do not inherit task-specific behavioral roles and programs. We merely have rough interpretive heuristics, guided largely by affective responses to typical situations. We build our repertoires of social schemas and scripts as overlays on these. Social intelligence is an ability to perceive or identify our social roles and behaviors, identify or recognize mental states in others (whether shared or disparate), and interact within social groups in a reciprocal meeting of needs. Our greatest challenges here include the fact that we haven’t had time to evolve native skills adapting us to current levels of population and social complexity.
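    The “I-cut-you-choose” procedure mentioned above earns its reputation: the divider splits the good into shares that look equal by his own valuation, the chooser takes whichever share she values more, and neither can walk away with less than half by his or her own lights. A minimal sketch, with invented valuations:

        # Cut-and-choose fair division, toy version. Each party values the two
        # shares by their own standards; the valuations below are invented.
        def cut_and_choose(divider_values, chooser_values):
            # The divider has already cut so the shares look equal to him;
            # the chooser simply takes the share she values more.
            chooser_pick = max((0, 1), key=lambda i: chooser_values[i])
            divider_pick = 1 - chooser_pick
            return divider_values[divider_pick], chooser_values[chooser_pick]

        # Shares worth (5, 5) to the divider but (3, 7) to the chooser:
        print(cut_and_choose((5, 5), (3, 7)))   # (5, 7): each gets at least half by own valuation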
    The notion of the ‘looking glass self’ (Charles Cooley, 1902) parallels the African Ubuntu philosophy, suggesting the self is largely constructed out of feedback from social interactions. We gradually learn to see and then define ourselves as others see us, and we shape ourselves according to the expectations and demands of others. This is just a piece of the puzzle. The false dichotomy of nature-nurture is alive and well in our current theories of attribution, and currently, nurture is enjoying a turn on top, while old-fashioned character is increasingly under-appreciated. It’s likely true, in terms of social behavior, that the environmental factors are more dominant in the majority of us the majority of the time. But those of us who rallied with Pinker (2002) for the death of the blank slate idea haven’t fully given up on the functionality of human nature and the individual’s character in overriding environmental influences. Thankfully, one of the surviving ideas is that criminals are still held personally accountable for their actions (however poorly that personal accountability might be implemented). At least we haven’t expanded the M’Naghten Rule to say that all criminals have been driven insane and are therefore unaccountable. As argued elsewhere, part of the problem here is what I’ve termed the “normativity bias,” a ruling out and disregard of the exceptional in search of human norms. We dismiss the higher-functioning individuals as too atypical for study, instead of asking what they have to teach us. This said, however, despite our inherited nature and neurological priming, even the most independent and exceptional people are extensively shaped by their culture.
    It’s claimed that the majority of information exchanged socially between individuals is non-verbal. Some give this a precise percentage, somewhere in the 90s, but those are the same people who use but 10 percent of their brains. Some evidence for the simpler claim can be seen in the performances of stage magicians and psychics doing cold readings. For most, the cues tend to be more subliminal but will often still get read. Mimicking other people’s speech and gestures can be a sneaky way to entrain them to our ways or make them like us more, but this happens naturally as people synch up and grow closer. And we can be made subliminally anxious or agitated when we pick up subtle dissonances with the behaviors we expect, giving us feelings of distrust or suspicion. We’re subject to all sorts of subtle cues that trigger or prime us for specific responses. We’ll pick up cues from gestures, postures, and micro-expressions that inform us without needing to rise into full awareness. We’re sensitive to implications embedded in the physical distances we maintain between us. Much is given to inferring what’s concealed and trusting what’s revealed. Physical movements other than yawning are sometimes contagious. We read emotions pretty well in the tone of voice and inflection in others, even among speakers of tonal languages. This may be an adaptation to the need to communicate at night. All of these play parts in our empathy and theory of mind, but get conceptually subsumed under intuitive sense or gut feeling.
    Our excitement over the recent discovery of mirror or cubelli neurons has abated somewhat, but this is still an important discovery. These will fire both when we undertake an action, and when we observe others undertaking the same action. Their presence (or that of similar functions, or the behavior of regular neurons in mirror networks) is suspected in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex. They are seen in primates, earning a monkey-see-monkey-do association, and analogues are suspected in other species capable of social learning. They help us in learning behaviors from others, and in reading the intentions of others. Cecilia Heyes proposes a theory that mirror neurons are the byproduct of associative learning and not evolutionary adaptations, but the capacity for learning to do this is itself a pretty impressive adaptation. It also doesn’t matter much if these neurons are connected on multiple levels in whole modules, or they merely partake on one layer in deeper neural nets. A broader concept, called the perceptual-action model (PAM), looks to more distributed neural functions using learned associations with others, and is especially concerned with the phenomenon of empathy. This supposes that our nervous systems evolved to map others’ mental states onto our own to infer their intentions and other states of mind.

Belonging and Behaving
    Our sense of community, tribe, or extended family is just as fundamental and important historically as the nuclear family. Etymologically, affiliation is being adopted as a son or daughter of the group. In most social species, the consequences of inbreeding have led to evolved protocols whereby members of one sex will leave the home group eventually, while having somewhere to go. Beyond this function, eviction from the group in which one has grown up entails an unnecessary risk of death, and an urge to avoid this at any cost. A wish to remain part of a group urges support of the various currencies of the community economy, especially trust, cooperation, harmonious behavior, mutual supportiveness, quid pro quo, and contributions of useful information. The need to maintain group harmony doesn’t mean that all parts need to be the same, but there are a number of things that a community needs to agree upon.
    Emerging into a responsible adult role in a native community will often entail rites of passage that emphasize the “more of you” that will be expected from you now. Joining a new community might also entail a test of character and worth. Sometimes an initiation will involve a high price, even ritualized abuse and humiliation, as an entry fee, as a demonstration and cementing of commitment representing sunk costs. It might also create a salient emotional experience that becomes a shared bond with other members. And it’s also a declaration of the group’s worthiness of such a sacrifice. In very few cases are you allowed to refuse to do that thing with the goat. In addition to rites of passage and initiation, some form of oath-giving is normally involved, an avowal of the sacred or core values of the group, to which even your own life may ultimately be considered subordinate. Unlike yourself, these values may be beyond price.
    Both the higher and lower status positions on a social ladder are places of higher stress, so much so that the majority (68.2%) seem content enough to stay within one standard deviation from the norm. Low status individuals can usually find enough social support to supplement their own resources and keep going. The stress for the leaders is usually on obtaining and maintaining their alliances and dominant positions, wherein power can be regarded in terms of having others assist in meeting their needs. There are, however, nobler examples of the highest ranks. There is more to this than the right to peck, thump, and discipline underlings, and service the harem, of course. The words dominion, domain, and dominance ultimately come from domus, the Latin word for home, so dominance really means taking charge within your own boundaries and keeping both order and peace. Order and peace have costs to their maintenance, but the settled or stratified hierarchy reduces the overall stress.
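    That 68.2 percent, incidentally, is nothing more than the share of a normal distribution lying within one standard deviation of the mean, recoverable from the error function. A quick check in Python, offered as illustrative arithmetic only:

        import math

        # Fraction of a normal distribution within z standard deviations of the mean:
        # P(|X - mu| < z * sigma) = erf(z / sqrt(2))
        def within_sd(z: float) -> float:
            return math.erf(z / math.sqrt(2))

        print(f"{within_sd(1.0):.4f}")   # 0.6827, the ~68.2 percent quoted above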
    Among humans, it’s been shown that we will be greedier and less socially pleasant when we feel powerful or wealthy, which might keep us politically stirred up by encouraging challengers to dethrone the worst leaders. Human beings still seem to lack intelligent ways to put their best people in charge. Occasionally, we’ll get a leader who can inspire loyalty and admiration with the ability to hold true to a higher purpose, principles, or character, though we’ve learned that such virtues are rarely inherited. This leadership is done by setting an example of compelling behavior, not by making examples and compelling others’ behavior. It’s more social than cultural or political, and the mirroring of exemplary behavior and imitative learning is more to the point than obedience. We might call this the difference between leadership and management (or tyranny). But the route to the top in human societies can be brutal enough to eliminate our best and most sentient candidates. A leader, by setting examples, and in demonstration of likes and dislikes, can alter the value of both things and behaviors. This is heavily abused in advertising, where successful people are shown increasing their well-being even further by using a specific kind of soap. Damasio (2000) offers, “the consequence of extending emotional value to objects that were not biologically prescribed to be emotionally laden is that the range of stimuli that can potentially induce emotions is infinite.” We want to learn to disentangle or deconstruct these associations to see if they really are a part of leadership or success.
    Down in the lower social ranks, life is simpler, even though “if you’re not the lead dog, the view’s always the same.” Impression management remains a great deal of work even here. Social identity is a careful balance of blending and standing out. The most basic and minimal strategy here is assuring others you’re not a drain on the community economy, or its currencies (once again) of trust, cooperation, harmonious behavior, mutual supportiveness, and useful information. As an honorable person, you’re not a thief or some other kind of crook, and not a threat to the children or elders. Above this, a majority would like to enjoy some kind of physical, social, sexual, and reproductive success. This means being attractive enough to get that halo effect that conflates being handsome or pretty with being good. Being attractive begins with being well, and being the right kind of clean and clothed. It’s going to help us a lot if we can tell a good story and demonstrate a sense of humor. We want manners that don’t offend. Some things must be displayed and some must never be. We want to express respect, appreciation, and gratitude. Of course, given these wants, the scariest thing is a publicly circulated, negative evaluation, which comes from being watched and seen, and talked about, in secret first. This means being on guard morally even when you think nobody’s watching, and this is where the below average person breaks down. Even conspicuous generosity and public accolades won’t make up for a failed reputation. This frequently means circling the wagons with other like-minded hypocrites and congregating in shared denial.
    The real tests of character are taken anonymously. We find them in random acts of kindness, and in “paying it forward,” to beneficiaries who may not be encountered again for repayment, and doing favors and giving gifts with no explicit quid pro quo. A much-misattributed misquote began its life saying: “The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give it away” (David Viscott, 1993). The teleology implied in the word purpose there can be reconsidered as an elective. Virtue ethics is a focus on how to be, not on what to do. This almost necessarily becomes a situational ethic, in danger of being lost when the rule of law (or the rule of lawyers) takes over. Heroes are thought heroes because they are exceptions. The more common condition is the cowardice of inaction when action is called for, and it’s the most readily excused by the public at large. Buddha said it was easy to be moral when everything is going your way. But some people side with the good only from lack of temptation. The default human social condition is compliance, complaisance, or complicity.
    At its best, a moral identity is distinct from moral obedience. Honesty is generally easier than lying: since there is less to remember, it’s the default condition. Honesty doesn’t begin as control of some deeper urge to lie; it merely steps up its game when an urge to lie appears. Pinker (2002) finds a moral sense built into our neural connections: “The moral sense is a gadget, … an assembly of neural circuits cobbled together from older parts of the primate brain and shaped by natural selection to do a job.” Frans de Waal has seen this in other primates as well. But the moral person is heeding both inner directives and following social mores. Importantly, as Pinker asserts, moral progress doesn’t require the human mind to be naturally free of selfish motives. There is a natural, evolved moral foundation that can be cultivated as a social ethic, either complementing or countermanding the strictly selfish. Before being corrupted by neoliberals, corporate persons, and manipulated markets, the old libertarian ideals of free markets, invisible hands, laissez faire economics, and enlightened self-interest could build moral societies, though only as long as the tragedies of the commons could be prevented and the most basic needs of the people met. From that fading perspective, the real point of liberty is found in the lessons we learn about duty and responsibility when we have no choice but to face the consequences of our own decisions. But we’ve made a real hash of that.
    Evolution naturally selects for supportiveness in both eldercare and child rearing, since the relationship between elders and children frees the tribe’s middle-aged adults to meet the other pressures of group selection. Some moral behaviors related to fairness are already seen in toddlers, and de Waal has shown that even capuchin monkeys are outraged by unequal pay. We are learning that a distaste for bad behavior begins to develop in infancy. Trust, mutual supportiveness, and some version of the Confucian “What you don’t like done to yourself, don’t do to others” seem to be the inherited moral core of any healthy community. It doesn’t require a great deal of maintenance either, until it gets broken. But then repairs can be a lot of hard work. Trust is a specie currency, requiring some backing.
    There are levels of dishonesty that are regarded as essential, as lubricants to any smoothly functioning society, like little white lies, flatteries, self-effacing gestures, and pretentious manners or etiquette scripts. Beyond this, cheating, fraud, and deception are common in nature for both acquisition of resources and territory, and for mating opportunities, even well into adulthood. Such cheating is usually limited by our ability to rationalize the behavior against our self-image as moral beings, and limited again by whatever calculations we make of the risk of being caught and punished. Theft progresses in stages until it’s stopped, from begging to fraud to stealing to robbing. Some of us are more savvy with these calculations than others, and many are given years locked up behind bars to review the development of their PFC’s inhibitory impulse management.
    It does help us a bit with our moral self-control that risk perception tends to be disproportionate, tied to both an overestimation of potential loss and the fear of social consequences. The emotional impetus to do crimes of passion or honor, for revenge or vendetta, or out of jealousy or rage, seems able to override self-management capacity in an always-significant percentage of the human population. Some of this will be due to genetically and epigenetically compromised neural development, and some due to the lasting effects of childhood adversity. Whatever the cause, belonging, or at least its full benefits, will need to be denied to these until solutions are found. And society will continue to support punitive actions taken against those who betray the public trust or otherwise disturb the peace. Most of us even seem to be born with a willingness to retribute iniquity, and even show a willingness to pay costs for the privilege of doing so.

Individuality
    We make a lot of our social decisions based on the feelings we anticipate having when others react to the behavior we wish to choose. We fear making errors in public. We fear rejection and subsequent isolation. These feelings of fear are so unpleasant in themselves that we will live lives that are structured to avoid the potential anxiety of feeling them, living in a shell within a shell. Reputation must be maintained against gossip, even though gossip need not have any basis in fact. Reputational damage has led to many a suicide. Just seeing ourselves out of step with the group has us questioning our security against public censure, rumor, or ridicule. Most of us never really outgrow the fears of relational aggression that allowed our childhood peers and classmates to torment us. Threats of ostracism, scapegoating, stigmatization, and name calling act to press us into a more homogeneous herd or mass. We are so concerned with being observed that even a pair of eyes painted on the walls of a store can deter shoplifting behavior. A simple suspicion of being supervised reduces cheating. Even the subliminal suspicion of social scrutiny has some power to direct our lives, and you can be certain that this is known and utilized by people who would direct us.
    Our need to belong doesn’t completely preclude or thwart our need to be individuals, although it will often severely constrain the extent to which we individuate. Even at the simplest levels of social organization, the extended family or tribe, where everyone gets to play generalist, we see the usefulness of diversification and specialization, and this opens a door to exploiting our individual differences and talents. Our interdependence fosters a need for the adaptive resilience that diversity offers, as well as the coherence that a shared identity offers. We have our innate perceptions of social roles and behavioral patterns to draw upon and develop. Sometimes roles are chosen early just as a matter of luck, but then they get reinforced and we wind up with these as social expectations that are difficult to break away from. We become known for being or doing this or that, and it might make others edgy if we dare to do that other thing, and so we don’t, even when we want to. Along with a need to accept some degree of individuation and individuality comes a need for the society to develop a corresponding tolerance, even a warts-and-all degree of acceptance. Some societies are better at this than others: beatniks and bohemians do pretty well, while the religious fundamentalists will kill or imprison the infidels and apostates if they can, because god is love.
    Individuality should always play at least some of the time at the edge of the envelope of belonging. Getting away with stuff, or asking, or testing what we might get away with, is most commonly practiced between puberty and the maturation of the prefrontal cortex somewhere around our mid-twenties. Some of us mature more towards the beginning of this period and many seem to never outgrow it at all. The theory of natural selection suggests that, for the benefit of the species, this may be the ideal time for us to die, when there is greater risk of us propagating more boneheaded idiots, although there is much collateral damage simply in unlucky youngsters and innocent companions to boneheaded idiots. During this period, our societies tend to be a bit more forgiving of violations of social norms, and even explicit challenges to these norms. Victimless assaults on the social order are more easily forgiven, since they violate no interests of others, beyond the interest of not being offended. Context dependency and situational ethics are more readily understood. Sins of omission are always more readily forgiven than sins of commission, but with maturity comes a much greater expectation of consideration for others and voluntary efforts to not omit others. Even in youth, however, harder lines are often drawn where crimes have real victims, or where the sacred values taken as the society’s most essential glue are compromised.
    Normativity bias is a term first introduced here for seeking the norm or center of the bell curve as the most valuable or useful information in a set of data, often ignoring what the exceptions have to tell us. Examples abound in psychology, where human norms are even used as the first measure of mental health, calling to mind the Krishnamurti quote, “It is no measure of health to be well-adjusted to a profoundly sick society.” Both positive and Eupsychian psychology seek the measure of health in the exceptional. However, the truly exceptional data points are almost by definition anecdotal, and so they tend to be dismissed as non-representative and unscientific. Examples of this bias can be found in conclusions drawn about the Asch, Milgram, and Stanford Prison experiments, and others like them, where a small percentage of subjects, say from one to three out of ten, will refuse to display the disappointing moral and behavioral characteristics exhibited in the norm. These can’t simply be dismissed as not the droids you’re looking for. This offers information that should be relevant to hasty conclusions drawn about fundamental attribution, nurture-over-nature, inequalities of character, and agency. What is it about this low percentage, and can it be taught? Or is the question: can the learned behavior that’s so disappointing be unlearned? Zimbardo, at least, suggests methods of unlearning this normalcy, and even offers instruction in heroism. We have good reasons to stop dismissing the non-normative anecdote and the exceptional individual. How, for instance, can we ever learn all that we can about the computational abilities of the human brain as long as we disregard the autistic math savant as being a non-representative sample lying outside the scope of our investigation?

Consensus and Diversity
    Shared beliefs, values, and practices hold societies together, and opinions are held and shown at the door like membership cards. There are undeniable social benefits to believing what others believe, or what others tell you that you must believe. When given a choice between being popular or accepted and being right, most of us will opt for acceptance. Because of the fears and anxieties over loss of a sense of belonging, consensus is self-perpetuating. Peer pressure names the primary mechanism. This doesn’t take hold until a behavior is encountered multiple times, or by multiple members, with the latter being more effective. A study published by Daniel Haun et al. (2012) reported that “2-year-olds and chimpanzees are more likely to copy actions when they see them repeated by three of their peers than if they see the same action done by one peer three times.” Solomon Asch, in his widely known 1950s conformity experiments, showed that, when faced with a unanimous majority giving obviously incorrect answers, around 75 percent of participants denied the clear evidence of their own perceptions at least once in order to side with that majority. Social forces like political correctness apply steady pressure in reaction to our speech, and more often than not will get us to succumb to self-censorship. Bandwagon appeal will force a good truth into compliance with democratic principles. But truth is not a vote, and consensus is wrong sometimes.
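    The Haun result describes what modelers of cultural evolution call conformist transmission: the pull to copy depends on how many distinct demonstrators are seen, not on raw repetitions. The toy model below is my own illustrative sketch, with invented adoption weights, and not the study’s analysis; it only makes the asymmetry concrete.

        # Conformist transmission, toy version: the probability of copying a
        # behavior grows with the number of DISTINCT demonstrators, while extra
        # repetitions by one demonstrator add nothing. Weights are invented.
        def adoption_probability(distinct_demonstrators: int, repetitions: int) -> float:
            _ = repetitions  # deliberately ignored: repetition by one model adds no new social information
            base = 0.2                                    # after a single demonstration
            boost = 0.25 * (distinct_demonstrators - 1)   # each extra model adds weight
            return min(1.0, base + boost)

        print(adoption_probability(3, 1))   # three peers, once each  -> 0.7
        print(adoption_probability(1, 3))   # one peer, three times   -> 0.2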
    Recall that professional wrestling has shown elements of pretending to be something it isn’t, and that this construction is called a kayfabe (pronounced KAY-fayb), a carnival term. It extends well beyond the ring into all manner of public activities, where participants remain in character outside the ring. Breaking character, or going off-script, can threaten a world-bubble that’s much larger than individual players, so it just isn’t done. Human society can be a lot like this, especially in areas like etiquette and protocol, and human culture even more so, particularly in subcultures like the art world, or the fashion world. These bubbles remain inflated, even when they make little practical sense, even when compliance might cost people half of their salaries. It’s a folie à plusieurs, a folly of the many. You go along with it, or you get out, or you make others uncomfortable and even angry. Normally these kayfabes are fairly delicate. They just don’t hold up well against laughter and ridicule, or apostasy. They require a relative unanimity, not some smart-ass kid asking why the emperor is naked. Where the stakes are higher, however, as with institutions constructed around the sacred values that form a society’s core beliefs and cohesive force, such humor and ridicule might be explicitly disallowed. Politics defends itself with patriotism and the threat of charges of treason. Religion, of late, has been more clever than that, and has somehow raised unquestioning faith to a higher moral station than critical thinking and knowledge, such that criticism, however well supported by facts, becomes high praise for holding to a still-higher faith, the evidence of things unseen, whatever the hell that means. Social ridicule is thus rendered unable to unseat the toxic schemas seated by peer pressure.
    We are born with tendencies to assume agreement, and to assume that our inner worlds are generally the same, even though we’ll still maintain some need to feel special, and perhaps especially well-informed. We tend to believe what we think others believe, while thinking they already believe as we do. By default, we tend to believe that other people agree with us. We hear more echoes than we hear critical feedback. We’ll tend to mainstream ourselves. These tendencies render the social consensus more or less self-perpetuating. We move like murmurations and bait balls. Without conceptual, behavioral, and technological breakthroughs too powerful to be questioned by the group, we tend to develop our societies slowly, and more randomly than not.
    We are neurologically, emotionally, socially, and culturally biased to seek out norms to imitate, as well as to notice, and try to somehow correct, those who violate these norms. It seems the default position is to be uncomfortable with the abnormal, such that most of our exceptions to these reactions must be learned. Young children learn what they assume to be social norms as they grow, and we can observe what seems to be an instinct to react negatively, even angrily, to peers and others who violate them. This might begin with gossip and teasing, and may only look like joking at first. But the pressures of bullying and relational aggression are no joking matter to the recipient, who often has to relearn more compliant behavior or suffer social isolation and life with diminished prestige or self-esteem.
    The term “domain (or universe) of discourse” refers to a set of entities under current discussion, together with an assigned lexicon or set of terms. In logic, it’s the relevant set of entities that are being dealt with by quantifiers. Here, it’s the context or reference frame that’s preestablished around a topic under discussion, often with a specific sub-vocabulary of the language in use. In scientific discussions, the vocabulary and taxonomy of that particular field is used, and if additional terms are brought in at all, they will be stipulatively defined for that purpose. In the social domain, we can interpret this idea more broadly to mean all of the forms of social intercourse, and not merely the vocabulary being used in discourse. The effect of establishing such a limited universe in social situations maintains predictability, comprehensibility, and order. But it does this at a cost, especially of true originality, outside-the-box thinking, and beyond-the-envelope exploration. We simplify our choices, but perhaps at the great expense of vastly better ones. We’re seldom so tight as T. H. White’s “Everything not forbidden is compulsory,” but we can lock a lot of options down and out. Noam Chomsky, talking politics, writes, “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum.” We edit the lexicons, the universes of discourse, the cultural discussion, and the evidence regarded as admissible, to preserve social consensus, while leaving a lot of our vocabulary outside the permitted usage. Constraint in our frames of reference, done for the sake of finding consensus, might leave us only with an illusion of diversity, but this will often seem enough to keep the masses contented, at least until something truly superior struggles its way through the membrane.
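    To ground the logical sense of the term mentioned at the start of that paragraph: the same quantified sentence changes its truth value when the universe of discourse changes. A minimal sketch in Python, illustrative only:

        # "All x are greater than zero" is neither true nor false in itself;
        # its truth depends on the domain the quantifier ranges over.
        def claim(x: int) -> bool:
            return x > 0

        positive_integers = range(1, 100)    # one universe of discourse
        all_integers = range(-99, 100)       # a wider universe

        print(all(claim(x) for x in positive_integers))   # True
        print(all(claim(x) for x in all_integers))        # False: same claim, wider universe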
    We human beings, raised to maturity with all the benefits to be found in the social, cultural, and linguistic domains, can be pretty impressive creatures. Without these benefits, we really aren’t much more impressive than the other apes. It’s unlikely we could ever reinvent fire or the wheel on our own, or even a simple stone tool more complex than a rock. We would even be hard-pressed to defend ourselves against any animal larger than a bunny rabbit. We really need each other. We just don’t need all of us, or anywhere near this many of us. Society, culture, and civilization function as a hive mind, an intelligent complex, with an agency of sorts, but missing both consciousness and a conscience. Except for the flickering few bright lights, it will tend to run on autopilot, so the members who would be better off taking charge, and directing the damned thing, wring their hands and ask instead, “Where is this thing taking us?” The military contractors tell us what the average soldier will look like twenty years from now, but nobody stands up to that and declares, “No. That's wrong. Instead, there will be peace.” The fallacy of groupthink regards pride of membership in a group as a substitute for thinking. Order becomes orders, uniformity becomes uniforms.
    Social worlds that run on consensus still behave as systems, however, and according to discernible characteristics, if not laws. They have moving parts that cannot be all alike. As in ecology, diversity translates to resilience and depth. Even the more rigorous religious faiths have inner and outer circles, distinguishing between the most committed and the general congregation or laity, and there are double standards and sets of expectations that accompany distinctions in responsibilities, privileges, and powers. Typical of many, the Cathar community drew a dividing line between the perfecti (renunciates) and credentes (lay believers). Expectations of purity or sanctity are higher for the committed. But greater rewards aren’t always awarded: the Master in the Zen temple might have to clean the latrine. Some specialization of labor is almost always welcome, particularly when this optimizes the exploitation of individual talent.
    Inevitably, diversity runs the risk of disagreement, and carries the threat of eventual schism. Societies can generally tolerate at least a small percentage of members living on or out past the fringe, like the shaman at the edge of the village, or the wildly inventive or prodigious eccentric. The utility of this may even drive the genetic frequency, or infrequency, of such individualistic temperaments. Somebody’s got to be on the lookout, watching outside the box. Somebody's got to stay up on the battlements, keeping a watch, though mostly just counting stars. Even strong disagreement is sometimes permitted, as with Gandhi’s Satyagraha, or Diogenes’ Parrhesia, as outspokenness, or speaking truth to power. The King had his Fool, who was free to speak his mind as long as it looked like jest. There always needs to be someone who refuses to drink the Koolaid, or else the society is in big trouble.
    We set standards for permissible levels of disagreement, and have multiple protocols for de-escalation, compromise, diplomacy, mediation, arbitration, and relationship repair. Confession and forgiveness then ease us back into belonging. We seem to have a general understanding that we have to pick our battles, although this seems to come with a poverty in understanding which battles to pick. More often than not, the correction of social problems, for both the group and its members, is like the arcade game of Whack-a-Mole. One-dimensional solutions are all that get proposed for multi-dimensional problems, or we fish for red herrings and let the snakeheads swim free. Of course, those who are the source of these issues are happy that things are this way. Then they can point to all those battles you failed to pick and claim that you’ve consented by silence. Society in general may not have considered the minority of disobedients and nonconformists respectfully enough.

Us Versus Those Other People
    If we are mighty, then woe to those not our kind, to whom we’ll be mighty unkind. The forces of our bonds make it difficult to identify with them and to appreciate their point(s) of view. This will be even worse where we use what we think they are to tell ourselves what we are not. We discount them across many measures. Ethnocentrism and xenophobia are the ugly flip side of our cohesion, and we too frequently add euphemistic labeling, dehumanizing terminology, name calling, and demonization to our cognitive appraisals, making fear and disgust our first affective response to the ongoing evidence of their existence. Zimbardo asks the question, “What does it take for the citizens of one society to hate the citizens of another society to the degree that they want to segregate them, torment them, even to kill them? It requires a hostile imagination, a psychological construction embedded deeply in their minds by propaganda that transforms those others into ‘the Enemy.’” This enemy must be de-individualized, rendered faceless and stereotypical. You don’t want to see them up close, as a true traveler might. A tourist, however, will pay a great deal extra to move through the foreign land surrounded by a shell of the familiar, and stay in fancy hotels that are just like those at home. Thankfully, such a shell will not fit in a backpack.
    The schemas and scripts of society can be analogous to those of the person, but writ a bit larger. The parts of society and their interrelationships even have things in common with the parts of the self, although it’s dangerously incorrect to assume that governments are the brains of the body politic. The group identity is a bit like the ego, and can be just as problematic. No matter where we go, or how cosmopolitan we become, there always seems to be an us that we care more for, and a them that we care for less. We are usually the good guys, naturally, by virtue of the geographical wisdom in our birth. As a group, we have schemas and scripts, models and languages, mores and customs in common. We don’t really know why those other guys have failed to adopt them, but that all seems mighty suspicious. We have esprit de corps, while they have jealousies and schemes.
    Our belonging to a social order doesn’t need to degenerate into a collective narcissism or exceptionalism. Not all forms of pride are bad, and pride of place will look after a place, and will keep it better maintained. Taking a little pride in being somewhat more moral or ethical than those other people might actually lead to better behavior. But an in-group really needs eyes outside the group to double check that this really is better behavior and not a delusional view.
    The larger societies get a much heavier ideological overlay, especially in communities large enough to provide anonymity. Where there are people you don’t know, just running around loose, there arises a necessity for laws, and the rule of law. Many even get stuck with lawgiver gods. Banding together gives us strength in numbers, even the might that makes right. Unfortunately, so does joining a gang, or otherwise hanging around with a bad crowd. These alliances are circuitous routes to feeling powerful, and we share them with other primates. They allow us to imagine that we can displace responsibility for our actions, or rationalize unequal treatment. After all, deus vult, and orders are orders.
    Collective agency requires shared beliefs, ideologies, schemas, and scripts, and we need these to get things done that we can’t do alone. The greatest of several drawbacks to collective action is that, individually, freeloading or free riding makes the most sense, while the pursuit of individual goals will lead to depletion of shared resources, or the tragedy of the commons. The available solutions at larger social scales require institutions, something to force the internalization of external costs, and some punitive approach, or at least one of enforcement, towards free-riders and exploiters. But unfortunately, these free-riders, at least the most successful ones, are often the ones who hold the power to make the laws. As our societies grow, we have additional problems related to anonymity and diffusion of responsibility that often demand such means of oversight as policing, tracking, and licensing.
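    The arithmetic behind free riding can be laid out with a standard public goods game, a generic textbook setup rather than a model of any particular society: whatever the others do, an individual keeps more by contributing nothing, yet everyone contributing still beats everyone defecting.

        # Public goods game: n players each hold an endowment; all contributions
        # are multiplied by r (with 1 < r < n) and shared equally. Generic numbers.
        def payoff(my_contribution, others_contributions, endowment=10.0, r=1.6):
            n = len(others_contributions) + 1
            pot = (my_contribution + sum(others_contributions)) * r
            return endowment - my_contribution + pot / n

        cooperators = [10.0, 10.0, 10.0]          # three contributing neighbors
        print(payoff(10.0, cooperators))          # contribute too:   16.0
        print(payoff(0.0, cooperators))           # free-ride:        22.0 (individually better)
        print(payoff(0.0, [0.0, 0.0, 0.0]))       # everyone defects: 10.0 (collectively worse)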
    The cosmopolitan ideal was first propagated by the ancient Greek Cynics and Stoics as a workaround, to help us outgrow our parochial or sociocentric in-group biases. The patriotism fostered by city-states and states really ought to be thought of as playpens for immature minds. Mark Twain recommended an ancient cure for this fearful immaturity: “Travel is fatal to prejudice, and narrow-mindedness, and many of our people need it sorely on these accounts. Broad, wholesome, charitable views of men and things cannot be acquired by vegetating in one little corner of the earth all one’s lifetime.” The value of leaving the dock or harbor for other lands was reiterated eleven times in the 3000-year-old Book of Changes: it’s “rewarding to cross the great stream,” to get larger frames of reference and other perspectives on life. If we really need an inferior people to look down on, to regard as an out-group, most definitely not-us, we can usually find plenty right here at home, often in the person of those who demand that we look up to them without even showing us why that makes sense. They can be the patriots, and we can be the Terrans.
    In-groups themselves will also tend to have sub-groups that are in-groups or out-groups to each other. Within a single society, gender, age, race, ability, socioeconomic standing, education, and moral character all encounter struggles for social power and influence. A democratic majority can be every bit as tyrannical as a tyrant without clear and enforceable constitutional or charter protections, leaving disadvantaged classes having to weave narratives of suffering before they can claim special considerations.
    Our in-groups no longer have to be geographic. Global communication has given us new access to the human data pool, and just about all the evidence we need to support or refute any position to most people’s satisfaction. We can usually find enough people out there now to make up any manner of group. Homophily is our want to bond with others of like inclinations. And because we can do this so easily now, we can come to believe that even the wackiest notions are widespread. This has both social and cultural downsides. We can tune out most of the human spectrum at will and actually reduce the diversity of data we receive. This has been called cognitive tribalism, and is also referred to as an echo chamber. We can join with Flat Earthers all around the globe on social media (or we can join just to troll and tease them).
    Moral disengagement is the worst final result of using out-groups to form an in-group identity. The dehumanization of other human beings, often from meaningless differences like accidental geography of birth, is right up there with the worst of our traits. It gives us war, genocide, torture, and the general embarrassment of patriotism. It’s a more complex process in the brain than simple dislike or disgust, and changing it requires more than the politically correct changing of names. At even the simplest level, the disengaged may claim, “they’re insured,” or “they’ll learn a good lesson from this,” or “they were given every chance to join us.” But this is the way of the lynch mob, the conquering army, the human military drone operating the mechanical military drone from thousands of miles away, and the idiot in the silo awaiting the orders to launch a nuclear weapon. It takes a lot of special cognitive reframing to regard such destructive behavior as being in any way morally acceptable. And yet those most willing to undertake this are the same people who most loudly proclaim obedience to a god who commands “thou shalt not kill,” and they come out the other side of this behavior without guilt, and still feeling morally superior. Go figure. Some do commit suicide afterwards, however.
    The specific anticognitives at work in the Social Domain are listed and discussed in Chapter 3.4, Cognitive Biases, Chapter 3.5, Coping Strategies, 3.6, Defense Mechanisms, and Chapter 3.7, Logical Fallacies.



2.8 - Cultural Domain

Idols of the Theater, Memetical Metaphors, Gene-Culture Coevolution,
Spandrels and Exaptations, Transmission, Narrative Form, Hive Mind,
Ideology, Persuasion, Signal-to-Noise Ratios

“We do everything by custom, even believe by it; our very axioms, let us boast of free-thinking as we may, are oftenest simply such beliefs as we have never heard questioned.” Thomas Carlyle, Sartor Resartus

Idols of the Theater
    Culture is the transmission, sharing, and learning of information between contemporaries, between generations, and between the dead and the living. It’s our common pool of discovery and innovation. It’s simultaneously the most impressive and the most horrifying thing about us. Without it, humans are little more than physically disadvantaged apes. It’s highly unlikely that an isolated band of us would be able to figure out fire, or even invent a primitive hunting weapon. Transmissible culture as we know it, together with at least some natural pedagogy, occurs in only a few other species, notably among primates, elephants, and cetaceans. Even if we include the cetaceans, whose languages we’re still far from comprehending, humans are the only species where culture assumes a role in individual life that’s even more predominant than the social. It takes a lot more than a village to raise a child: it takes an entire culture, full of ancestors, all the way down.
    We’re creatures of culture as much as we are of nature and society. Joseph Henrich has begun to sketch what culture has done for our lineage over the last two million years, and has ideas about what this might have done to and for us genetically. His accent is on the positive contributions. Whether or not our knowledge resides in human brains doesn’t really matter much anymore, as long as we can get to it when we need it. Individuals are increasingly ignorant relative to the rest of the world, or to the total database of humankind, which should add to our humility and gratitude when we can manage to learn from others. We’ll have no shortage of second opinions, should we ever bother to ask. We have evolved brains that never have to stop learning, as long as we can learn how and when to use them. This being a treatise on anticognitives, however, we will be concentrating on where culture has led us astray, on the propagation and circulation of error, with the stipulation that we could not begin to correct these errors without culture’s happier side. Part of this aspect has been called agnotology by Robert N. Proctor and Iain Boal, stipulated as “the study of culturally induced ignorance or doubt, particularly publication of inaccurate or misleading scientific data.” Vera Pepa speaks less cautiously: “Human culture has mutated into a sociopathic marketing machine dominated by economic priorities and psychological manipulation.”
    One example of our gene-culture coevolution is found in the longevity of individuals past a biological prime of around 35 years. In the social domain, the survival of grandparents would be favored until grandchildren are grown, or into the grandparents’ mid-fifties, since these two age demographics can fill each other’s needs for assistance and education, respectively, while the middle-aged are out hunting and gathering, or whatever passes for that today. As our culture advances, there is even more use for an elder’s continued survival: the elders carry forward a lifetime’s accumulation of culture that can be passed to young and adult alike. This becomes somewhat less important as a culture’s rate of change grows more brisk, but a core curriculum of cultural literacy is still needed. Even a portion of our expanded cranial capacity and our extended childhood might be attributable to the usefulness of the culture that we’ve been accumulating for the last 2 million years, as Henrich would have it, or 200,000 to 350,000 years, as the “sapient” hominin exceptionalists would assert. The common assertion that ancient lifespans were a lot shorter than they are now will continue to be misleading as long as infant and child mortality are included in the averaging.
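    To see how that averaging misleads, here is a toy calculation, with invented round numbers rather than historical data:

```python
# Invented, illustrative numbers only, not historical data: suppose 30%
# of a population dies in early childhood (around age 2), while those
# who survive childhood typically live to 60.
infant_share, infant_age = 0.30, 2
adult_share, adult_age = 0.70, 60

mean_lifespan = infant_share * infant_age + adult_share * adult_age
print(mean_lifespan)  # 42.6, the misleading headline "life expectancy"

# Conditional on surviving childhood, expected lifespan is simply 60.
# The low average reflects child mortality, not short adult lives.
```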
    Accumulated culture is subject to catastrophic damage. The fall of Rome left much of the world in a long dark age, although many of the older Western lamps were kept burning in the Islamic empire, in Al-Andalus and elsewhere. Devastating losses were incurred at the hands of the Christian zealots, who burned the Alexandrian library, and the tyrannical emperor Qin Shihuang, who burned China’s books so that history would begin with him. There were other great losses as well. We can never know how much was lost 74,000 years ago, when Toba blew. The lesson is to always make backups, and stash them in a safe place. We who are writing for the archaeologists digging through our ruins should remember this.
    Francis Bacon, in his early theory of anticognition or agnotology, wrote of the Idola Theatri: “Lastly, there are idols which have immigrated into men’s minds from the various dogmas of philosophies, and also from wrong laws of demonstration. These I call Idols of the Theatre; because in my judgment all the received systems are but so many stage-plays, representing worlds of their own creation after an unreal and scenic fashion. Nor is it only of the systems now in vogue, or only of the ancient sects and philosophies, that I speak; for many more plays of the same kind may yet be composed and in like artificial manner set forth; seeing that errors the most widely different have nevertheless causes for the most part alike. Neither again do I mean this only of entire systems, but also of many principles and axioms in science, which by tradition, credulity, and negligence have come to be received.”
    The cultural domain includes Gardner’s teaching-pedagogical intelligence, “which allows us to be able to teach successfully to other people.” Having this also entails knowing how to learn, so this domain embraces both how we learn and how we teach. It also advances us into Piaget’s formal operational stage of development. Evolutionary psychologists will place greater emphasis on innate and evolved capacities, but even where these are predominant, they are usually shaped by cultural influence. On the other side of the question, but not really in opposition, Nisbett (2001) shows that measurable differences in cognitive styles can be found between collectivist and individualist cultures, marked by field-dependence vs. field-independence, and holistic vs. analytic trends, respectively. East is East and West is West. Common rules of thought are to at least some extent born of culture. Culture has an enormous impact on our emotional lives as well. We’re taught what to value, what to need, what to want, how problems are solved, and how to react to being less than satisfied, at least until we can start to teach ourselves. We’re bequeathed the yardsticks and measures with which we’re supposed to do our appraisals. And these are used to mold and manipulate us by those who know how to do so, to serve their own interests, with too little regard for ours.
    Social constructionism is more of a theory of culture than one of the social domain. According to this, we will create models of the social world and then reify them through our cultural exchanges and linguistic descriptions. These models become sources in themselves of meaning and value, and even the sense of what’s real and what isn’t. These models vary between cultures and the variations can be wholly incompatible, like the divine right of kings and democracy, or else can require translational transformation, like AC and DC current. Models are upheld by consensus, and wherever hopes and dreams are pinned on them, they are desperately defended. Leaning to one obvious side of the nature-nurture debate, social constructionism may put too much faith in our ability to create a reality that can withstand the consequences of ignoring our evolved biological nature. It will rationalize as much as it can of these consequences, often citing a deeply flawed nature that must be overcome with renewed and amplified cultural vigor. Still, our shared assumptions about reality set the stage for most of our interactions within the culture, and we will invite social consequences in challenging them. Even worse, we freedom fighters may be forced to be creative, make up our own meanings and values, and hope that others might still befriend us.
    The subject of language is discussed in the next chapter, as though it were its own domain, even though it’s clearly an extension of the cultural, and also the major mode or medium of cultural transmission. This includes ordinary spoken and written language, specialized lexicons, formal logic, mathematics, musical notation, taxonomy, correlative systems, nested analogies, and even the systems of divination with finite numbers of vocabulary elements. There remains much controversy over the relationship between language and the evolution of both culture and the brain, and the role of language in shaping more ancient modes of perception and cognition, and this will be discussed there. However, conceptual aspects of linguistically structured and accessed thinking, such as those of ideology, will be discussed here in the cultural domain, where they haven’t already been presented earlier. Precursors of the spoken and written languages that we know today, including mimicry, facial expression, gesture, vocal signaling, sensory and conceptual metaphor, and symbols or general semiotics begin in the native domain, but are developed in the social.

Memetical Metaphors
    Meme theory, or memetics, is an extended metaphor or analogy that can be somewhat useful in understanding our cultural evolution. The word meme, coined by Richard Dawkins in 1976, is an analog of the gene, and is defined as the smallest replicable unit of culture. This is a non-living, non-conscious, opportunistic replicator that symbolically infects a host’s mind, directing the host to replicate its pattern. It isn’t a meme if it just sits there. Examples of memes include slogans, words, catch-phrases, melodies, icons, inventions, logos, and fashions. They can be transmitted, always from person to person, through any compatible medium, although they can also be held in cultural storage for extended periods. They are literally communicable, i.e., spread by communication. Memes compete, mutate, and respond to selective pressures. They have epigenetic analogs. At a minimum, memes can be as real as other entities in the emergent world of awareness, as mental objects, although they can also be individually paired with specific neural interconnections. It’s important to point out that meme theory isn’t science, even though it comes from scientists and is described by some explanatory science. It’s merely a template, an analogy, an exercise in correlative thought, and a source of ideas.
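    Since the analogy is explicitly computational, it can even be played out as a toy simulation. The sketch below illustrates only the replicator logic of imitation, selection, and mutation; the meme names, “catchiness” weights, and mutation rate are all invented for the illustration:

```python
import random

# Toy memetic selection: hosts imitate one another, biased toward
# "catchier" variants (selection), with occasional copying errors
# (mutation). All names and numbers here are invented.
random.seed(1)
catchiness = {"meme_A": 1.0, "meme_B": 1.5, "meme_C": 0.5}
population = [random.choice(list(catchiness)) for _ in range(1000)]

for generation in range(50):
    weights = [catchiness[m] for m in population]
    # Each host copies a model chosen in proportion to catchiness...
    population = random.choices(population, weights=weights,
                                k=len(population))
    # ...but about one copy in a hundred is garbled into a random variant.
    population = [random.choice(list(catchiness))
                  if random.random() < 0.01 else m for m in population]

for meme in catchiness:
    print(meme, population.count(meme))
# The catchiest variant comes to dominate, with no regard for merit.
```

    Note that nothing in the loop asks whether a meme is true or useful; differential catchiness does all the work.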
    Memeplexes are bodies of memes organized as schemas, but they also show self-replicating behavior. They consist of memes that replicate together, coevolve, and coadapt. If the meme is the code entry, the memeplex is the subroutine, and the schema or script is their software or program. Extraneous memes, not fundamental, or even relevant, to this coherent configuration, can still attach themselves to successful memeplexes and hitchhike or piggyback their way to an unearned success, or they can eventually become conjoined like life’s mitochondria and chloroplast endosymbionts. Jack Balkin notes, “memes form narratives, social networks, metaphoric and metonymic models, and a variety of different mental structures,” and adds, “some meme complexes may act like cultural patches that allow people to work around the deficiencies of their hardwired heuristics.” Memeplexes can speciate or schism, as we saw in Martin Luther’s Reformation. They can also be taken apart to salvage their more valuable pieces, as we see in eclecticism. And they can be remixed or recombined, as we see in syncretism.
    Henrich stresses how important mimicry is to cultural transmission, and places special importance on the role models provided by the successful and prestigious among us. Bait or promises are important parts of a memeplex’s reproductive strategy. Bait will often appeal to baser instincts using hooks, hot buttons, fear and anxiety, revenge, insecurity, dragons to chase, crowd enthusiasm, or freedom from culpability. There are other biomemetic [sic] tricks as well, camouflage and stealth for instance, and even trickster memes like decoys or the Trojan horse. Pseudoscience might be called an example of camouflage, but will betray itself by saying things like “scientists believe…” or “science proves… .” In spreading, success will beget success. We watch others copying the thought of the prestigious and so we come to hold them in still higher esteem. Some memeplexes will employ adversative transmission, moving those who hold them to sabotage or attack competing complexes. Many adversative replicators will even grow, or at least rally and consolidate, by creating adversaries or enemies. There may be dire threats regarding what happens to expats, traitors, infidels, and apostates. These are common in both politics and religion. To some extent, we see it even in polemical arguments in science and philosophy, especially wherever needless false dichotomies are set up. The harm done in adversative replication will tend to be invisible to participants.
    In the end, the success of memetic replication comes down to its fitness, when properly understood as “adapting to fit a niche.” The success of any non-sustainable program or script is necessarily temporary, as unsustainable behavior must by definition extinguish itself. However, the time frames involved can extend beyond any perceived horizons, and so the consequences remain unseen by the nearsighted. Memeplexes can thus bring about their own end without this being recognized. Shorter time frames are seen in memeplexes that are more obviously autotoxic or self-destructive, as with suicide, martyrdom, military sacrifice, honor killings, and the victim or disease mentalities. Others may not be fatal, but they diminish the quality of life to painful degrees, notable examples being the setting of land mines and female genital mutilation.
    Toxic memes will move to preempt or disable their competition, regardless of merit. We can also just call them bad ideas, where bad is both contagious and destructive. The sexiness or seductiveness of toxic ideas comes with social diseases for which prophylactics may be warranted. Toxic memes interfere with or implant obstacles to future learning processes, or preempt meaningful dialog. “I am the way and the truth and the life. No one comes to the Father except through me.” They may just poison the nearby wells. Sometimes they are able to twist new information around to fit their own mold. Vaccimes, or immunomemes, confer resistance to these. Common examples are scientific method, skepticism, fact checking, logical analysis, and demand for proof. Laughter is sometimes another. To beat the analogy even more to death, we also have the antivaxxers out there, warning us to not be judgmental because every idea is true in its own way.
    Immunomemes can be overused. A knee-jerk or overreactive cynicism, or even just an impatient skepticism, can be just as conducive to ignorance as gullibility. It’s an autoimmune disorder. It doesn’t bother to first understand the thing it’s reacting against, or to see that an issue might be fuzzier than a black-and-white or false dilemma. It will often forego reading comprehension in its haste to reply. We see this a lot in the losing battle against ignorance, as in hypersensitized social media forums dedicated to debunking misinformation and pseudoscience. Uprooting the memes and memeplexes that are already implanted is a more difficult process, and more cognitively costly than vetting new information before we admit it. This only justifies a portion of a conservatism or orthodoxy that rejects all new memes out of hand. Some of this conservative rejection is required by the nature of culture, and demand for proof is one form of vetting.

Gene-Culture Coevolution
    With that said about memes, culture itself is the best analog of evolution, with subcultures standing in for genomic variants, and failure to meet a people’s needs as negative selection. Group cooperative skills, the exchange or appropriation of new information, and a prestige bias for the informed, all represent adaptive intelligence. Like evolution, a culture will be a cumulative project. While culture informs and helps to develop our individual minds, our brains change a lot more slowly. Linguistics even has analogs to predictable rates of genetic drift. Genetic evolution is still happening, and has probably even stepped up its pace, despite our ability to remove so many of the old (and still desperately needed) selective pressures. Our cultural evolution has introduced forces of its own, selective ones that can kill us in new ways, and positive ones that affect sexual selection.
    Cultural changes alter our environment, which in turn affects both epigenetics and the selective pressures on our genes. But to have this effect, the changes to the environment and our interactions within it have to persist for a large number of generations, as they’ve long been doing, for instance, with social learning mechanisms, fire, tool use, clothing ourselves, cooking and food preparation, agriculture, urbanization, ritualization, and not least, language. Prehistoric practices of communicating around a fire at night may have already had effects on how we listen to others’ tones of voice, as a substitute for reading facial expressions, and in how we favor learning by narrative. The array of adaptive strategies that a nomadic tribe develops allows it to occupy new environmental niches and learn even more adaptive skills. This has long since, though gradually, turned the human species into a generalist, while at the same time, it’s resulted in adaptations expressed as racial differences, a process that could have led eventually to speciation. As far as our racial diversification has gone, it may account for less than ten percent of the genetic differences between any two individuals, even though this did have significance in terms of developing new adaptive strategies and strengths. But ours was a false start towards further speciation, and already those differences are beginning to disappear again with globalization and racial interbreeding. Viva la raza, long live the race, and outbreeding enhancement.
    We see the beginnings of gene-culture coevolution in some animal species, as with learned diet and feeding preferences, nonverbal communication and signaling, migration patterns, foraging and hunting strategies, and sexual selection (I want that new kind of female, without the tail and all that fur). On coevolution, and contrasting memetics with genetics, Andrew Whiten (2017) offers: “Social learning and transmission provide the inheritance element and human invention the variants, the most successful of which are transmitted to future generations, generating cultural adaptations to environments around the world; and progressive, cumulative cultures show immense regional differentiation… [However, in culture] transmission is not only vertical, as in genetic inheritance from parent to offspring, but can be horizontal, between unrelated peers, or oblique, from unrelated individuals in the parental generation; moreover, because this involves neurally based learning rather than genetic change, such transmission can be quite rapid…. Social information may be gathered throughout ontogeny and indeed across the lifespan, and in interaction with individual learning and practice, it can thus permit iterative and flexible forms of adaptation as circumstances change.” So there’s even room in here for Lamarckian evolution. Further, horizontal diffusion of alternative schemas between non-relatives (like unrelated memes and memeplexes) is much more predominant in cultural evolution than horizontal gene transfer in nature. And so is the transmission of toxic memes and other pathogens.
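    Whiten’s point about speed can be made vivid with a toy comparison, again with invented parameters:

```python
import math

# Invented parameters, for illustration only: a variant spreading only
# vertically (parent to child) can at best double its carriers once per
# human generation, roughly 25 years; spreading horizontally (peer to
# peer) it might plausibly double once a month instead.
pop = 1_000_000
doublings = math.ceil(math.log2(pop))  # ~20 doublings from one carrier

print(doublings * 25, "years to saturate, vertical-only")  # 500 years
print(round(doublings / 12, 1), "years, horizontal")       # ~1.7 years
```

    The arithmetic is trivial, but it shows why horizontal and oblique transmission decouple cultural change from the generational clock.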
    Geert Hofstede, a 20th century Dutch social psychologist, developed a six-point “cultural dimensions theory” that included a power-distance index (the strength of social hierarchy, stratified vs. egalitarian); an individualism vs. collectivism index (relative self-importance and the size and number of social groups); an uncertainty avoidance index (the society’s fear or tolerance of ambiguity and diversity); a masculinity vs. femininity index (whether favoring assertiveness or nurture); a long-term vs. short-term orientation index (relationships to tradition and change); and an indulgence vs. restraint index (immediate gratification vs. a willingness to defer). Clearly this theory spans both the social and cultural domains. We can look at this now, through a lens of Henrich’s theories, in terms of cultural trends and tendencies that could have lasting effects on human social trends and consequent genetic evolution via selective pressures, including sexual selection. If we did this, we might consider adding a rural vs. urban living index, and one for the cultural aspects of tropical vs. temperate zone lifestyles, and yet another one for r-strategists vs. K-strategists.
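    As a data structure, Hofstede’s theory is just a six-field record, which might be sketched as below. The field names paraphrase his indices, which are conventionally reported on a roughly 0-100 scale; the scores shown are invented placeholders, not published values:

```python
from dataclasses import dataclass

@dataclass
class CulturalProfile:
    """Hofstede's six dimensions, on a rough 0-100 scale."""
    power_distance: int         # stratified (high) vs. egalitarian (low)
    individualism: int          # individualist (high) vs. collectivist (low)
    uncertainty_avoidance: int  # fear (high) vs. tolerance of ambiguity (low)
    masculinity: int            # assertiveness (high) vs. nurture (low)
    long_term_orientation: int  # long-term (high) vs. short-term (low)
    indulgence: int             # gratification (high) vs. restraint (low)

# An invented profile, for illustration only. The speculative additions
# above (rural vs. urban, tropical vs. temperate, r- vs. K-strategy)
# would simply be three more fields.
example = CulturalProfile(power_distance=70, individualism=25,
                          uncertainty_avoidance=85, masculinity=40,
                          long_term_orientation=60, indulgence=30)
print(example)
```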
    Lotem et al. (2017) write, “When humans and other animals make cultural innovations, they also change their environment, thereby imposing new selective pressures that can modify their biological traits. For example, there is evidence that dairy farming by humans favored alleles for adult lactose tolerance. Similarly, the invention of cooking possibly affected the evolution of jaw and tooth morphology… .” We might even add evolved brain growth to that. They continue, “Culture exerts selective pressure that shapes learning and data acquisition parameters, which in turn shape the structure of the representation network, so that over evolutionary time scales, brain anatomy may be selected to better accommodate the physical requirements of the learned processes and representations.” They take this a little farther than some are willing to go in the evolution of cognitive mechanisms. Given the typical rates of change in genomes, even in the midst of punctuating events, and even with the powerful adaptive advantages that our newer cognitive mechanisms confer, it’s still hard to distinguish where wetware ends and software begins. That’s especially true for language. But we can be pretty certain that the issue isn’t black-or-white, and at least some structural modification of the human brain has already taken place. It may not be massively modular, however.
    Henrich points to the crossing of a Rubicon into culture-gene coevolution that roughly coincided with the rise of h. erectus and h. ergaster. It’s at this point that “cultural evolution becomes the primary driver of our species’ genetic evolution.” This Rubicon happens to agree with my own theories that add punctuated evolution to the gradual, here with a Yellowstone event 2.059 mya. For the false dichotomy folk, this doesn’t mean that genetic bottlenecks created these species, merely that more adaptive and cooperative behavior would be more conducive to survival through this period. This would happen again 639 kya (h. heidelbergensis), and again with Toba 74 kya (behaviorally modern h. ignoramus). Something else of significance may have happened around 350 kya (h. neanderthalensis and rhodesiensis), although punctuation need not be geologic or climatic: a disease could do it, or a better mastery of fire, or weaponry. We took another big step with agriculture and urbanization, circa 10 kya, that’s also having some ongoing genetic effects. Henrich suggests, “Once these useful skills and practices began to accumulate and improve over generations, natural selection had to favor individuals who were better cultural learners, who could more effectively tap into and use the ever-expanding body of adaptive information available.” Elsewhere, he adds, “Many of our cognitive abilities and biases make sense only as genetically evolved adaptations to the presence of valuable cultural information.” Our genes are learning how to learn better from culture. He claims this evolved a new kind of role model, founded on prestige, and thus persuasive ability, that offered an addition and alternative to older role models based on dominance.

Spandrels and Exaptations
    Coevolution doesn’t mean that faculties will evolve to meet the needs of ongoing developments in culture, even when given ample time to do so. Alternatively, it may be that the cultural element itself will evolve to fit the currently evolved structures of the brain (and here, should we will it so, there may even be a teleological element). Cultural software may occupy parts of the brain with unrelated functions but compatible processing requirements, inputs, and outputs, and it need not displace the original evolved functions. It may also exploit configurations that evolved with no function at all, but which are nevertheless available for exploitation. Stephen Jay Gould called these spandrels. After some useful piece of cultural software has occupied and exploited these neural structures, the success of the exploit in conferring adaptive traits and advantages might reinforce future neurological mutations that improve its function here. As François Jacob wrote in 1977, “evolution is a tinkerer, not an engineer.” The function of a structure, a genetic configuration, or a trait can itself shift and evolve during its history, as feathers that evolved as insulation are retasked for flight. This is called exaptation. The shift in function of a trait is called cooption.
    Evolution has to work with what it’s already come up with. It can’t just say oops, then go back and start over. Cetaceans can’t go backwards and evolve hands. There is a ratchet effect that necessitates that solutions to adaptive problems be developed out of existing structures. Artificial cultural solutions, representing prepackaged attempts to solve multiple problems at once, might be likened to kluges, clumsy, inelegant solutions used as temporary measures until the right program is found. These may access and attempt to intertie multiple neural or psychological processes. Religion can serve as a perfect example of a kluge. It attempts to package a number of makeshift solutions to several psychological needs and wants: a sense of social belonging, comfort of ritual, security of identity, security of belief, non-threatening versions of altered states, moral regulation, rationalization for command or obedience, rationalization for the world’s lack of justice, and reassurance about death. We are much better people when we can feel reverence and gratitude, sense unity, grant forgiveness, and maintain an equilibrated, peaceful state of mind. We are a long way from finding the perfect replacement for such a kluge. We are so far away, in fact, that we might want to reflect on the reasons why religions have failed us so badly in matters of morality, integrity, character, and adaptive cognition. Religion’s much-vaunted improvement in morality is clearly a failure. Moral concern only seems to translate reliably to moral performance within the in-group or congregation itself, and that only where hypocritical gossip and backbiting are lacking. In fashioning replacements, we might rethink our strategy and address these needs and wants one at a time, but well, instead of all at once and poorly. Then we might look back with wonder on the days when religious zealotry wasn’t regarded as a mental illness, requiring intervention, confinement, and quarantine.
    The above applies to just about any evolving cultural structure that hasn’t had enough time to adapt with fully genetic solutions or massive modularity. What constitutes sufficient time will, to some extent, be a function of the strength of any adaptive advantage being conferred, and the mortality rate from selective pressures such as intergroup competition and environmental changes. And, of course, this progress is normally gradual and sometimes punctuated. Radical changes in environment and lifestyle will certainly speed up the process, and this includes a number of changes undertaken since we became h. erectus and his various subsequent heirs. Language is probably the most interesting example of an ongoing gene-culture coevolution, but that’s for the next chapter.
    The last 2 million years of prelinguistic, but still-semiotic developments in nonverbal communication have given us the first big break in becoming a culture-dependent species, which perhaps even necessitated the evolution of language. Other punctuating events would likely include signaling and other communication skills, tool manufacture, control of fire, weapon development, clothing, containers, extreme migration, and later, animal husbandry, farming, urbanization, imperial conquest, and warfare. Extreme adaptive opportunities conferred by successes here, and the extreme group-selection disadvantages in competitively failing at these, would certainly tend to move everything along. Everything, that is, except the background rates of genetic mutation. Theories of massive modularity have been competing for some time with theories of developmental plasticity and connectionism. Nobody’s winning because it’s really just a little of this and a little of that, instead of everything or nothing at all. We know that evolution moves slowly, and we know that it hasn’t come close to stopping.

Transmission
    People aren’t always furnished with optional ideas and then encouraged to choose between them, especially during our formative years. Even when we can see options, we’re more often told what and how to choose. Why we choose the way we do is of interest to social and cognitive science. Much comes down to the personal relevance or attractiveness of the prospective input or idea, which is largely about the promises it makes, or the bait and the hook when the promise lacks authenticity. We don’t always learn to watch out for tricks and deception here. Repeating an earlier statement on escalation of commitment and the sunk cost fallacy, Twain reputedly wrote, “It’s easier to fool people than to convince them they have been fooled.” Children, throughout much of their extended childhoods (until they can fall in with the wrong crowd), are at the mercy of adults and their pedagogy. As amateurs at living, particularly at parenting, our transmission practices (like teaching and role modeling) and our knowledge of acquisition strategies (like imitation and investigation) are fumbling at best. Henrich, in pointing out how we have come to make better use of role models based on prestige in addition to competence, stresses the importance of CREDs, or CRedibility Enhancing Displays. These may be costly demonstrations for wannabe role models, but the effects in the transmission of value are hard to beat. Just shy of that, not quite as expensive, not quite as rewarding, but usually pretty effective, is simply being a decent person living an exemplary life. That can be done for its own sake as well.
    The horror that is childhood adversity isn’t fully grasped yet. While this may smack of social engineering, required parenting classes and parenting licenses might not be such a bad idea. Or instead of requirements, society and culture could provide more incentives for better-educated parenting. Poverty and trauma can have lasting effects on neural development, multiplying the effects of our already deficient skills at child rearing.
    Adults provide role models for values and behavior and guard the gates to experience. Systems of public education are prone to sweeping new fads every few years, but they usually return to a focus on the mass production of normal adults, those fungible replacement parts for the great machinery of civilization. This plays to the cultural imperative to mimic each other in order to get along, while allowing only as much diversity and individuation as the culture can tolerate. We will certainly function better culturally if we share a core of cultural literacy and a more-or-less common language. But inadequate attention seems to be given to discovering and holding ourselves to the minimum cultural core that we need here. Beyond this minimum, there’s just too much to explore as a group. We need to split up more, to diversify into investigative squadrons, and set off to explore other ways of living, other cultures, other schemas and scripts, to maintain our diversity and adaptive resilience. To do this, we need better tools for critical thinking and affective self-management. If we can learn this, we will have no need to be told what to think or want. But the machinery of civilization itself might need to be adapted to this, and that's a lot of resistance to change.

Narrative Form
    Our brains are more natively accustomed to narrative than to reasoning, and a much larger percentage of us can participate in it. Storytelling, discussed already in the native domain, is a major mode, especially if the idea of narrative is drawn broadly enough to include rumor and gossip. In part, it gets its transmissive power from taking the same pathways through time as our experience does. The processing is a close cousin to real life experience. There may be no hard or sharp line dividing this from myth or legend. That might be just a measure of how much hyperbole has been added, and the suggestion that it somehow explains something mysterious or universal, and of course of how readily the story is accepted by the culture as something worth passing along. Culture is loaded with myth and folklore. We’re immersed in stories, nested in bigger stories. We trust our storytellers so much that we almost always allow them at least one conceit, even if that be time travel, dark energy, or a creator deity that lives in the sky and looks like your grandfather. From childhood, the storyteller simply had to be taller. Adults just look for other kinds of height, as from a dais, podium, or pulpit, or just big old gods. Our stories don’t lend themselves to reason or rational analysis while they are being told. The narrative is usually moving along too quickly, by design, for logical sidebars. Instead, we maintain our emotional involvement in the story, along with whatever conceit is in it, suspending doubt or disbelief until after the story has been told. You can react skeptically as you go, but then you miss the story. By the time the tale is told, you have a memory, along with affect and all. And it isn’t really common practice to dig it back out again simply to disbelieve in it. And when you hear that story repeated, as in Sunday school, these stories can take on the mnemonic depth of other, first-hand, personal experience.
    James Paul Gee writes in a 2017 blog: “The human mind is built to care much more about meaning—feeling that things make sense—than about truth. Humans seek stories that make them feel like they matter and they will revel in these stories—even if they are untrue or even if they are dangerous to others—if the stories give them comfort. This is a dangerous situation in a pluralistic society where we then end up with warring ideological tribes. In reality, humans are best served—and down deep know they are best served—by stories that are both meaningful and true. The salvation of a civil society is ‘storied truth’: deep, true things that make sense of the world in a way that empowers people as agents and participants in their society.” This poses a great challenge for storytellers with a conscience and a sense of deep time. A lot of science fiction and fantasy authors have been working consciously at this for a while now, trying to author a better world. The genre isn’t all about fantasy and escape.
    Telling a good story is a service to culture, and to any individual who can grow by it. Some stories, to be sure, are toxic. Abraham is celebrated for his willingness to sacrifice his son Isaac. It isn’t as well known, but the name Isaac meant laughter, adding a second layer to the horror that’s celebrated as a good thing, and not at all insane. Religious myths will rationalize inherited moral sentiments, including the maladaptive ones, giving them contexts and examples, until later concretized by more formal religious dogma. Aesop taught many a good moral lesson, long before someone thought to add the gratuitous moralizings to the endings. Parables and anecdotes that identify recurrent and archetypal life situations can furnish us with general tools of understanding. Seldom is this made better use of than in the teaching stories of Sufism, Daoism, and Zen (three faiths that still have some humor intact). Lessons about the spirit, or at least spiritedness, are learned on the secular level, too, as with Brer Rabbit and the Tar Baby, and the Briar Patch, and elsewhere with The Blind Men and the Elephant and the Emperor’s New Clothes. These stories give us a stock of plots, characters, behaviors, and strategies to keep as close at hand as though we had lived them ourselves. These story plots and scripts help to provide expectations and predictions, standard outcomes to analogous situations.
    We can also draw wisdom from pithier comments, quotations, aphorisms, and proverbs, tidbits too brief to form full narratives. Unless we’re careful, however, our fondness for these can bleed over into parroted, unquestioned truisms, platitudes, and deepities, like “All that is comes from the mind”? Really? The world we know is created by our neural processes, but there’s also a world to be known and shared. And many of the individual views of this world to be shared are irrelevant, wrong, or ridiculous in an absolute sense. Sometimes this can only be asserted by being rude. People repeat nonsense unthinkingly and often. It’s been pointed out that the jackass who wrote “there is no meaning outside the text” probably followed his wife’s grocery list fairly literally. Some others include: Judge not, lest ye be judged. Everything is relative, so nothing is true. Everything happens for a reason. I can’t fix the world, I can only fix myself. Faith is the evidence of things unseen. We even have super deadly ones: Heroes die for their country. My country, right or wrong.

Hive Mind
    The term hive mind is misleading wherever it implies a higher intelligence, intention, sentience, or consciousness. Like an ant colony, a corporation, or a human government, it acts as though it had awareness and purpose, but it’s merely following orders. There are good things to be said about this pool of information, and about how it creates solutions to ecological or systemic problems. The interconnectedness in a large population acts in a brain-like manner with a sort of intelligence far exceeding that of individual members. It has weaknesses, such as not knowing where the hell it’s going, and yet the future can still be extrapolated when we study the rules by which it operates. In these rules lie the secrets about how to point the damned thing in better directions, but so far, those who know these secrets tend to use them for short-term personal gain instead. The massive majority can be herded into ideological corrals and their behavior influenced by emotional manipulation. To really change things means taking charge of both ideology and methods of persuasion, but neither of these will work without access to the multitude, ideas the people will pay attention to, and an ability to satisfy what the public considers its emotional needs. Good leadership, and its role models, need money, prestige, and charm to compete here. The prospects for this are unsurprisingly bleak.
    One of the great weaknesses of culture, perhaps at all levels above that of the tribe, is found in the assumption that the group is somehow possessed of a group mind, a political will, sovereignty, and agency, while the anonymous individuals are simply along for the ride. Populus vult decipi, that the people want to be deceived, may still hold true as far as perceptions of the public will are concerned. The individual submits to and obeys the higher order or mandate, which has neither mind nor conscience. He becomes an instrument, and somehow is relieved of responsibility for the actions of the collective. This is what Stanley Milgram calls an agentic state, although he damages the term agentic in the process. We wind up with followers citing the Nuremberg defense, “Befehl ist Befehl, orders are orders.” But when the society can successfully blame an adverse childhood for a tyrant’s atrocities, the buck never stops. When they are doing god’s will, the blame falls on a long-dead, hallucinating prophet. Who will take charge?
    We aren’t highly evolved just to socialize with each other. Like ants passing ants, “we frisk each other for links” (Martin Plimmer & Brian King). We live in a world of reflected lights, like a big house of mirrors, where the fact that nobody’s in charge is usually what’s in charge. We’re largely path dependent, where our choices depend on decisions already made, many made long ago. This isn’t trail blazing; it’s mostly just tourism. We don’t develop our network of the paths we depend on in any rational manner. We tend to bring the missteps and mistakes, the extraneous and frivolous stuff, the analog of junk DNA, along with us on our journey. Henrich does point out that there may be unseen synergies in the frivolous-looking parts of the rituals that we pass down unexamined, instead of picking out the parts that make rational sense. Psychologists have studied the when and why of people’s willingness to copy the seemingly irrelevant steps used by another to get to a reward. We could apply this to the way language accumulates junk and annoying elements, or the way a religion picks up irrelevant bits of ritual. These practices and beliefs are often (implicitly) MUCH smarter than we are, as neither individuals nor groups could figure them out in one lifetime. This is also true of some institutions, religious beliefs, rituals, habits, and medical practices. This isn’t so much an argument against eclecticism as it is an affirmation of Aldo Leopold’s maxim: “To keep all the cogs and wheels is the first precaution of intelligent tinkering.” You check your bathwater for babies. We might want to give what seems silly at least a second look, like a quick suspension of disbelief, just in case something cool is hiding in there.
    Hazel Rose Markus writes, “The culture cycle is the iterative, recursive process by which 1) people create the cultures to which they later adapt, and 2) cultures shape people so that they act in ways that perpetuate their cultures.” Within this milieu, we are mostly just fungible, expendable pieces that are, like ants, “fast, cheap, and out of control.” For all the problems of complexity, the sheer numbers in the hive mind are a significant asset: we get more accidents, more discoveries, more geniuses. Hive mind allows us to overestimate our individual minds, but the contributions of individuals are usually pretty insignificant. And yet cultural advances ultimately come down to an individual creating or stumbling onto something new, almost always as a result of many other things he’s picked up from his culture. Invention is still largely a recombination of elements pulled from a pooled cultural heritage. We still need those individuals, and they still deserve some credit, but culture doesn’t advance by invention without the ability of the culture to learn and adopt the invention.
    Extremely complex systems learn to self-regulate against entropy, as long as the system is open to information and energy inputs. That the system be open is key, though: that’s embedded in the second law of thermodynamics. That’s why it’s vital for one culture to remain able to learn from the successes and failures of another. As I write this, the politically correct loons are just beginning to whine about cultural appropriation. We owe much to intergroup competition and cultural appropriation. This is analogous to finding mates outside the extended family or tribe, an analog of heterosis or hybrid vigor. And to fail to bring home something new from our travels is an analog of inbreeding or cultural incest. A culture can only grow in this manner if it hasn’t demonized the other so completely that it can see no detail. As an added benefit of cultural appropriation, a more diverse repertoire or skill set may allow it to expand into new niches.
    When healthy, very complex systems don’t require a center that can fail to hold. Grassroots self-government, devolution of function, and invisible hands can manage the thing just fine with only a nominal queen at the center. When healthy. But health is easy to lose as systems age and their structures ossify, or as they close themselves off to new information and energy. Cultures are seen in failure fairly often, failing in forms like nations, dynasties, and cults. The average lifespan of such entities is only a couple of centuries. The system grows unable to learn and the individuals within it can no longer perform any but the most highly specialized functions. They only seem able to focus on one problem at a time, and self-correction devolves into a frenzied game of Whack-a-Mole. Lack of scale or proportion becomes a bigger issue. Those in power often seem able to give their power base red herrings enough to keep them occupied, but their power base itself is doomed to the extent that its ignorance is encouraged. Desperately, the system tries to centralize control. Situational ethics becomes the rule of law, which soon becomes the rule of lawyers. Disorder slips into cascade failure of the integrated systems. And nobody knows how to fix it. As Juvenal remarked, “Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions, everything, now restrains itself and anxiously hopes for just two things: bread and circuses.” Rome falls, but this too takes more than a day.
   
Ideology
    Several relevant issues concerning ideology have already been discussed earlier in this work (Chapter 1.7, Conditioning, Persuasion, and Ideology), and need not be revisited here. Onward.
    In order to function smoothly as a larger or more populated unit, humanity had to learn a form of self-domestication that amounts to normalization or standardization, at least in certain areas of life and to certain degrees. We needed to get more predictable and start meeting the expectations of strangers. We will often have to trust that others believe the way we ourselves believe. We also learned to trust the common beliefs even more than we trusted our own. The better we did this, the better we could infer useful things about the minds of others, what they wanted, what they were up to, and how we could turn that knowledge to our advantage. At the same time, we got better at detecting deceit and more innocent forms of selfishness in others. We became prosocial and altruistic in ways that apes never could, at least until lines were crossed between individuals and between groups. This all starts to fall apart if we don’t live in the same world and have a shared set of rules to play by.
    Culture has a somewhat stronger investment in conservatism than it does in innovation. The inertia of a culture parallels the apperceptive mass of an individual. A culture will be biased to favor where it’s already been and what it’s already learned. It will generally award praise and acceptance to the new only after a period of time, or after reaching some tipping point of mimicry, or adoption within the population. One kind of argument occupies a special place in cultural inertia, and even anchors or ossifies the law: the appeal to precedent, stare decisis, don’t change settled decisions. It’s that ratchet effect again. Despite our inclinations to xenophobia, cultural change often comes about most dramatically through diplomatic exchange and appropriation. Sometimes this first takes root in a counter-culture within the larger society. Yet cultural diversity is still more tolerated than welcome, regardless of its contributions to system resilience when the problems call for less familiar solutions.
    The ideologies that hold the culture together have the advantage of being comprehensive world views that can be passed along from dead ancestors to unborn descendants. At the same time, certain forms of them can be built upon and upgraded over succeeding generations. Buddha and Epicurus still talk to us, Diogenes and Zhuangzi still tease us, Homer still sings, and dear Hypatia still teaches. This all requires a degree of continuity that has to be conserved. At least minimal remnants of networks have to remain connected, and the documents must be preserved. You can’t stop the signal if you do that. But large-scale disruptions that seriously compromise resources, like the fall of the Roman empire, ice ages, supervolcanoes, epidemic diseases, and today, disruption of environmental support and climate, and the violence that ensues from these, can threaten to set a culture back for ages. Surprisingly, war has a lesser effect than these. It’s the events that take out the elders, the librarians, and the libraries that we most have to watch out for. And you’ve gotta make your backup copies.
    The success of an ideology really does nothing to validate its dogma. A successful ideology does, however, merit examination for the reasons for its success, the psychodynamics of its appeal and acceptance. If there is nothing of high value there, then at least we might learn something about how to counteract it or find a more appropriate substitute. We also shouldn’t be too quick to underestimate traditions, as these might have evolved to solve problems indirectly, if not rationally. Of course ideologies exploit this benefit of the doubt and assert that all accrued benefits result from not doubting any part of the schema and its prescriptions. The questions we will want to ask often have to come from outside the thing we are questioning. We may need to use a different lexicon, or stray from the pre-approved talking points. We may need to thump the idols hard enough to shatter any hollow ones.
    What appear to be cultural universals might be set aside from this cultural domain, placed in the native or the social instead. Cultural relatives remain. Within this, we will find a core or minimum of cultural literacy that needs to be acquired to be a functioning member of a culture or civilization. As with many wants and needs, there are wants and then there are needs. The wants frequently belong to the culture, and most truly belong to those who will try to instill them in us. The needs are our own, but to distinguish them from wants that have been installed as needs can be a challenge. To what extent must we conform to what we are told we are? How much do we want to flirt with this boundary? Might we get away with being “kind of a character”? What are the consequences of going all the way to beatnik? At what point do we encounter the desperate side of consensus and conventionalization, the fear that others have of seeing the stage sets questioned? At what point do we compromise our ability to make a decent living here, or even a marginal one? At what point are we murdered by the government or the church for taking liberties, or committing heresy or apostasy?
    When things do get set in motion away from a cultural norm, it’s often in the form of a runaway fad, like (but not limited to) the recent examples of behaviorism, cultural relativism, and postmodernism. It might take a while for the bandwagon to attract enough riders, or accrete a critical mass, or reach a tipping point. This can be particularly acute in academia, where reputations are at stake and the right side of history must be chosen. Neither is science immune to this, even though its very method requires it to embarrass itself now and then. There’s a paraphrase of Max Planck that we all really wish he’d said, “Science progresses one funeral at a time.” Paradigms come and go more or less reluctantly. The softer sciences, like psychology, and some other humanities, stand in still greater need of evolving some measure to tell them when they’re being ridiculous, maybe some kid kept on retainer to ask why the emperor is naked. But revolutions in cultural thinking at least accomplish a little towards keeping us humble. As I write this, physics has the cosmic puzzle nearly solved. We know what 4.9% of it is made of, never mind that the other 95.1% consists of unobserved imaginary placeholders that might only name the discrepancies between our observations and our models. But science pretends to know stuff, and those who want an alternative to religion can embrace this with fervent and fevered belief. We’ve spoken of the false dichotomies at the heart of so many scientific debates. These are like bandwagons racing for pink slips. But the work to be done is elsewhere.
    Maybe the biggest downside to information drawn from the culture is that it’s all secondhand. You accept it with some degree of trust. Hopefully your new datum at least gets entertained, even vetted, before it gets accepted into your brain and wired up with your other memories. It’s common knowledge that we can’t live long enough to experience everything firsthand, and we’re genetically adapted to copying each other’s successes, and even successful accidents. We learn to identify others in our society who have impressive skills, power, or prestige and learn preferentially from them. Naturally we will get misguided when a culture elevates the incompetent and the corrupt. The Spanish have a wonderful phrase: “aprender en cabeza ajena, to learn in another’s head.” Naturally some of us may want to know something of where that head has been, avoiding any cognitive cooties. We can learn by listening, watching, and mimicking, and can teach by speaking, demonstrating, and showing. While there is room for personal perspective and creativity in both directions, cultural learning is still indirect, and more interpersonal than personal. This frustration has most of us spending at least a decade making whatever mistakes we can make all on our own. This seems to be built into the maturation process of the prefrontal cortex in teens and young adults.
    Institutions are physical and legal manifestations of cultural conservatism. The liberal institution is really something of an oxymoron to the extent that its mission is to preserve itself and replicate. Once instituted, an idea tends to become defensive. The pyramid, the hierarchy with the broad base and the narrow top, the idea built to last, may be the most common model. There are other models, however, with better prospects for longevity, and they are also better equipped to embrace changes, and to change along with them. This sort of impermanence may be horrifying to the insecure. The model for this is the dendrite or tree, with diversity at its roots, and diversity in its branches, with the singular big idea as the conduit in the middle between them. With this model, we encounter such terms as grassroots, and seeding or going to seed. Here there’s also a decentralization or devolution of function, an undermining or dismantling of hierarchy such that authority is assumed by or delegated to its area of greatest relevance. To disallow diversity and subculture is death to a culture for reasons found in both systems theory and the 2nd Law: negative entropy demands open systems. And from another angle, biological entities are not made up of cells that are all the same. It’s the specialization and organization of the cells that make up the organs that organize the organism.
    The costs of innovation and development of prototypical cultural systems, such as intentional communities or ecovillages, will be raised even higher by cultural inertia. This is particularly acute when occupants of the status quo systems are insecure and afraid of anything that might replace them. This fear or insecurity can reach a dangerous pitch where rigidities like doctrines of infallibility are involved, as we see in cults and churches that are unable to consider criticisms of the leader or the doctrine. It’s only a matter of time for these, even though the time may be long. Galileo got his apology after a short four centuries. Originality is suspect. New ideas and methods must first be slaved to cultural inertia. Submitting to the status quo is a lot easier. It’s less work both cognitively and emotionally. We go along, and avoid making others dislike us and seek to hurt our feelings.
    Ideology is cultural software, as Jack Balkin puts it. Much that we have is a poor fit with our inherited wetware. A great deal of it is based on incorrect and wrongheaded ideas of who and what we are, like ghosts in a machine and such. With culture being as large as it is, experimenting with or trying out something better has to begin first within a subculture, or an incubator, and slowly improve its lot by setting good examples that others want to adopt, and protect itself by not appearing as a threat to a far more powerful status quo. Ideally, what we want is cultural software that works more effectively with our evolved neural substrates, optimizing our nature, playing to its strengths, and compensating for its faults. At their best, culture and education function remedially, augmenting our evolved skills, debugging and disabling faulty heuristics, or at least those which fail in this new level of civilization. But it may be hard to completely erase, unlearn, or overwrite ancient native functions. Given that the constructed parts of our minds get constructed in layers that begin at a very young age as foundational understandings, the importance of getting to children early with good information, basic critical and evaluative skills, and credible role models can’t be overstated. The way we raise our children as indoctrinated members of our culture may be our single biggest failing as a culture. The children are, in effect, sacrificed to the culture’s stagnation out of a fear of what worlds may come.
    Ideologies will often enshrine one or more sacred cultural values, which, we are taught, possess infinite or transcendental worth. It isn’t uncommon for these to be proclaimed to be worth more than life itself, referring of course to the individual believer’s life. For those who don’t know any better, the threat of loss of identity, belief, or belonging is terrifying enough to override even their basic survival instincts. Giving your life for your country and religious martyrdom are well-known examples. Propaganda and proselytizing are quite adept at instilling these, usually with promises of immortality, glory, paradise, or Heaven, and their opposites, of course, for failure. Advertising hasn’t been quite this successful, although numerous deaths might be attributable to brand loyalty, as to Budweiser beer and Marlboro cigarettes. What has to be learned is that even sacred values are optional, at least to the extent we can control or manage our fears that identity, belief, and belonging might change. For most, this is easier said than done, but it does help to re-include a value for our own lives back into the calculations.
    Systems of justice have largely failed to transform malefactors and make evil deeds go away. A need for behavioral correction often follows from an unmet need for ideological correction, but this isn’t really what it seems. It isn’t so much that an ideology has given wrongheaded behavioral advice as that it’s provided convenient rationalizations for abnormal, destructive, or hypocritical behavior. Punishment and rehabilitation remain confused, penalties aren’t tailored to teach lessons about the crime, and restitution as a form of learning is seldom even mentioned. Crimes committed following some primitive cost-benefit assessment will usually be seen to pay the average player. The cure for crime is therefore equally the responsibility of the culture that permits such rationalization and hypocrisy in its values. The inconsistency and unlikelihood of facing real consequences is just a big green light. Clearly the individual still has to be held accountable, but the ideologies also need work, and some cultures are better at doing this than others. Unfortunately, the ideologically bound don’t seem capable of learning from other cultures with lower crime and recidivism rates.

Persuasion
    As with ideology, several of the issues concerning insidious persuasion have already been discussed earlier in this work (Chapter 1.7, Conditioning, Persuasion, and Ideology), and need not be revisited here.
    It’s a good thing for us to be persuaded by high quality information, or by better behavior. The kind of persuasion we don’t want appeals to weaknesses in our character and courage, our insecurities and fears, our artificial needs and wants. Ideologies that lie outside the realms of politics and religion don’t seem to attract the same level of effort or funding for persuading the masses to adopt them. Philosophers, researchers, and scientists hard and soft would still like to be thanked and praised by their culture for answers to questions and solutions to problems, and this still necessitates persuading those who matter of their diligence, rigor, and precision. And truth. Persuasion here is a lot more muted. But failure and disapproval can be taken just as personally.
    Rhetorical persuasion occurs throughout culture, but on large scales it has its primary residence in journalism. Here, attempts are often marked by more emotion-laden words. Ben Franklin wrote, “Would you persuade, speak of interest, not of reason.” Jumping to conclusions is also frequently done here, particularly with headline phrases such as “scientists believe” or “science proves,” a phenomenon we might call premature enfactuation. We’re most concerned with insidious persuasion here, the sneaky stuff that only seems to be plausible. In a morally neutral summary of persuasive method, Philip Zimbardo enumerates six characteristics of effective communications: “Being aware of what makes messages ‘stick’ is one way to better resist their influence. Messages that survive and don’t die on the message vine are those that are: 1) Simple, brief as possible but still profound; 2) Unexpected, sufficiently surprising to catch the attention of the audience; 3) Concrete, detailed examples based on real life experiences; 4) Credible, delivered by someone the audience can trust; 5) Emotional, makes audience feel as well as think, and 6) Tells a Story, in a narrative that can be remembered and retold to others.”
    Culture gives us one set of incentives to conform, be normal, follow the rules, or suffer consequences to our status and reputation. We have another set of incentives that are more innate and personal, to figure out how to stand out amidst all of this herd activity, to feel true to ourselves. The etymology of the word exist is to stand out. We reach for clues to help us with both sets of incentives, and this often sets us up with big contradictions, a general state of anxiety that others, with a particular set of skills, find easy to exploit. We are genetically adapted to want to be what others want of us, at least until we can learn to question their motives. We want to be in the majority, at least until we learn what an idiot the majority is. And perhaps the majority never learns these two things. We can infer this because advertising, propaganda, and proselytization really work, and that’s precisely why they’re so ubiquitous.
    There are plenty of highly adaptable tools now to use in media persuasion. Benjamin Disraeli cited “lies, damned lies, and statistics.” Evan Esar called statistics “The only science that enables different experts using the same figures to draw different conclusions.” Data presentation often has a greater effect than the data being presented. Frames and scales are manipulated to highlight and hide information. Humans seem especially deluded by short interval scales: the papers proclaim “crime rate soars” one day and “crime rates plummet” the next, while the year will show no net change. Biased or loaded polling questions are a great source of slanted statistics. So are shenanigans done in sampling. The presence of statements like “scientist believes x” or “science proves y” shows journalists’ ignorance of basic scientific principles. They might betray their lack of proportion or scale in statements like “a supervolcanic Yellowstone eruption could kill thousands,” or “the impact that created the moon threw tons of material into space.” Careless or thoughtless attribution is rampant. False quotations, as perhaps most commonly seen of poor Buddha and Laozi, show the kind of disrespect we have for higher quality information. Apparently it’s more important to load the good names of these gentlemen up with memetic hitchhikers, like narcissistic self-help drivel and vapid new age platitudes.
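    To see how easily short-interval reporting manufactures drama out of noise, here is a minimal simulation sketch in Python. Every figure in it is invented for the illustration; none of it comes from real crime data:

```python
# A minimal sketch: daily counts that fluctuate randomly around a flat
# mean generate frequent "soars"/"plummets" headlines, while the
# long-term trend is nil. All numbers are hypothetical.
import random

random.seed(42)
DAILY_MEAN = 100   # assumed average incidents per day
DAILY_SD = 15      # assumed day-to-day noise

days = [random.gauss(DAILY_MEAN, DAILY_SD) for _ in range(365)]

headlines = 0
for yesterday, today in zip(days, days[1:]):
    change = (today - yesterday) / yesterday * 100
    if abs(change) > 20:   # a 20% day-over-day swing makes the front page
        headlines += 1

first, second = days[:180], days[-180:]
half_year_change = (sum(second) - sum(first)) / sum(first) * 100
print(f"Front-page 'soars/plummets' days: {headlines}")
print(f"Half-year over half-year change: {half_year_change:+.1f}%")  # near zero
```

    Run as written, a large share of the day-over-day swings clears the headline threshold, while the two half-years differ by only a percent or so.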
    Readers are often encouraged to look in one dimension only, while other dimensions tell a different story entirely. The talking points are usually set, at least implicitly, prior to the debate. UN population projections chart only human reproductive choices, with no regard for the potential cascade failures of environmental support systems. An insurance company expresses alarm at a 5% growth rate in a particular hazard, somehow failing to mention a 10% growth in the base population over the same period. Polls and surveys and their accompanying maps and charts often depend on metrics not prominently displayed. On a global map of national corruption, Nigeria looks pretty bad, and the United States pretty good. But what isn’t included is the monetary value of the corruption. Nigeria is penny ante compared to the buying and selling of US Congressmen, and the trillions in benefits that accrue to their corporate owners from mere millions in lobbyist bribes. Another trick lies in the naming of things. The “national debt” doesn’t include anything close to what the country is committed to paying back, which is several times this amount. It’s just found filed under “unfunded liabilities.” The Supreme Court admits that the government can’t prohibit the free “exercise” of religion, so the law prohibits “practices” instead, despite the words meaning the same thing. Private property is seized without due process of law in civil asset forfeiture, because somewhere in a tangle of stare decisis, the words “due process” got corrupted.
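    The insurance example above reduces to one line of arithmetic. A minimal sketch, using the hypothetical figures from the text:

```python
# Absolute growth in a hazard can mask a falling per-capita rate.
# The 5% and 10% figures are the text's hypotheticals, not real data.
hazard_growth = 1.05       # reported incidents grew 5%
population_growth = 1.10   # base population grew 10%

per_capita_change = hazard_growth / population_growth - 1
print(f"Per-capita change: {per_capita_change:+.1%}")  # about -4.5%
```

    The alarming 5% growth is, per capita, a decline of about four and a half percent.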
    With regard to the mechanics of persuasion, Henrich offers, “At the most basic level, cultural learning shapes the reward circuitry in our brains so that we come to like and want different things…. We evolved genetically to have (somewhat) programmable preferences, and modifying our preferences via cultural learning is part of how we adapt to different environments.” In reference to coevolution, “cultural learning reaches directly into our brains and changes the neurological values we place on things and people, and in doing so, it also sets the standards by which we judge ourselves.” We can add “and others.” Since this, in turn, affects how we succeed or fail within the culture, it also affects the outcomes of our mating behavior, and thus, sexual selection. How well we do in persuading others to like and accept us depends on how well we can adapt our values to be consistent with those around us. We recognize these values in others, especially competence, charm, and the means to prestige, and copy from the most useful models. A note is still in order with regard to celebrity endorsements and causes. To be fair, some groups of celebrities, especially actors and some of your more conscientious philanthropists, have assumed a sense of duty or noblesse oblige to use celebrity and success for the social and cultural good. Of causes célèbres and celebrities’ causes, the opportunity afforded by success can open doors for acts of good conscience that are unavailable to others. Why these causes are so often socially liberal is forever a puzzle to conservatives, but the big clue there is in the word conscience.
    Others will simply submit, adopting the simpler values of the lesser group that merely follows its role models. Every aspect of this admiration of the prestigious is played to economic advantage in the field of advertising. An increase in tagged value, however arbitrary and artificial, is still an increase in demand and sales. Testimonials from successful and prestigious people can sell just about anything, with no regard to the relationship of the attestant to the product. Every anticognitive domain we are exploring here can and will be played, and the only real protection here is to understand the tricks, the heuristics and emotions being manipulated, and the identities of the cognitive biases, coping strategies, defense mechanisms, and logical fallacies being recruited. It’s a lot to ask of a normal person living the normal life that the culture deems fit, but the price of liberty is still vigilance.
    In 2016, Oxford Dictionaries chose “post-truth” as its word of the year, defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” This follows the adoption of Stephen Colbert’s “Truthiness”: the quality of seeming or being felt to be true, even if not necessarily true. Euphemistic relabeling, like bafflegab and sanitized language, hyperbole and lesser exaggerations, puffery that states no facts, and gaslighting are all used because they’re so effective in moving the hive mind and the minds of the masses around. Finally, one of the most insidious tricks is the use of the agentless passive voice in describing where the culture is headed. We see the latest projections by DARPA of what war and its soldiers will look like in 20 years, and it’s just a foregone conclusion that the people will have no say in this whatsoever. What can I do as a mere individual? Cry foul? Speak truth to power?
   
Signal-to-Noise Ratios

“Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world.” W. B. Yeats
 
  The game of Chinese Whispers, or Telephone in other places, begins with one person whispering a secret to the next, in a circle. When the circuit is complete, the last player announces the message out loud and hilarity ensues. Entropy really is the law. Rumor or gossip has an evolved function in the sharing of social and cultural norms and the noting of norm violations and violators. These aren’t factual constraints, only approximations, but like our evolved heuristics, they serve a purpose on average. We are born to gossip, to circulate noise, and the carefully articulated precision of information has not been as well conserved by evolution as our sense of the average general effect. Even ghafla, mindless distraction from the essential or sacred, is deliberately sought as paid entertainment. We aren’t born with an aversion to low signal-to-noise ratios. It’s an acquired taste.
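    The telephone game is noisy transmission, and a toy simulation makes the compounding visible. The message, per-character error rate, and number of hops below are arbitrary choices for the illustration:

```python
# A toy sketch of Chinese Whispers as noisy transmission: each hop
# randomly corrupts a small fraction of characters, and the errors
# compound multiplicatively. Parameters are arbitrary.
import random
import string

random.seed(7)

def whisper(message: str, error_rate: float = 0.05) -> str:
    # Each non-space character has a small chance of being misheard.
    return "".join(
        random.choice(string.ascii_lowercase)
        if c != " " and random.random() < error_rate else c
        for c in message
    )

msg = "meet at the old mill at dawn"
for hop in range(12):
    msg = whisper(msg)
print(msg)  # typically garbled well past recognition
```

    At five percent per hop, a character’s odds of surviving twelve hops intact are about 0.95 to the twelfth power, or roughly fifty-four percent, so nearly half the message arrives corrupted.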
    As cultural systems get more complex, they also grow more dependent on system integrity. The system components, even those “large and in charge,” become increasingly unable to micromanage the details, even through well-specified chains of command. As we’ve seen, system integrity is largely a function of information and energy inputs. Chaos is not shut out by closing the system off, keeping it the same, or “fixing” it. The individuals trying to function within systems where they’re denied both devolution of function and bottom up or grassroots organization necessarily lack the ultracomplex minds needed to solve the complex problems. They’re forced instead into tackling one dimension of one problem at a time. The environmentalist trying to turn the culture away from ecocide is stuck in a game of Whack-a-Mole against one corporation or agency after another. Meanwhile, the top-down decisions are either rooted in corruption or consist of decision strategies formulated for worst-case scenarios and applied across the board, regardless of circumstance, with huge inefficiencies in resource allocation.
    In some ways, bringing the modern flood of information to our layered, triune brain is analogous to bringing civilization to a backward rural area still running landline phones on party lines. The information to be communicated is several orders of magnitude greater than the local system can handle, and the information isn’t organized according to its value. You can’t report your heart attack because Mrs. McMurtry is gossiping about Virginia being late for church again. The information is weighted only by its bandwidth, so the exaggerated counts for more than the simply put, and value has no say in the matter. Factionalism, understood as the ability of people to be highly selective of their sources of data, has served those hungry for high quality information far less well than it has served the equivalent of gossip. Alvin Toffler called this overwhelm Future Shock. Our ability to apply standards to the information we’re getting, to sort it according to value, hasn’t begun to keep pace with the overload. A quick look into social media demonstrates beyond any doubt that error propagates at many times the rate of quality intelligence. As the saying commonly attributed to Twain puts it, “A lie can travel half way around the world while the truth is putting on its shoes.” Isaac Asimov also had some things to say about the democratization of information in American culture: “There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’” Of course, it’s good to have second and third opinions. It just isn’t very bright to assume that all of them are correct.
    We’re currently far too busy just trying to keep up with the changes, which are largely superficial and irrelevant. Interconnectedness is multiplying even faster than interconnected nodes. Technological systems are evolving on their own, some with no one person even able to sketch out their overall structure. Legislators don’t have time to read the laws they are passing: they just take the word of their corporate owners and lobbyists, who hand them bills to sign. Journalism of late has been supplying less original investigation and content, doing a lot more recirculating of current opinion, and only a little further opining thereupon. This makes current affairs more of a closed system, and as we know from the 2nd law, this is where entropy does its worst damage. New energy and new information are needed to keep the system from stagnation and decay. There is, simply put, way too much to learn. The day of the know-it-all and the unconstrained polymath has passed, though some of us remain stubborn in our failure. There is still plenty of room for interdisciplinarity, however, and some hope for what E. O. Wilson calls Consilience. Even here, though, standards and judgments have to be applied to winnow the seeds, the germane, from the chaff. And even here, pausing to vet information on its way into our minds slows us down and limits our field of possibilities even further. Information enrichment, as discussed in Part One, has never been more important.
    Here is the rub: the sheer quantity of information can’t be processed by any one individual. Any individual who hasn’t developed a set of standards and values by which to sort, vet, and select incoming information will do one of two things. The first is to consume however much he can, at random, of the information that’s passing by, until full of average information. The second is to identify with lesser pockets of information, such as ideologies, and remain within those pockets, in a defensive posture against the intrusion of anything threatening cognitive dissonance, until full of partial information. This becomes extreme polarization, with little beyond polemical debate, because the information in those pockets just feeds on itself. Both will incur heavy costs from dismissing the effort of judgment and confusing quantity with quality. Again, we have the opposite of an open system and its negative entropy. The pockets must eventually self-destruct.
    The system that can’t overcome these difficulties is headed for complex and unpredictable cascade failure and heat death (like all of us, eventually). A deep recodification, restructuring, or reconstitution of this culture will only be possible following its collapse, when we can pick and choose what’s worth saving, what should have been worth keeping over all that was actually kept. This is what happened in the Renaissance, with pieces thankfully kept alive by the Muslims, and imported by them from India and China. Meanwhile, we do what we can with what we have. We might seed a new culture in a new cult, or a reinvented intention in an intentional community somewhere, or an environmental ethic in some remote ecovillage. And if we don’t succeed this time around, we may yet leave something of value for the archaeologists who are digging through our ruins.
    Some methods for vetting cultural information are presented in Chapter 3.1, Media Savvy and the Smell Test. The specific anticognitives most relevant to the Cultural Domain are listed and discussed in Chapter 3.7, Logical Fallacies.



2.9 - Linguistic Domain

Idols of the Market, Protolanguage, Nativism, Cognitive Linguistics,
Language Development, Linguistic Relativity, Semantics and Syntax

Idols of the Market
    While language itself has an objective existence, it also consists of another layer of qualities, or qualia, that we have learned to add to our memories, a layer parallel to associated sensory quality, perception, affect, idea, context, time of life, etc. Words are another class of memories, added culturally, with associations to other classes. And like these other layers of associations, the words act as handles for retrieving, organizing, and recombining memories. “Languages are a subset of culture that are composed of communicative tools (words) with rules (grammar) for using those tools” (Henrich, 2015). Human language isn’t the stuff of human thought, but when embedded with the rest of thought, it provides enormous new potential for the accessibility, order, and structure of all forms of thought. A managed vocalization or gesture, and now, a printed word or braille touch, becomes associated with an experience or memory along with its other qualities. Those associated with schemas go to semantic memory, those with scripts to procedural. Working memory has simultaneous access to both.
    This is the domain of both Gardner’s verbal-linguistic intelligence and his logical-mathematical intelligence. The latter processes are no less linguistic functions. They deal with logic, abstractions, reasoning, causality, numbers, and critical thinking according to analogs of grammatical rules. In a peculiar way, many of the anticognitives for these logical-mathematical functions are also affective, such as self-consciousness, timidity, and fear, as many have witnessed first-hand in school with such states as math and story-problem anxiety. Anticognitives in the linguistic domain are what Francis Bacon calls Idols of the Marketplace (idola fori). Bacon considered these “the greatest nuisances of the lot”: “There are also Idols formed by the intercourse and association of men with each other, … on account of the commerce and consort of men there. For it is by discourse that men associate, and words are imposed according to the apprehension of the vulgar. And therefore the ill and unfit choice of words wonderfully obstructs the understanding. Nor do the definitions or explanations wherewith in some things learned men are wont to guard and defend themselves, by any means set the matter right. But words plainly force and overrule the understanding, and throw all into confusion, and lead men away into numberless empty controversies and idle fancies.” He outlines two subsets of this kind of idol and provides examples. First, there are those words which spring from fallacious theories, such as the element of fire or the concept of a first mover. These are easy to dismantle because their inadequacy can be traced back to their derivation in a faulty theory. Second, there are words that are the result of imprecise abstraction. Earth, for example, is a vague term that may include many different kinds of substances, the commonality of which is questionable. These terms are often used elliptically, or from a lack of information or definition of the term.
    Language does a lot for us. It’s our most important medium for cultural transmission and learning. With it we structure our personal explanations for how the world is the way it is, and our rationalizations for why we are the way we are. We model the world in the abstract. We frame our conceptual tools in ways that obey linguistic rules of recombination with other concepts. A lot of our thoughts are language dependent or language created. Many of the tricks we do with words to deceive ourselves and others have already been discussed, and are presented in detail in Chapter 3.7, Logical Fallacies. And narrative, which has also been much discussed, also has a place in the linguistic domain. Here we’re concerned with the development of language and how it helps structure the world of our perceptions, recognitions, and the memories that include feelings and emotions. This in turn offers insight into the ways that language can delude us. The verbal arts (grammar, logic, and rhetoric) are collectively known as the trivium. All three are subject to both error and manipulation.

Protolanguage
    Darwin wrote, “I cannot doubt that language owes its origin to the imitation and modification, aided by signs and gestures, of various natural sounds, the voices of other animals, and man’s own instinctive cries” (Descent of Man). If Henrich (2015) is correct, we began to communicate with each other in a significantly different way once we had become H. erectus, perhaps as early as 2 million years ago, and this different way of communicating began to allow the development of a transmissible culture that’s qualitatively different from tool use in chimps. “Language evolved via cultural transmission over generations to improve the efficiency and quality of communication. These communication systems had to adapt (culturally) to our brains, exploiting features of our ape cognition, and at the same time, created new selection pressures on our genes to make us better communicators. These genetic evolutionary pressures were powerful, shaping both our anatomy and psychology.” Benjamin Whorf refers to the “long evolution of thousands of very different systems of discerning, selecting, organizing, and operating with relationships.” Culture, in turn, began to slowly change us biologically and genetically, as we then adapted to such communicable technologies as tool use, controlled fire, more advanced weaponry, clothing, and the construction of simple watercraft that opened new ecological niches. These may also have been the first hominins to live in tribal or hunter-gatherer societies, to coordinate hunts, and to care for the elderly and infirm. Forerunners of some of these behaviors are of course seen in other primate troops, but not as communicable techne. All of these either suggest a need for communication, and therefore selective pressures to advance these skills, or else just a fortuitous mutation that increased brain size and allowed these behaviors room to develop. And maybe it’s a coevolution of these two.
    Even as early H. erectus emerged, with brains two-thirds the size of our own, we’d already inherited a genetically supported array of communication skills: facial recognition, facial expression and micro-expressions, posture and gesture, postural and gestural mimicry and parody, mirroring, procedural demonstration, signal cries and calls, vocal mimicry, affection and sexual signaling, onomatopoeia, interjections and exclamations, dominance and status signaling, and proxemic distancing. Some of these earlier calls and cries may have been innate and universal, and others would quickly become specified to perceptual triggers. We can assume that neither H. erectus, nor H. heidelbergensis, nor H. neanderthalensis had evolved a vocal apparatus like ours, nor one with anything close to our own range of expression. Any vocalizations they had wouldn’t sound much like our own, or be as finely articulated. We might even begin our conjectures on how they communicated with something closer to semiotics than linguistics, and perhaps assume a vocabulary supplemented heavily with gestures, along with a very crude grammar based primarily on word sequence. Today, our phonemic array or repertoire is universal, as it’s driven by the anatomy of our vocal apparatus, although no language comes near to exhausting the possibilities of human vocalization. We can pronounce more than 30 million distinct monosyllables [calculated by me, while doing some work in phonetics], even without Chinese-style tonal inflection. Infants may experiment with ranges beyond what they hear others using, but this tends to narrow to the phonemes in use. It’s a big help that our phonemes are categorically distinct (there aren’t really any clear steps midway between b and p) because this enables unambiguous articulation.
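    A back-of-envelope version of that count can be sketched by treating a monosyllable as onset plus vowel plus coda and multiplying. The inventory sizes below are loose assumptions made for this illustration, not the author’s original calculation; the point is only that the combinations explode:

```python
# Hedged combinatorics for the "30 million monosyllables" claim.
# Inventory sizes are assumptions, counting fine articulatory
# distinctions well beyond any single language's phoneme set.
consonant_sounds = 600   # assumed distinct consonant articulations
vowel_sounds = 90        # assumed distinct vowel qualities

onsets = consonant_sounds + 1   # +1 for a vowel-initial syllable
codas = consonant_sounds + 1    # +1 for an open syllable

count = onsets * vowel_sounds * codas
print(f"{count:,} distinct monosyllables")  # about 32.5 million
```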
    Protolanguage suggests certain sets of evolved cognitive features that are exploited by language learned in culture. The lower strata would be elaborate sets of vocalizations and nonverbal forms of communication, genetically related to those found in primates. Human babies are primed to respond to human vocalization, if not yet to human speech. Our range of phonemes is universal, constrained by anatomy. No infant babbling exhausts the range of the International Phonetic Alphabet (IPA), and babbling is quickly dampened to phonemes heard in the environment. Protosemantics would want to look at semantic and procedural memory for cognitive processes that support the development of vocabulary, and especially heuristics involving classification. Metaphors will connect an entry in a source cognitive domain with an entry in another, and provide an opportunity to connect this to a lexeme in the linguistic domain. Learned lexemes, then, might be regarded as associated hyperlinks that are ultimately given semantic realities comparable to sensory and conceptual memories and metaphors. Protosyntactics would look at cognitive processes supporting grammatical relationships between lexemes, such as scripts, or causal inference for subject-verb-object relationships, or procedural and efferent memories to recall what verbs feel like when acted out. Hamrick (2018) concludes, from a collation of studies, that modern-language vocabulary development exploits our pre-linguistic processes in declarative memory, while grammar exploits our prelinguistic processes in procedural memory. He asserts that language is learned in brain circuits that predate humans and have long had other uses. However, this is not to deny that these circuits have undergone at least some adaptive changes since we began to use language as we know it.
    The Language of Thought hypothesis, developed by Jerry Fodor, posits an innate, pre-linguistic “mentalese” that arranges and operates on ideas and concepts in the mind, analogously to the way grammar operates in our language. Thought has its own syntax, derived from cognitive architecture, and simple concepts will combine in rule-driven ways. But rather than constituting a grammar module, the grammars of languages may have evolved in ways that adapt themselves to this prior cognitive architecture. That language has been shaped to fit the human brain, rather than vice versa, is also the view held by Morten H. Christiansen. Language is an adaptive entity in evolution, parallel to our own. Of course we also have evolved mental capacities that enable language to do this, beginning with our nonverbal communication skills and anatomical footholds that are suited to neural reuse. And at least some of the parts of the brain to which language has adapted have likely had time to adapt to making better use of language. Christiansen states, “It is consistent with our arguments that the emergence of language influenced biological evolution in a more indirect way. The possession of language might have fundamentally changed the patterns of collective problem solving and other social behavior in early humans, with a consequent shift in the selectional pressures on humans engaged in these new patterns of behavior. But universal, arbitrary constraints on the structure of language cannot emerge from biological adaptation to a varied pattern of linguistic environments.” In other words, language is too dynamic and protean. It just won’t hold still long enough, or be universally consistent enough, to permit a dedicated language module to evolve.
    Pre- or nonlinguistic experiences and memories are accessed by language much as old-style libraries were accessed by card catalogues. Language is in part an indexing or filing system for memory (and for itself). Hyperlinks will provide a more modern metaphor. But it’s the memory, and not the lexeme or word, that carries the qualia, the experience, dimension, perspective, affect, richness, texture, connotation, and implication. The content is still found only in the way the neurons are connected. Protolanguage can be inferred from developmental norms and stages in both human children and in language experiments with apes. Roger Brown offered some of the first research in this field, observing some of the ‘thematic relations’ or ‘theta-roles’ of linguistic theory, such as agent-action, action-object, agent-object, action-locative, entity-locative, possessor–possession, entity-attribute, and demonstrative-entity. Early toddler sentences are limited to a small set of relationships like nomination, recurrence, disappearance, negation-denial, negation-rejection, negation-non-existence, attribution, possession, and agency. We can note three basic sentence structures that emerge early: request, assert, and negate. The big arguments here are over whether these functions are modules that arise (or arose) concurrently with language, or language is merely a software exploit of these cognitive processes as native abilities.
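    The card-catalogue and hyperlink metaphors above can be made concrete with a toy data structure. Everything below, names and fields alike, is invented for the illustration; the word holds no qualia, only the links:

```python
# A toy model of a lexeme as an index into nonlinguistic memory.
# The richness lives in the memory records; the word merely points.
memory_store = {
    101: {"modality": "smell", "affect": "comfort", "scene": "grandmother's kitchen"},
    102: {"modality": "taste", "affect": "pleasure", "scene": "warm crust, fresh from the oven"},
    103: {"modality": "vision", "affect": "neutral", "scene": "bakery storefront"},
}

lexicon = {"bread": [101, 102, 103]}   # the lexeme is just a set of links

def recall(word: str) -> list:
    # Retrieval follows the links; the experience is in the records.
    return [memory_store[key] for key in lexicon.get(word, [])]

for record in recall("bread"):
    print(record)
```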
    In the language of machine learning, transfer learning is the ability to generalize a task away from its initial context and apply the lesson to a new problem. It implies a process of abstraction. Some animals, including birds, have some of this capacity. Generally, language as humans know it is characterized by such displaced reference. Knowledge and the experience of knowledge can exist independently of the here and now. Protolanguage seems generally unable to communicate about situations lying outside of immediate times and places, or beyond perceptions of the present. Nonhuman primates with basic language skills have trouble articulating states unrelated to current affective states, leading the eager human exceptionalists to assert that they are incapable of experiencing anything but the present moment. But some functional reference does occur between conspecifics, such as communication of food sources and migratory destinations (as in elephants) and expressions of grief. It’s quite a leap to assume that animals can have no non-linguistic mental experience of being elsewhere. This abstracted displacement quality also means that the truth function of modern human language can’t be immediately corroborated. And yet, while protolanguage is less apt to tell lies, deceptive communication in the animal kingdom is by no means unheard of. The potential for error and deception means that we couldn’t develop language as we know it without both a theory of mind and a social organization that agreed on the meaning of signs and signals. But we also had to depend on each other enough to trust that what was being communicated was true. Some suggest that having ritual and liturgical support to strengthen meanings didn’t hurt either. Of course, the confirmation of trust or confidence is a big part of the enabling of deception. Deception, exaggeration, and lying (until the liar is caught) are only a bit more cognitively costly than speaking true. That cost lies in needing to keep both your true and false facts in mind, and partitioned.
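    As a minimal machine-learning sketch of that idea, the following reuses a representation learned on one task for a different one. The dataset, the split into “old” and “new” tasks, and the models are all arbitrary choices for the illustration, not a claim about how minds do it:

```python
# Transfer learning in miniature: features learned on digits 0-4 are
# reused, unchanged, to classify the never-seen digits 5-9.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# "Old task": learn a compact representation from digits 0-4 only.
source = y < 5
features = PCA(n_components=16).fit(X[source])

# "New task": classify digits 5-9 through the transferred representation.
X_new, y_new = X[~source], y[~source]
X_tr, X_te, y_tr, y_te = train_test_split(
    features.transform(X_new), y_new, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Accuracy on the unseen task: {clf.score(X_te, y_te):.2f}")
```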

Nativism
    Few doubt that there are specific evolved brain mechanisms that support language acquisition and use, and some are located in well-known parts of the brain. It’s at least certain that we are born able to acquire language with little or no explicit intent or coaching. We do seem biologically programmed to attend to, analyze, and use speech, even though feral children won’t be very articulate. The big questions arise over how developed these mechanisms are and how dedicated to language. One group of theories about the evolution of language cites the development of one or more language modules in the brain that are more or less dedicated to linguistic functions. This is called linguistic nativism. The best known proponents of this are Noam Chomsky and, much later, Steven Pinker. Chomsky’s theory of universal grammar suggests that languages share a deep underlying structure, with only cultural variants, that this structure is our genetic inheritance, and that it unfolds naturally as we grow. The ease and speed with which we learn language at such a young age, aided by biological predispositions, is taken as evidence for this. Children develop language even in deeply impoverished environments. Toddlers speak with great confidence, and later, so do preachers and other idiots. People also seem to have a sense of which grammatical expressions are not permitted. “The capacity is intrinsic, even if it’s not always exploited.” According to this, syntax is largely inborn, while some grammatical features, like lexemes and their morphology, are parochial and culturally acquired. Pinker proposes that this innate language faculty is an evolved adaptation specifically for linguistic communication. For him, language is distinguishable from a more general and primitive reasoning ability. He disputes that language may have dramatic effects on a person’s range of thought.
    The indisputable part of this is that we at least have an inherited language facility, even if there is no language faculty. The capacity for language is very much in our nature. These theories are still evolving and adapting to new data, and gradually assuming less assertive positions. In the 90s, Chomsky updated his theory with his Minimalist Program, which seeks to investigate linguistics from basic cognitive processes upwards towards the larger theory, where language is seen incorporated via an optimal and parsimonious neural organization. An expression of nativist theory might be seen in this claim: “As all human languages share certain essential features, like their predicate-argument structure and their combinatorial properties, which are ideal for expressing arbitrary contents, natural pedagogy may just be a specific domain where this extraordinary faculty, supposedly evolved to fulfill some other function, has found one of its uses. Indeed, it has been suggested that the primary function of linguistic abilities is to enable combinatorial composition of human thought” (Csibra). But pedagogy precedes language, or at least can be fully independent of it, particularly with gestures, demonstration, and modeling. “Data available on early hominin technological culture are more compatible with the assumption that natural pedagogy was an independently selected adaptive cognitive system than considering it as a by-product of some other human-specific adaptation, such as language” (ib.).
    Many think it too much to ask of evolution that we be furnished with an inherited universal grammar or developed language instinct complete with grammatical rules, whether this was over the last two million years or merely the last sixty thousand. Genetic evolution is quick enough to have already adapted us to our repertoire of nonverbal forms of communication, and is almost certainly beginning to adapt to the verbal, with far more progress already on the spoken word than on the written. It’s asking a lot, however, to suppose it’s more fully adapted to the verbal, even though we can expect genetic evolution to move at a brisker pace due to the adaptive advantage that linguistic sophistication confers, and this would also tend to be a function of group selection, with high-competence individuals taking sexual advantage (those fast-talking, silver-tongued devils).
    We do seem to come predisposed to perceiving certain types of dynamic relationships, both in the world and in our relationships with it, and in ways that are suspiciously compatible with our grammatical representations and reconstructions. It’s also possible to talk coherently about these relationships with people from very different cultures, and even newly contacted tribes. It should perhaps be noted that the structure of our grammar may reflect the way we perceive the world, but this does not mean that the way we perceive the world is adapted to perceiving reality. Natural language must appeal to native heuristics and naive realism to be readily assimilated, and this has some built-in deficiencies. Our perceptions are bound to our sensorium and our umwelt, and so to our sensory and cognitive metaphors. There are some fundamental properties of the universe that our senses can’t make any sense of, throwing us back onto thought experiments and analogical diagrams. The particle-wave paradox, the peculiar relationship between mass and gravity, or electricity and magnetism are common examples. Perceiving space-time, instead of imperfectly modeling it, would require a sense that perceived space in terms of time, and time in terms of space. This would be more feasible in an echolocating species with a large brain, but wherever would we find one of those? Cetacean researchers, with their small human brains, don’t seem to have thought yet to try communicating with the handful of blind human beings who have taught themselves how to echolocate and represent acoustics spatially. Smaller brains sometimes need to think twice about what they’re doing. We do have a useful capacity to create artificial languages with artificial grammars to work around these native sensory limitations of ours. The languages of chemistry, microbiology, and the pure math of theoretical physics are noteworthy examples. Of course, they are also noticeably lacking that universal grammar and any inherited modules.
    It’s been known for some time that language has at least two anatomical homes in the typical human brain. Broca’s area, in the ventrolateral PFC, on the dominant (usually left) side (Brodmann 44 & 45), is linked to speech production, language comprehension, and phonological parsing. It is not, however, fully dedicated to linguistic functions. It also has prior functions of postural, gestural, and facial recognition, and species-specific vocalizations, and so is particularly related to protolinguistic communication. The overall PFC also mediates other important nonlinguistic and prelinguistic behaviors like planning, goal directedness, social adjustment, and working memory. This seems to have mirror neuron functions as well, so that perception is also a kind of rehearsal. This probably explains why ASL and our other gestural languages are so easy to learn. Gestural language and vocal language depend on some of the same neural systems, and the regions on the cortex that are responsible for mouth and hand movements border each other. Wernicke’s area (Brodmann 22) is located in the superior temporal lobe in the dominant hemisphere (also usually left) where parietal, occipital, and temporal lobes come together. It’s involved in phonologic retrieval, recognition of auditory word forms, and decoding spoken (auditory) and written (visual) language, but does not in itself provide comprehension. It also has a role in inner speech and a larger role in our speech production than we previously thought. The corresponding area in the other hemisphere seems to concern itself with subordinate, alternative, and ambiguous meanings of our vocabulary, and their intonation, emphasis, and affect. The two areas are connected by a tract of fibers known as the arcuate fasciculus, which is highly and distinctively articulated in humans. These areas, or their homologues, do exist in other primates, which likely implies prelinguistic substrates and protolinguistic functions, such as specialization for species-specific gestures and calls. In primate brains, both handle sound recognition and control of the muscles that, in humans, operate the vocal apparatus.
    The neuro-anatomical connections between gesture and language are not a surprising discovery. In 1644, John Bulwer described gestures of the hands as “the only speech which is natural to man,” one that “men in all regions of the habitable world do at first sight most easily understand.” Gesture, he said, “had the happiness to escape the confusion at Babel.” Humans everywhere use gestures to signal agreement or not, point to things, give directions, show shapes and characteristics, demonstrate relationships in space and time, and imitate actions like picking, pushing, or pulling. It’s a little more challenging to gesticulate feelings, metaphors, and abstracts, but this still gets done. In the 1980s, Joern Gerdts created a modern version of the old Plains Indian sign language, which had enabled tribes from all over to communicate, though they spoke different languages. This version, called INCOS, took care to avoid gestures with insulting or contradictory meanings in different cultures around the globe, so that it might be used for international travel. It was so intuitive in construction that it could be learned in less than two hours by watching two films. It was a brilliant idea that, sadly, never caught fire. And it was also a clear demonstration of the natural affinity that humans have for gestural language.
    The two brain areas, which for decades offered us hope of finding a home address for a language faculty or instinct, are being continually reassessed as to their functions, and linked to increasingly numerous functions in other parts of the brain. We are only now beginning to track the actual operations of language throughout the brain. See Wang (2017), who made a study of “the mapping between 42 neurally plausible semantic features.” Studies aren’t simplifying the picture, but patterns are beginning to show indications of types of linguistic content, such as “the motor system representation of how one interacts with a concrete object, or the perceptual system’s representation of the perceptual properties of the object.” A lot of the brain gets involved, including regions for audio-visual, perceptual, motor, affective, social, moral, spatio-temporal, and episodic or autobiographical memory. The left inferior prefrontal cortex, the basal temporal area, the cingulate gyrus, the anterior superior temporal gyrus - “areas all over the brain are recruited for language processing; some are involved in lexical retrieval, some in grammatical processing, some in the production of speech, some in attention and memory” (Nina F. Dronkers 1999).
    It will be simpler for purposes here to concentrate on the functions of the brain in relation to language, rather than structures and processes. Working memory brings together two primary functions of our linguistic database. Vocabulary draws on our declarative or explicit memory, both the semantic associations of words to facts and the episodic associations of names to experiences. Grammar draws on our procedural or implicit memory, what we’ve learned about who or what does what to whom or what, and the where, when, why, and how of that. This, and related parts and pathways of the brain concerned with stereotypical behaviors, procedures, actions, orientations, and directions, have undergone significant development in the last two million years, perhaps accounting for our pulling away from other apes, who have an ability to learn vocabulary but don’t do well at all with grammar.
    The extent of any language module(s) we may have remains unresolved. Clearly, humans are born with impressive abilities to communicate that are the descendants of those found in primates and earlier forms. These include calls, sounds, or cries, some innate and universal, and others that quickly become specified to perceptual triggers. Hardwiring for vocal communication exists in several species, including cetaceans, elephants, birds, and other distant relatives. We also have a repertoire of expressions and gestures. Other evolved heuristics contribute some innate perceptual abilities that recognize relations between subjects and predicates, subjects and objects, active and passive relationships, and prepositional relationships. We seem to be built to associate experiences with others that modify them, even though these are different types or categories of experience. It may be that language, including grammar, began as little more than a culturally evolved software package that gradually learned how to exploit these innate abilities and then tied them all together into a particularly useful package. The usefulness of this package would then translate into adaptive skills, conferring selective advantages. That would eventually be reinforced as path-specific neural interconnections. How far this last bit has gone is the big bonus question. Language is clearly and strongly adaptive, but is a couple of hundred millennia enough to evolve a full-blown Chomskian language module? To what extent does language structure cognition? Language is learned in infancy before conceptual skills, but is that a post hoc fallacy? It seems more parsimonious to assume that linguistic commonalities are shaped by commonalities in the way the human brain processes information. Flavell poses the question: “To what extent are the capacities that underlie language learning specific to language and to what extent do they reflect more general properties of the cognitive system?”

Cognitive Linguistics
    George Lakoff, the founder of Cognitive Linguistics, argues that metaphor and its associations are fundamental to language. Cognition, linguistic and otherwise, is embodied originally in sensory experience and perception, to which we assign words, and we use these words to further frame our sense of reality. The more abstract we wax, the more complex our layers of metaphor get. From Wikipedia: “Cognitive linguists deny that the mind has any module for language-acquisition that is unique and autonomous. Although cognitive linguists do not necessarily deny that part of the human linguistic ability is innate, they deny that it is separate from the rest of cognition. They thus reject a body of opinion in cognitive science suggesting that there is evidence for the [massive] modularity of language. They argue that knowledge of linguistic phenomena is essentially conceptual in nature. However, they assert that the storage and retrieval of linguistic data is not significantly different from the storage and retrieval of other knowledge, and that use of language in understanding employs similar cognitive abilities to those used in other non-linguistic tasks.”
    One polarized opposite to linguistic nativism is known as Connectionism. The gist of this is that it’s language itself that has adapted to fit pre-existing structures and processes within the human brain. “Neural networks got much better at learning grammars because the grammars evolved culturally to be readily learnable by the existing neural networks” (Henrich). Language is software that has a capacity to use brain regions already in use for other functions, to take advantage of capabilities for neural reuse or retasking, and to exploit spandrels and exaptations for newly developed human purposes. To the extent that native structures and processes allow, and to the extent that language isn’t directly relying on inherited protolinguistic skills like gestural mimicry, the software can use whatever works, and isn’t bound by a native grammar. Language merely capitalizes on our native, pre-linguistic cognitive operations. Any apparent universality of grammar is strictly due to whatever constraints are imposed by our overlain neural substrates, together with a tendency for our cognitive processes to learn and converge on those that work most effectively.
    We usually learn words in context, just as in school we’re often told to use this new word in a sentence. This also constitutes instruction by example in grammar. We’re given plenty of examples of how to build sentences with the words as we’re learning them. It doesn’t matter that these sentences aren’t diagrammed for us. We seem able at a young age to get the idea. How this comes about doesn’t require an innate grammar, but this description doesn’t preclude one either. Our early words tend to be names of people, names of animals, objects, substances, and social terms. Early verbs will only name short-duration events. Word strings soon conform to the language spoken at home. The reaction of others to atypical or culturally incorrect grammatical expression may be sufficient to ensure conformity to grammatical linguistic norms.
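    A toy sketch can make the connectionist point concrete: a generic statistical learner, with no grammar built in, induces word-order regularities from nothing but example sentences. The corpus and the bigram model below are invented simplifications, standing in for far richer neural learning:

```python
# Grammar as learned transition statistics: count which words follow
# which, then predict. No rules are given; regularities are induced.
from collections import Counter, defaultdict

corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def likely_next(word: str) -> str:
    return follows[word].most_common(1)[0][0]

# Purely from exposure, the learner has induced that a noun follows
# "the" and a verb follows a noun:
print(likely_next("the"))   # a noun, e.g. 'dog'
print(likely_next("dog"))   # a verb, e.g. 'chased'
```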
    We can probably all agree that vocabulary is almost entirely learned from the culture, except perhaps for some native cries and calls, so the functions in question would primarily concern word combinations and syntax or grammar. Biological givens haven’t really had much time to evolve a language module or instinct, given the changeable nature of human grammars and the limited millennia in which we’ve been speaking (but this doesn't mean that genetic adaptations are not underway). Instead, the sophisticated linguistic thinking enabled by language is an emergent property of structural language programs. Language is not prerequisite to thought or to consciousness: this error may be a holdover from human exceptionalism, and the idea that language provides the reason that makes us rational beings, unlike those instinctual animals not made in the Imago Dei. We do have thoughts without words, and language may have arisen as a spandrel between this and vocalization. We might even be in mid-adaptation, a half-wired spandrel. But language is one basis of mentalese, or at least that part of our inner monologue that happens in words or verbally expressible forms, especially as rehearsals for speech. Composers still think in music, designers in images, chefs in taste, and lovers in touch. In short, it’s language itself that has done the evolving and adapting, yet it has emergent properties that free it from the constraints of fitness to a biological address, as well as the experiential here and now. Language helps us to form thought, without it being the very substance of thought, and is absolutely necessary for structuring a great deal of our abstract, technical, and scientific thought.
    Kim Sterelny restates what we’ve said of abstract or displaced reference, “Neither language nor thought are stimulus bound: we can both speak and think of the elsewhere and the else when. Both language and thought are counterfactual: we can both speak and think of how the world might be, not just how it is. We can misrepresent the world; we can both say and think that it is infested by gods, ghosts, and dragons.” He wonders “Just how language-like is the language of thought?” By what rules is the mentalese bound? Dan Slobin theorizes that we adapt how we think to the necessities of speaking clearly, a process he calls “thinking for speaking.” This is of course circular, since language itself has had to adapt to the ways the brain thinks without language. But the theory does find a home for some of the more important aspects of linguistic relativity, discussed soon. Lev Vygotsky thought inner speech was molded by the external through a process of internalization. Even though internal speech is different from spoken language, we learn to think in ways that we can express in words, so that at least some thought develops socially and culturally. The language of thought would have to be a subset of thought and not the whole of thought by any stretch. Thinking is clearly more than linguistic, but it’s language that renders it communicable, so much more effective, and so much more accessible.
    Language, then, is a prosthetic assist to our genetically inherited affective-cognitive processes, able to rise above some initial constraints but not others. It’s language itself that has done the most adapting. It’s evolved to be readily learnable by young children. It hasn’t had time to become an inherited faculty in itself, but it has had to adapt to innate limitations and biases, which can give it a sense of inherited universality. Vocabulary will be constrained either by what we’re able to perceive, or by what we’re able to describe. This will include our ability to nest terms within higher or lower categories or classes (Spot, dog, pet, mammal, chordate). The linearity of the human sense of time provides a major constraint, if not exactly to a narrative form, then at least to some sequential presentation, and the necessary seriality of our vocal output reinforces this. Communication with pictures circumvents this linearity. We are constrained by our own sense of causality or agency, and by the native heuristics that got us almost this far. Language would have to evolve to be simple to learn and use, but there would be additional pressures to expend some extra effort to learn and use it well, at least for those desirous of status, prestige, and a higher class of copulatory experiences, leading in turn to some selective advantage.
 
Language Development
    We’ve had some wild ideas about why language evolved. The old theory of language as an aid to planning the hunt was an embarrassment, since even a gestural language would startle prey. Gestural language isn’t much use in the dark, which might give us a clue. As usual, theories contend and don’t play well together, and the many contenders just won’t negotiate. Softer sciences have the same problem as the justice system: adversarialism. Whenever a debate gets going in earnest, what you will get is two exaggerated half-truths claiming to be the whole thing. False dichotomy is everywhere. More often than not, the truth is somewhere in between, or else one side has mitigating circumstances. But the answer also need not lie between them. One of the casualties of this is that the outliers are omitted or forgotten; in this case, the non-normative languages like math, music, closed conceptual and correlative systems, and NVC are left out of the samples. As with many of our debates, the answers to this version of the nature-nurture debate will likely incorporate some of both views, and maybe some from other quarters. Given the selective advantage conferred by our increasingly sophisticated protolanguage, some preadaptations likely occurred before language as we know it arose. This doesn’t need to be an evolved language module, but developments in such areas as Broca’s and Wernicke’s, and the arcuate fasciculus that connects them, may have favored or permitted the development of more modern forms of word-to-memory association and overt communication. Enhanced and new cognitive skills in general, especially in working memory, and its facility with coordinating both semantic and procedural memory, may have accompanied brain enlargement, and this may have provided strong selective advantages to H. erectus and his heirs, through both individual and group selection. Brain size also correlates strongly with the complexity of social organization, which may have provided a recursive evolutionary loop.
    A useful synthesis of nativist and connectionist theories might propose the coevolution of the brain, human culture, and language, with each of these affecting the progress of the others. There are enough things to be explained to go around, if these theorists would only share. Over two million years, there has been adequate time for some genetic evolution, especially given the great selective advantages that speech conferred in enabling more versatile, transmissible, and technological culture. This might not be enough time for a dedicated language module to develop, but the human brain has grown by 50% and our vocal apparatus has also changed significantly. We took a big evolutionary step when we developed our vocal apparatus, along with the precise motor control of it that accompanied our FOXP2 gene. Henrich’s idea is that FOXP2 “was favored by cumulative cultural evolution to improve our procedural, motor, or sequential learning abilities, for learning complex culturally emerging protocols for toolmaking. Once in place, these abilities could have been harnessed in our communicative repertoire to permit more-complex grammatical constructions… . Research applying brain-imaging techniques shows that using both language and tools (manual actions) involve overlapping brain regions. In fact, if you focus only on regions that are specific to either tool-use or language-use, you are left with few differences.” This gene may have first appeared in h. heidelbergensis, but its absence in h. erectus doesn’t mean that this noble progenitor had no capacity for primitive speech or language, and h. erectus is a creature we grossly underestimate. He survived a lot longer than we have, left Africa a lot earlier, and he got around sometimes by simple watercraft.
    What modern apes have been able to learn ought to provide us with a first indication of what more sophisticated languages are built on. The great apes have a far better facility at learning words than grammar, but they are able to combine words into new words, like Koko’s trouble surprise for crazy, or Michael’s insult smell for garlic, or Washoe’s water bird for duck. And Nim Chimpsky had a basic grammar for active and passive: me-verb and verb-me. It’s unknown whether this was learned or represented some cognitive process that’s native in his species. Some apes seem to have natural calls associated with modifiers that can turn danger from a maybe to a run-like-hell. Alex the gray parrot is the only animal known to have asked a question: what color? The questions we have about cetacean languages won’t be answered until researchers think to stop looking for human-type language and start building their models on the cetacean sensorium and umwelt. They should perhaps start wondering if cetacean lexemes might more closely resemble sonograms, which could easily be strung together into narrative stories.
    The structure of the brain lends itself to perceiving certain relationships in the sensed world, things causing other things, things posed in relationship to other things, things inside other things, or moving in relation to them, things doing good and bad things to us, and things at our mercy or submitting to our will. We’ll soon find out which parts of the brain are activated in configuring subject-predicate relations, and we’ll find that this is different from the part that configures prepositions in contexts, or the active and passive voices, or categories and sub-categories. Activities here find their way into grammatical forms. These relationships still have to be knitted together if they’re to be useful in language. The structure of that has to start out as software, making associations and connections. And the longer connections across perceptual and mental domains require more white-matter reinforcement to survive than does more proximal processing within a single brain area.
    Meanwhile the thoughts of Chomsky, et al, are showing their limitations, but without being entirely refuted. The idea that we have a fully developed language module in the brain, like an app for your smart phone, that contains an integrated set of rules and a native grammar, is a lot to ask of a mere few hundred thousand years of genetic evolution and encoding. It’s far more likely that there will be threads of truth woven through the idea, and neural and cognitive processes and modules in the brain that are primed to grasp and associate new information in specific ways that lend themselves to certain grammatical structures like subject-predicate and preposition-object. These more rudimentary functions can then be assembled through the evolving software of language and immersion in social and cultural learning.
    We appear to have an adaptive system in the linguistic domain with three main parts. Practical cultural learning, language itself, and neurobiology are all interacting and coevolving together. Maybe at some point the human brain simply mutated in ways allowing something beyond protolanguage to begin. Language gradually adapted to the evolving functions native in the human brain, especially functions like nonverbal communication, gesture, concept and symbol mapping, and working memory. Its effectiveness changed our behavior in ways that changed our lifestyles and diet, and allowed us larger societies. Both of these supported larger brains (limited by childbirth) and longer childhoods, with a lot of learning to be done here, allowing for still stronger cultural development. Language would still evolve to use circuits distributed widely throughout the brain, exploiting spandrels, exaptations, and capacities for neural reuse. Our more sophisticated vocal apparatus began as a happy accident relatively late in the game, but has conferred enough selective advantage to lock it into the h. ignoramus genome.
    More complex culture led to selective pressures for larger brains that were friendlier to our language use, or pressures against smaller brains that were not. Learned behavior and its intergenerational accumulation became more important. These selective pressures would have been on groups as well as individuals. Being immersed in an environment where language is a vital part of functioning is itself a selective pressure, and the evolution of language certainly puts selective pressure on the parts of the brain that use it. Sexual selection might well have played a big part in language development where linguistic ability translated to tribal status or prestige, or even just success at seduction with poetry and other clever tricks. Writing and reading are by far the youngest components of language, but they also confer the most powerful selective and adaptive advantages, both for individuals and groups. These are the intergenerational forms, the strongest connections between ancestors and descendants. Language is now prerequisite to effective cultural development. We were already on our way when the Chinese gave us paper and printing.

Linguistic Relativity
    The theories of universal grammar are truly Procrustean when it comes to accounting for outlier languages. There are plenty of exceptions to cite, not only among more “primitive” tribes, but also in languages like formal logic, mathematics, musical notation, disciplinary lexicons, taxonomies, nested analogies, correlative systems, and even systems of hermetic thought and divination. All of these have some combination of vocabulary and explicit grammar. Any useful definition of language should be versatile enough to handle these variants of language.
    Linguistic relativity is a loosely formulated theory synthesized (but not copied or quoted) out of the writings of Edward Sapir, Benjamin Whorf, and Alfred Korzybski. Simply and cautiously stated, it suggests that the way we talk about reality has a significant effect on the way we perceive it, and so ultimately on our behavior. This postulates a reciprocal relationship between language structures (like phonetics, morphemics, vocabulary, syntax and grammar) and the kind of experiential world which speakers of that language will inhabit. This might suggest that a person using a language with no nouns (these exist) would tend to live and think in a more lively world of verbs or processes. It also suggests that the prior existence of a name for an experience makes it easier for one to discover or recreate that experience, or to access a similar experience in memory. But the key word here is easier, which is used in the place of possible. Certainly not all perception is founded on language. The effects of language are weakest in the sensorimotor and native domains, increasingly strong in the emotional and personal with its trigger effects, and logically the strongest when it comes to culture, especially with abstraction and classification.
    While the earliest formulations of these ideas made use of disciplines like anthropology, sociology, and linguistics, they were always more about the philosophy than science. But they got off to a really shaky start, thanks to some incautious wording naming language as the primary determinant of perception, and so this was quickly referred to as linguistic determinism. This got it entangled in the nature-vs-nurture false-dichotomy fiasco, and it was assumed to take a blank-slate or cultural-relativity sort of stance. It soon got pitted against Chomsky et al. The most extreme interpretation, now referred to as strong linguistic relativity, is said to assert that language both structures and determines experience, that it forms the primary vehicle for thought as well as communication, and that linguistic categories determine cognitive categories. While some people may have believed in this stronger expression, it’s more often used in straw man fallacies to repudiate linguistic relativity as a whole, particularly in defense of the theories of Chomsky and his successors, right up to Pinker in more modern times. Meanwhile, on the weak linguistic relativity end of the proposition, it’s simply asserted that non-trivial relationships exist and we cannot really understand many of our mental operations, and the view on the world that those give us, until we grasp the significant roles of language.
    If we grant that culture influences the way we think about reality, why not language as well? On language and culture, the MIT Encyclopedia of the Cognitive Sciences (1999) offers, “Through language, and to a lesser extent other semiotic systems, individuals have access to the large accumulation of cultural ideas, practices and technology which instantiate a distinct cultural tradition. The question then arises as to what extent these ideas and practices are actually embodied in the language in lexical and grammatical distinctions. [Wilhelm von] Humboldt, and later Sapir and Whorf, are associated with the theory that a language encapsulates a cultural perspective and actually creates conceptual categories.” Languages differ in the way they interpret experience and in the way they organize representations for presentation. They can differ in what’s emphasized, in what it means, and in what’s left out. The language in use constrains at least the expressibility of our cognitions, and therefore their propagation. Weston La Barre, in The Human Animal, doesn’t hide his big disappointments with the limitations of language: “The sorry fact is that our unconscious linguistic habits shape our religions and our philosophies, imprison our scientific statements about the world, are of the essence of the conflict of postulated culture with postulated culture, are involved with our wars and other human misunderstandings, and are part even of our dreaming, our errors, and our neuroses.”
    Edward Sapir, in Culture, Language and Personality, gets us started here: “Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language and that language is merely an incidental means of solving specific problems of communication and reflection. The fact of the matter is that the real world is to a large extent unconsciously built up on the language habits of the group… . We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation.” Yet elsewhere, in American Indian Grammatical Categories, he offers one of his many denials of the same determinism of which the relativity hypothesis would soon be accused: “It would be naïve to imagine that any analysis of experience is dependent on pattern expressed in language. Any concept, whether or not it forms part of the system of grammatical categories, can be conveyed in any language. If a notion is lacking in a given series, it implies a different configuration and not a lack of expressive power.” Sapir and Whorf both contended that an adequate translation between languages is always possible. Where we lack specific words, experiences may be less accessible to memory or imagination, but that doesn’t make them inaccessible. Lucretius wrote about the properties of atoms and molecules, and the process of natural selection, nearly two millennia before Lavoisier and Darwin, even though he lacked the proper vocabulary to do so. Language facilitates thought more than constrains it. It facilitates or advantages certain ways of looking at the world, and even enables some perceptions that are otherwise unavailable through non-linguistic modalities such as raw sensory and conceptual metaphor. And it can also misguide and be used to misguide our thoughts.
    Benjamin Whorf, in Language, Thought, and Reality (1956) offered this summary: “We dissect nature along lines laid down by our native language. The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds - and this means largely by the linguistic system in our minds. We cut nature up, organize it into concepts, and [then] ascribe significance as we do largely because we are parties to an agreement to organize it in this way - an agreement that holds throughout our speech community and is codified in the patterns of our language.” And this: “All observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated.” Speaking to one aspect of the affective effects of language, he also offers: “Language is not simply a reporting device for experience but a defining framework for it. So if, from perhaps some unhealthy desire for sympathetic support, you describe your life in negative terms you will find that this will reinforce your mind’s negative emotions and make you unhappy and even more susceptible to feeling unhappy in the future. By simply doing the reverse and focusing on why you are lucky and grateful things are not worse, you will strengthen and increase your mind’s positive emotions and make yourself happy and even more likely to feel happy in the future.” A change in language can transform our appreciation of the cosmos.
    On thinking in language, Whorf offers, “Some have supposed thinking to be entirely linguistic. Watson, I believe, holds or held this view, and the great merit of Watson in this regard is that he was one of the first to point out and teach the very large and unrecognized linguistic element in silent thinking. His error lies in going the whole hog; also, perhaps, in not realizing or at least not emphasizing that the linguistic aspect of thinking is not a biologically organized process, speech or language, but a cultural organization, i.e., a language.” Here, Whorf steps out of bounds. Neuroscience has known for some time that even without an evolved language acquisition device or dedicated language module, there are neurological substrates, and language does make use of the way the brain processes experience. Language is not a simple cultural template that we impress onto a blank slate. Pinker correctly asserts this much. Language still needs to adapt to the cognitive (and affective) processes wired into the evolved human brain, or else it would be neither learnable nor usable.
    Whorf also questioned the idea that language does little more than express the contents of pre-linguistic thinking, “Natural logic says that talking is merely an incidental process concerned strictly with communication, not with formulation of ideas. Talking, or the use of language, is supposed only to express what is essentially already formulated non-linguistically. Formulation is an independent process, called thought or thinking, and is supposed to be largely indifferent to the nature of particular languages.” And “Natural logic holds that different languages are essentially parallel methods for expressing this one-and-the-same rationale of thought and, hence, differ really in but minor ways which may seem important only because they are seen at close range. It holds that mathematics, symbolic logic, philosophy, and so on are systems contrasted with language which deal directly with this realm of thought, not that they are themselves specialized extensions of language.” He makes an important argument: dropping the more unusual or non-normative examples is a form of selection bias, one that false-dichotomy arguments often require.
    Alfred Korzybski followed his philosophical and not-altogether scientific assertions with practical methods to circumvent the difficulties inherent in our languages. Whatever of Korzybski’s thought and work may have been dismissed as unscientific, he nonetheless left behind several useful ideas and cognitive techniques. He writes of his own sense of relativity: “We do not realize the tremendous power the structure of an habitual language has. It is not an exaggeration to say that it enslaves us through the mechanism of [semantic or evaluative reactions] and that the structure which a language exhibits, and impresses upon us unconsciously, is automatically projected upon the world around us.”  Of course, that goes both ways, with language entraining itself to some of our neural architecture, and that’s no guarantee that any real world is depicted either. But, “Humans can be literally poisoned by false ideas and false teachings. Many people have a just horror at the thought of putting poison into tea or coffee, but seem unable to realize that, when they teach false ideas and false doctrines, they are poisoning the time-binding capacity of their fellow men and women. One has to stop and think! … Humans are thus made untrue to ‘human nature.’” Time-binding is his term for intergenerational cultural transmission. From Wikipedia: “He argued that human knowledge of the world is limited both by the human nervous system and the languages humans have developed, and thus no one can have direct access to reality, given that the most we can know is that which is filtered through the brain’s responses to reality. His best known dictum is ‘The map is not the territory.’” And “Humans cannot experience the world directly, but only through their abstractions (nonverbal impressions or gleanings derived from the nervous system, and verbal indicators expressed and derived from language). These sometimes mislead us about what is the case. Our understanding sometimes lacks similarity of structure with what is actually happening.” Adhyāsa is the Sanskrit term for the superimposition of interpretive grids onto reality, or the false attribution of properties of one thing onto another thing.
    Korzybski’s General Semantics is a system of linguistic strategies to keep us conscious of our linguistic fitness, or failure to fit, with reality. It expands on Nietzsche’s perspectivism. Using the term non-Aristotelian, he asserts that we only know abstracted and oversimplified representations of reality. That to say a is b, or even a is a, is preposterous. To say something is either a or not-a is equally so. Neither is one thing likely to be the sole cause of another. Things we divide into two parts, like space and time or mind and body, are often best left whole. Even formal propositions should be examined in terms of similarities, proportions, or probabilities. Maps and models are only approximations, and descriptions remain descriptions, usually with other points of view available. The process by which we abstract is as much worth mapping as any reality we map. Language serves us best when we remember what it is. Definitions and metaphors are instruments for thinking and have no authority beyond the context in which they are used. He makes some important points about our semantic or evaluative reactions as well. We react to preconceptions about the perceived meanings of events, and to our linguistic representations of them, not to the events themselves. We are often well-served by delaying our reactions until we have a better sense of what’s happening. The failure to recognize multiple perspectives, or to suspect that better maps may exist than our own, is behind a lot of our conflicts. He promoted using such declarations as “I don’t know” and “Let’s see.”
    E-Prime, or English-Prime, is a set of linguistic practices that evolved out of General Semantics. It eliminates all forms of the helping verb to be: is, am, are, was, were, be, being, been. Korzybski himself was OK with a few uses, as an auxiliary verb, or when used to state location or existence. E-Prime leaves other helping verbs alone, like has, have, had, do, does, did, shall, will, should, would, may, might, must, can, and could. This attempts to put some cognitive distance, and a little time to think, between an action or thing and our abstraction of it. It forces us to think about what we’re doing when we call a thing a thing. The statement “E-Prime is a linguistic practice” might become “E-Prime uses linguistic practices.” This trick can be both inconvenient and awkward, but it’s never impossible to implement, and that says something in itself. As discussed earlier, some of the ways we can distance ourselves from rigid identities, beliefs, and affiliations eliminate the verb to be in self-description, freeing ourselves for more open explorations.
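    To make the practice concrete, here is a minimal sketch, in Python, of how a writer might scan a draft for forms of to be. The TO_BE_FORMS set and the e_prime_violations helper are names invented for this illustration, not part of any established General Semantics tool, and contractions like “it’s” would need handling of their own:

        import re

        # Forms of "to be" that E-Prime eliminates; other helping verbs
        # (has, do, will, can, and so on) are deliberately left alone.
        TO_BE_FORMS = {"is", "am", "are", "was", "were", "be", "being", "been"}

        def e_prime_violations(text):
            """Return each 'to be' form found in the text, with its position."""
            return [(m.start(), m.group())
                    for m in re.finditer(r"[A-Za-z']+", text)
                    if m.group().lower() in TO_BE_FORMS]

        print(e_prime_violations("E-Prime is a linguistic practice."))
        # [(8, 'is')] -- flagged; one rewrite: "E-Prime uses linguistic practices."

Even this toy version shows where the practice bites: flagging the verb is trivial, and the rewriting is where the thinking happens.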
    We might grant that the existence of card catalogs and the Dewey decimals affects only the most superficial organization of the library, and determines very little of its content. Using these devices certainly enables a new level of organization and accessibility to our database. Where time and energy for research are constrained (as they are in thinking and working memory), these will become even more valuable and have greater effects on output. But perhaps a more apt metaphor is the existence of search engines and social media and their more significant effect on content and presentation. Even further, lexical contents often become coequal with our sensory, episodic, and emotional memories. Language certainly assists in the organization, parsing, indexing, referencing, association, and accessibility of thoughts. Using it enables a new level of accessibility in working memory. But words, and the relationships between words, can also be experiences, and associations to experiences, and access to memories of experiences, and clues to new associations between experiences, all in their own right. Words exist that refer to objects outside of our sensory experience, like molecule or bacterium. Knowledge of molecules and bacteria will only be spread with language, not by direct experience, and this knowledge will certainly shape our behavior, and will likely alter even our overall longevity. The word poetry describes something that wouldn’t exist without language, and just think of all the babies that’s led to.

Semantics and Syntax
    It’s now become clearer that our vocabulary and grammar exploit different pathways and general purpose functions in the brain, and that these pathways and functions evolved long before language. They play important parts in our perceptions and native heuristics. Vocabulary is most at home in declarative or explicit memory (semantic and episodic), with syntax, on the other hand, in procedural or implicit (and autobiographical) memory (Hamrick 2018). The parts and pathways of the brain most concerned with stereotypical behaviors, procedures, actions, orientations, and directions, have undergone significant development in the last two million years, perhaps accounting for our pulling away from other apes, who have an ability to learn vocabulary but don’t do well with grammar. While language isn’t always or entirely bound to track experience in the same way as these perceptual and cognitive functions, the more sympathetic the two are, the more easily and quickly language will be learned. This means that native perception and heuristics could provide a best first guess for the substrates of grammatical processing, and offer at least a hint of a human universality.
    The above correlation of vocabulary with declarative memory, and of grammar with procedural memory, sometimes even extendable to nouns vs verbs, is imperfect wherever the difference between nouns and verbs isn’t all that cut and dried. Very early on for hominins, we might well have had a word for walk and then modifiers to speed that up or slow it down. A language might have a couple of useful prefixes that can turn a lexeme into a verb or a noun. We’ve seen words added to other words as modifiers in our experiments with apes learning language (like trouble surprise, insult smell, water bird). We can either see this as a deeper layer to semantic learning that simply involves better articulation of single ideas using multiple words or syllables, or we can see this as the first step towards a grammar that adds a subordinate modifier to a larger idea. We’ve also seen Nim Chimpsky’s basic grammar in me-verb and verb-me word combinations.
    In Old Chinese, a character could change from noun to verb simply with a change in its context. In a way, since everything changes, nouns are really just slow verbs, and verbs are just fast nouns. Whorf, like Nietzsche, speaks on the arbitrary classification of nouns and verbs: “But nature herself is not thus polarized. If it be said that ‘strike, turn, run,’ are verbs because they denote temporary or short-lasting events, i.e., actions, why then is ‘fist’ a noun? It also is a temporary event. Why are ‘lightning, spark, wave, eddy, pulsation, flame, storm, phase, cycle, spasm, noise, emotion’ nouns? They are temporary events. If ‘man’ and ‘house’ are nouns because they are long-lasting and stable events, i.e., things, what then are ‘keep, adhere, extend, project, continue, persist, grow, dwell,’ and so on doing among the verbs? If it be objected that ‘possess’ and ‘adhere’ are verbs because they are stable relationships rather than stable percepts, why then should ‘equilibrium, pressure, current, peace, group, nation, society, tribe, sister,’ or any kinship term be among the nouns? It will be found that an ‘event’ to us means ‘what our language classes as a verb’ or something analogized therefrom. And it will be found that it isn’t possible to define ‘event, thing, object, relationship,’ and so on, from nature, but that to define them always involves a circuitous return to the grammatical categories of the definer’s language.”
    In some ways, our background or unconscious use of grammatical rules and categories helps to process our understanding and organize our thoughts for presentation to the larger culture. The effects might be subtle and hard to notice, but grammar will make demands on how we sequence the pieces of complete expressions. We are forced into linear trains of thought. It can also have mandatory effects on how we describe or express causality, usually in a way that oversimplifies converging causal forces. Grammatical gender is probably the silliest and most inconvenient thing we do. At least prepositions and tenses specify spatial and temporal relations which seem to coincide well with naive realism and metaphors derived from that.
    A lexeme is a sub-category of thought, forming an associative bond with at least one other memory, which may be another lexeme. These call things to mind, to re-mind us, to re-cognize the past, and they can do this in a context where the recalled thing is neither present nor presently hinted at. Lexemes are associated with experience, becoming attached to memories along with the feelings and boundaries of the experience, adding another handle that facilitates recall. Lexemes are the basic unit of meaning in a word, regardless of prefixes and suffixes or part of speech. They become words when they change form according to morphological rules that adapt them to grammatical contexts. The broadest set of meanings for a word is called its semantic field. A concept would be an abstract idea named by one or more central lexemes and surrounded by sufficient peripheral associations to define and constrain the extent of its meaning. Concept effectively means captured, just as defined means delimited. Connotation is less delimited. Content words name objects of reality and their qualities. Function words may have little or no substantive meaning but have places in linguistic structures. Semantics studies relations between signs and what they refer to (denotata, or meanings). Syntactics studies relations between words in linguistic structures. Pragmatics studies context and the effects words have on people. Lexemics studies the set of inflected forms taken by a single word-idea or core meaning, either by definition or connotation. Morphemes are context-specified lexemes, as these are shaped to their sentence structure.
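    As a rough illustration of the distinctions above, consider this toy sketch in Python. The Lexeme class, its field names, and the sample entries are all invented for this illustration, not drawn from any linguistics library:

        from dataclasses import dataclass, field

        # A toy model: one core meaning (the lexeme), its broader semantic
        # field, and the inflected word forms that morphological rules produce.
        @dataclass
        class Lexeme:
            core: str
            semantic_field: set = field(default_factory=set)
            forms: dict = field(default_factory=dict)

        run = Lexeme(
            core="run",
            semantic_field={"move quickly", "operate", "flow", "compete"},
            forms={"3sg": "runs", "past": "ran", "participle": "running"},
        )

        # The same lexeme surfaces as different words in different
        # grammatical contexts:
        print(run.forms["past"])  # ran

The point of the sketch is only this: the word forms all hang off a single core meaning, while the semantic field spreads out beyond any one of those forms.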
    To some extent, per weak relativity, observation will be a function of the symbol systems the observer has available. This is by no stretch an absolute statement, but it helps to have names for things. In the lore of sorcery, this is how we get command of our demons, provided that we “spell” their names correctly. The more limited the symbol systems, in number and kind, the less we are likely to notice. A symbol system is, in effect, a point of view, a lens, or an interpretive template. The more ways of talking we are capable of, the more complete our understanding might be. Words can call attention to real things that we can’t perceive. How else would we know that bacteria were all around, on, and inside of us? We could be less concerned about infection or contagion without them, but more concerned about the demons and witches making our loved ones die.
    We are still and forever bound by our own sensory metaphors. Pebbles and ocean waves still struggle to combine into our picture of light, and our ideas of physical force in physics are still felt in our muscles. Words are referents, signs, symbols, and metaphors, not usually the things in themselves. Anatol Rapoport cautions of their use: “Therefore the linguistically hygienic use of metaphor depends on the full recognition of its limitations, that is, on critical consciousness of the generalizations, analogies, and abstractions involved.” On the larger scale of syntax, and on language overall, Korzybski repeats his similar warning not to confuse the map with the terrain it tries to re-present.
    Reification is the process of mistaking an abstraction, such as a word, for a real thing. Nietzsche writes in his fragmented final notes, “Our bad habit of taking a mnemonic, an abbreviated formula, to be an entity, finally as a cause, e.g., saying of lightning ‘it flashes.’ Or the little word ‘I.’  To make a kind of perspective in seeing the cause of seeing: this was what happened in the invention of the subject, the I…” Recall the 1752 riots over the Gregorian calendar reform. A rainbow isn’t a real thing. It has no location out there in space. The word names a process that begins and ends with what our senses do with a particular stimulus. Reification is also a common error in naming mystical and religious experience. We might have an experience steeped in what seems to be an infinite or unbounded consciousness, but many make the error of taking our own way of conceptualizing this to be a fundamental property of the universe that we think we’re experiencing. A tautology, in a narrow sense of the word, is a word or idea that’s trivially true, or true by definition. It isn’t falsifiable and requires no trial or test in any larger reality. A classic example is Anselm’s ontological proof of the existence of god. We have named many dozens of angels, together with their missions here on Earth, and these have been passed down for centuries, giving them a life of their own. But this fails to bring them into existence, even when praying their names gives people the shivers.
    Parsing poses another big anticognitive issue. Much of what we perceive is more of a continuous medium or a spectrum than the articulated and digitized models that end up in our minds. Ideas may be nothing more than relative locations within these continua. Sometimes there are biological reasons for parsing our spectrum of experience, as we do with the RGB cones in our eyes. Sometimes there are physical markers, as there are with spectral lines in the EM spectrum. Sometimes our articulating words are based on real physical properties, such as the vibratory harmonics in sound waves. Other times, our efforts at parsing make little or no sense at all, like our parsing the year into twelve months of varying lengths, or the phonetic capacities of human speech into randomly sequenced letters of an arbitrary and incomplete alphabet. Yet people who are lacking in critical thinking skills will add up the numbers of their birthdates together with the numbers of the letters of their names, and somehow think the result will speak to them of their character and destiny. A current hot-button, political correctness issue is the attempt to eliminate pre-speciation nomenclature from the vocabulary of science, even though obvious genetic differences develop in isolated populations of a species due to variant adaptive strategies in different niches. Because it makes humans squeamish to talk about races, names for the process must now be denied to gophers and science in general.
    The expansiveness or scope of a word’s semantic field can be too broad, too narrow, or Goldilocks-right. And interesting things happen as a word’s scope shifts over time. The Chinese word Dào (formerly Tao) used to mean way, road, path, or storyline, and something akin to natural law. It’s an example of a word that started out rather specific and clear, but still rich in connotation as a sensory and conceptual metaphor. Yet it came to be almost meaningless with over-elaboration by the religious. Sometimes words are a lot more interesting when rich in meaning. I’m not talking about mere polysemy or homonymy here, where the same word has numerous, ambivalent, or even unrelated meanings. Old Chinese used to have a lot more words used singly, while the whole of the cultural experience still needed to be captured within a vocabulary of (let’s say) ten thousand words. Those words had to be fairly plump and juicy, rich in meaning and connotation. Consequently, there can be no one way to translate old Chinese poetry, or Laozi. There can be really bad ways, as our bookstores attest, but there will always be more than one good way. As vocabulary began to develop, we came to a point where we couldn’t have a single word for every experience. We could put two words together to better articulate an experience, even while remaining parsimonious with vocabulary. This is what had to occur in transitioning from ancient to modern Chinese, in lieu of developing and memorizing an impossible number of characters as the culture elaborated. However, there never has been more than one word for dragon or tiger.
    The articulation, graininess, or resolution of our vocabularies will have optimums. Sometimes extremely inclusive words can be useful. Due to the ability of English to incorporate other “foreign” words and phrases, it now has more words for snow than the Inuit language (and I just read that Mainers now have seventy curse words for snow). I would like to see a single English word that embraces the combined meanings of conscientiousness, conscience, and consciousness, as does the Pali word appamada. But sometimes this breadth can be vague and counterproductive. I write this shortly after the American Psychiatric Association published its 2013 DSM-5, where it made the truly boneheaded mistake of lumping a number of the autism spectrum conditions (ASCs) into the singular “Autism Spectrum Disorder” (ASD). Unfortunately, now many neuroscientists are adopting this oversimplification in their recruitment of test subjects, erasing several valuable sampling categories, and washing out the possibility of returning to examine different combinations of genetic and epigenetic factors for the several relatively distinct conditions. So we do have a need for articulation, whether by way of a more limited set of words with lots of potential for modification, or just an increasing number of words.
    Taxonomies can come and go. It wasn’t that long ago that we diagnosed medical conditions on the basis of which of the four humors predominated. We now regard this as humorous. Back when psychology was defining itself as the study of behavior, only a few of us noticed that psychology, too, was a form of behavior, and specifically, a languaging and taxonomic behavior, like the Hermeticists of old, identifying all the parts of the psyche, soul, or spirit. There now seem to be more parts than id, ego, and superego. We err when we take this effort to be a set description of reality as it is, and then we find the need to change it. The exceptions to the initial rules accumulate until whole systems need to be recodified. The history of Linnaean taxonomy provides a good example: we now need subfamilies and superorders beyond the original KPCOFGS. Hierarchical classification is ever fundamental to organizing our thoughts, and it appears early in childhood. It’s an easy thing to have one idea embrace both robins and meadowlarks, more challenging for both robins and penguins, or a chihuahua and a dire wolf. It will take a still larger, and less intuitive, idea to embrace a robin and a musk ox, and larger still, a robin and a redwood. The more abstract we get, the more we can be sure that this is a linguistic and not just a perceptual function.
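    A small sketch can make this nesting concrete. In the toy Python fragment below, the more distantly related two things are, the higher and more abstract the category needed to embrace them both. The abbreviated lineages and the smallest_shared_category helper are invented for illustration, not a real taxonomic database:

        # Abbreviated top-down lineages (kingdom, phylum, class, order).
        LINEAGE = {
            "robin":      ["Animalia", "Chordata", "Aves", "Passeriformes"],
            "meadowlark": ["Animalia", "Chordata", "Aves", "Passeriformes"],
            "musk ox":    ["Animalia", "Chordata", "Mammalia", "Artiodactyla"],
            "redwood":    ["Plantae", "Pinophyta"],
        }

        def smallest_shared_category(a, b):
            """Walk both lineages from the top and return the last shared rank."""
            shared = None
            for rank_a, rank_b in zip(LINEAGE[a], LINEAGE[b]):
                if rank_a != rank_b:
                    break
                shared = rank_a
            return shared

        print(smallest_shared_category("robin", "meadowlark"))  # Passeriformes
        print(smallest_shared_category("robin", "musk ox"))     # Chordata
        print(smallest_shared_category("robin", "redwood"))     # None: only "life" embraces both

The higher the walk has to stop, the more abstract the embracing idea, which is the sense in which a robin and a redwood demand a larger thought than a robin and a meadowlark.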
    This optimum of articulation holds true for stereotypes as well, but here our higher-order and more inclusive abstractions can extricate us from some of our perceptual problems. I like to think I’m more fundamentally Terran than American, and more Human than Caucasian, and more Hominin than Human, thankfully, now that Human has become such an embarrassment. But although abstraction can take us high above a problem, where solutions can often be found, and where it’s the very nature of higher-order thinking, abstraction can also be a problem. It can value abstracts over particulars, the nation over its citizens, the religion over its sinners. Abstractions are models of the real thing, often constructed for specific purposes, with intentions that slant the presentation to show some particular aspect. They are maps, not the terrain. Maps are safer places to get lost, but that ain’t living.
    Local lexicons, or intradisciplinary jargon, can be rich sources for cultural appropriation. Cultural appropriation is another term that the PC twits have abused, but it’s a big part of how cultures grow. Whorf wrote, “Every language and every well-knit technical sublanguage incorporates certain points of view and certain patterned resistances to widely divergent points of view.” These are the usual sources of our shorthand abstractions (SHAs), such as evolution, adaptive fitness, emergence, entropy, memes, and scientific method, words that summarize complex processes with single words that open into large domains by their implications. But we also have toxic SHAs that can drag whole piles of nonsense into our heads with just one or two words: intelligent design, papal infallibility, original sin, and salvation might come to mind for some of us.
    Semantic reactions were pointed out earlier by Korzybski. These play a huge role in anticognition, particularly interfering with our ability to learn calmly and objectively from careless verbal expressions, and when wielded in persuasive rhetoric, political and religious argument, and advertising. Each individual is a unique constellation of associations, connections, relations, and triggers, but there are enough commonalities across populations that working rules of thumb can be derived, and whole populations conquered, converted, or sold a bill of goods. Is this the “right to life” or the “right to choose”? Will it be “illegal aliens” or “undocumented workers”? These are the triggers that get pulled and the buttons that get pushed. They have us coming and going. We just don’t want these things wired into our control panels.
    The manner in which we qualify particular linguistic assertions, especially about ourselves, our identities, beliefs, affiliations, and abilities, can have a powerful effect on both perceptions and outcomes. We’ve already made much of the difference between “I am X” and “I like X,” and “I tend to favor X.” There are other useful examples. “I can’t do that” can easily have an entirely different outcome than “I can’t do that yet.” “I don’t get it” becomes “I don’t get that yet.” To say “You are a rude person” might verbally express the same reaction as “You were really rude to me,” and even “I felt like you were being rude to me.” But the outcomes will likely differ. “In my opinion” and “In my humble opinion” probably mean the same thing, that you aren’t really being all that humble, but both will change the way that any statement they precede might be taken. With regard to the exercise of agency and its corresponding responsibility, we can watch truly twisted verbiage and guilt-shirking, non-agentive language in both children and politicians. Little kids aren’t the only ones to switch to the passive voice. Perhaps most famously, mistakes were made. The way we phrase things will affect the way we place blame and seek compensation or justice.
    The specific anticognitives pertaining to the Linguistic Domain are listed and discussed in Chapter 3.7, Logical Fallacies.



2.10 - Metacognitive Domain

Metacognition and Metastrategic Knowledge, Agency and Free Would,

Mindfulness and Concentration, Heedful Diligence or Appamada

“The significant problems we face cannot be solved at the same level of thinking we were at when we created them.” Albert Einstein

Metacognition and Metastrategic Knowledge
    I’ve long imagined that the human condition and search for transcendence was like a crawl through a maze, down on hands and knees, following the rules set forth for acceptable searching, but with walls only one meter high. Solutions to many or most of our predicaments can be had simply by walking erect and looking around, even if that means breaking some rules. A 2015 cartoon by Hilary B. Price depicts a lab rat walking atop the walls of a maze and wondering “Why didn’t I think of this earlier?” In other ways, our search for ourselves is a bit like looking for our own eyeballs and listening for our ears. The prefix meta- in metacognition is used in its sense of beyond, but I think of it more here as above: taking a step up to a higher vantage point, or to a higher level of abstraction or frame of reference, for the sake of perspective.
    Metacognition is most commonly understood as “thinking about thinking,” or awareness and understanding of our own thought processes, or higher-order cognitive skills. It was coined by developmental psychologist John H. Flavell in 1976. But the term will be used here in a stricter, narrower sense, of thinking effectively or agentically about thinking, and thinking along with its ever-present affect. Even philosophers can and do think about thinking, but the nowhere that this thinking so often goes says much. You can think about thinking and still be utterly wrong about what thinking and the mind that does it really is. To qualify as metacognitive here, it has to do some real work in reality, or it’s just as useless as metaphysics. Here the word will refer more strictly to overview that either has or can enable a sense of agency, a sense of the game afoot, a participatory awareness, a self-regulation of mental affairs, or a supervenience by emergent mental processes on more fundamental and biological ones. By this definition, the mere perception of mental events isn’t necessarily metacognitive: it’s simply self-aware thinking. Agency is being in a position to judge the value of, and to reassign new values to, cognitive processes and accompanying affective states. The Buddha himself regarded the mind’s sense of its own contents as nothing more than another one of the senses, numbering six, with this alongside the usual five. Mental contents and activities would require some extraordinary properties to take them beyond this dimension.
    Before going further, note that the word “agentic” has now become a Janus word, with two perfectly contradictory meanings. According to Albert Bandura, agentic people are “self-organizing, proactive, self-reflective and self-regulating as times change. An agentic perspective states that we are not merely reactive organisms shaped by environmental forces or driven by inner impulses.” This is an adaptive intelligence, the employment of means to an end, and the exercise of volitional agency. The word is used here in this sense. But according to Stanley Milgram, an agentic person “behaves like an agent, assuming no responsibility for actions or their consequences, only following the orders of someone in authority.” This sense of the word will not be used here, even where such behavior is described.
     A near-synonym for metacognition as used here could be metastrategic knowledge (MSK). This is said to encompass several cognitive skills, such as a grasp of false belief and its dynamics, distinguishing between appearance and reality, ability to vary visual perspective, and introspective awareness of our own thoughts. Metacognition regulates first order cognition. Deanna Kuhn (2000) proposes that metacognition and metastrategic knowledge operate differently on declarative and procedural knowledge. The first (as I see it) can alter associations to memories, including the affective components. The second can alter our behavioral attitudes and choices by fine-tuning the values associated with strategic options. We alter our effective output by inhibiting inferior or ineffective content and strategies. But thought about thought that ultimately goes nowhere is really nothing more than thought. I remain dissatisfied with the development of this idea in its current use and practice for the same reasons I have issues with critical thinking: the role of affect in cognition is grossly understated, underestimated, and even ignored, while the spotlight remains disproportionately trained on reason.
    John Dewey’s term for metacognition was reflective inquiry, “in which the thinker turns a subject over in the mind, giving it serious and consecutive consideration.” He defined reflective inquiry as an “active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends” (1933). He identified five aspects, which roughly parallel a scientific method: 1) suggestions, 2) intellectualization into problems for solving, 3) the use of suggestions as hypotheses to guide observations in collecting facts, 4) mental inferential elaboration employing reason and other processes, and 5) the testing of hypotheses. Gardner proposed an existential intelligence or moral intelligence, being reluctant to reify a spiritual intelligence. This asks and grapples with the deeper questions of existence, meaning, value, purpose, and morality. This is said of “individuals who exhibit the proclivity to pose and ponder questions about life, death, and ultimate realities.” He didn’t elaborate on this.
    Researchers at the UCSD School of Medicine devised a preliminary scale to chart the dimensions of wisdom, and named it SD-WISE. They named only six components, to wit: social decision-making and pragmatic knowledge of life; emotional regulation; reflection and self-understanding; tolerance of diverse values; ability to deal with ambiguity and uncertainty; and prosocial attitudes and behaviors like empathy and altruism. Outside of being somewhat biased towards prosocial neurotypicals, and the fact that it leaves out some other rather important dimensions, like self-directed behavior, reframing ability, distancing from belief, a sense of humor, creative and divergent thinking, and a knack for revaluing values, it still offers six dimensions worth studying.
    Conventionally, metacognition includes three groupings or categories of perception: 1) trained on content or declarative knowledge, wherewith we assess the contents of our minds, concepts and schemas; 2) trained on task or procedural knowledge, wherewith we perform operations on what we think we know; and 3) trained on strategic or conditional knowledge, wherewith we enact methods of vetting information, learning, unlearning, adapting to a changing information landscape. The two components of metacognition are said to be 1) awareness and knowledge of cognition itself and how objects are recognized, or attention paid to the contents of the mind, and 2) the allocation of awareness in the self-regulation of cognition, including methods of inquiry and methodical doubt.
    The metacognitive domain holds the most promise of relief from stupid anticognitives. Activities in the metacognitive domain can intervene in or upon any other domain, or at least inform any of the others. But there are also a number of errors to be committed here as well. The reification of first-hand experience is certainly one of these, with enormous consequences for the world when this leads to creating religions and homocentric ideologies. There is the confusion of inner truth with truth itself, or the perspectival with the objective. Another is the cast of imaginary entities that we create within that set of emergent qualia that we collectively call our minds or our selves. We construct the little homunculus that squats behind our eyeballs, peering out on the world and operating the self-controls. This little entity is like a user-generated interface between the nervous-endocrine system and the equally incomprehensible world. Like a user interface, it shows you neither the code nor the binary, but it still affects and manipulates them. This works for us in a naively realistic way some to most of the time, but it isn’t any more real than the perceptions it confabulates. If we can remember to see this for what it is, we can maintain some creative control over what it is and what it does. Some sense of a first-person identity and its limited perspective will be present in even the most exalted and mystical of egoless states, or else we would have no accounts or memories of them.

Agency and Free Would
    “Decide, v.i. To succumb to the preponderance of one set of influences over another set.” Ambrose Bierce
    There’s a current bandwagon in psychology and some related fields that will object much to this narrower version of metacognition on the grounds that true conscious agency (or free will) has yet to be established. Critics may then cite the Libet experiment, discussed shortly. We can grant them this, stipulating some of their argument: a true agency isn’t as common as most people think it is, neither in its distribution throughout the human population, nor in the frequency of its use by those who have the requisite mental skills. But effective or agentic cognition has been fundamental to such practices as Buddhism for millennia now. Free will is a funny idea, often posed as one of those falsely dichotomous “do we or don’t we?” questions. Compatibilism names the notion that free will and determinism can coexist. The idea goes back to the Stoics. Michael Gazzaniga writes, “Indeed, even though we may acknowledge that there are certain physical, biological, and social forces that influence our decisions and actions, we nonetheless feel as though ‘we’ are somehow separate from these impersonal forces, and that rather than being at their whim, it is ‘we’ who are the final arbiters in making the choices that we do.” But why do “we” only really do this such a small percentage of the time? The assertions below go beyond a simple compatibilism. The process can be indirect or roundabout, interacting with determining factors, and involving a metaphorical self-reprogramming. Experiments showing that decisions are largely made before consciousness gets involved don’t really constitute an argument for determinism, since decisions made consciously with emotional content can make recursive cycles and forays into the unconscious and move things around down in there. Initial decisions can be made in loftier and supervenient realms by those who have developed the requisite emergent talents and methods. This can also be a function of a personal revaluation of values.
    The Libet experiment did much to dethrone the conscious mind as a real-time executor of human action. In brief, it showed that the subconscious mind will arrive at decisions and begin their execution hundreds of milliseconds before the conscious mind “comes to” those decisions. In many ways, the conscious mind then simply rationalizes what the body and brain have already decided. In Pinker’s words, “The  conscious mind … is a spin doctor, not the commander in chief. It merely tells a story about our reaction.” The typical conclusion drawn is stated by David Oakley in Chasing the Rainbow, “Despite the compelling subjective experience of executive self-control, we argue that ‘consciousness’ contains no top-down control processes and that ‘consciousness’ involves no executive, causal, or controlling relationship with any of the familiar psychological processes conventionally attributed to it. The experience of consciousness is a passive accompaniment to the non-conscious processes of internal broadcasting and the creation of the personal narrative. Though it is an end-product created by non-conscious executive systems, the personal narrative serves the powerful evolutionary function of enabling individuals to communicate (externally broadcast) the contents of internal broadcasting.”
    The problem with this conclusion is that it’s a straw man fallacy, start to finish. It arbitrarily assumes that agentic processes must occur within real-time slices of hundreds of milliseconds. Then it decisively sets out to slay that hypothesis. But we can’t just arbitrarily or conveniently define agency in terms of processes spanning but fractions of a second. The conclusion begs the question by defining “free will” or agency far too narrowly, and doing so in order to fit the results of the experiment. It’s preposterous on its face. Despite all the peer review, E pur si muove. The experiment does absolutely nothing to show that emergent agency and self-directed behavior are out of the ontological running over more extended periods of time.
    Agency isn’t simply a matter of engaging a mental faculty called the free will, like some switched motor, or awakening some emergent homunculus squatting in the forebrain, or rationally looking at options and simply making a choice. We have supervisory systems in the brain, especially in the PFC and ACC, to choose behaviors, suppress urges, and manage habits. Agency is a negotiation between these parts of the brain, in order to determine how to respond to stimuli in certain ways later or next time, by changing values, or our frames of reference, or associated affect, or our sense of meaning and relevance. It’s a recursive process by which we try to gradually educate the subconscious mind to respond to stimuli in more successful and adaptive ways. The secret to free will is in consciously altering the unconscious, but this is a process that requires some time. It also includes our development of inhibitory attitudes, what Ramachandran calls free won’t, a function of the dlPFC. This encompasses cognitive inhibition, deferred gratification, and inhibitory control or restraint. An ability to disengage or distance ourselves from a dilemma may also be the wedge that we need to permit free choice, sometimes just by giving us more than a fraction of a second to ponder. We may not have free will in decisions involving only milliseconds, or perhaps even minutes. But we do if we have enough time for behavioral learning pursuant to decisions made consciously. We have to create an executive function that works by effecting longer-term changes in the subconscious. We can also do this by making alterations in what we think is valuable, and in which values themselves are worth having.
    It’s not surprising that people cite the Libet experiment here. But while the experiment has much to tell us, the stock explanation that it disproves agency is specious, a manipulation of the hypothesis after the data is in. This we can grant: most people aren’t free most of the time. Perhaps “free would” is a lot more common. The extent of self-management, for most people most of the time, is this: “You dangles a carrot in front of your nose, and you goes wherever the carrot goes.” We often use props for our resolutions, like pay-in-advance gym memberships, or down payments on our dream object, or marriage licenses. It doesn’t always have to be so roundabout as this, but it does seem to require working with the brain as it’s actually put together. We can still use self-made user interfaces like rational concepts of free will to pretend to do the work of making choices, but they need to be connected somehow to a far deeper layer of coding. They can’t just be delusions, or claims made in the holy books. We need to create ideas that have strong emotions associated with them, and we have to make those ideas our own if we want to associate them with the word free. We need to build our chosen emotional reactions through reiterative or recursive processes that cycle through the unconscious, and this process takes more than seconds. Perhaps most important is our need to take charge of how we value things, what we regard as important, relevant, or meaningful. Being able to dismiss a temptation as personally unimportant is how to conquer the temptation. Fighting it only reifies the damned thing. This can be learned, and to learn this is a choice. We can figure out these new values and relevances rationally first, if we like, but this in itself won’t plug them into the limbic system. We need to either walk these new ideas into experiences that have the power to alter us in some deep ways, or we need to invite ourselves to have profound experiences that are known to produce new values and relevance.
    It’s true that agency or free will, as it’s commonly understood, as a moral faculty or an accursed gift from the creator on high, is an illusion, and even a delusion. Most people aren’t free most of the time. It’s almost certainly a lot rarer in both its distribution and frequency of occurrence than is commonly thought. It’s an emergent process, like mind itself. It might self-structure or it might lend itself to self-construction. There are ways it can disconnect itself from its own causal factors and even have a say in the operations that created it. In philosophy, supervenience describes the relation by which a system’s upper-level properties depend on its lower-level properties: there can be no difference in the upper level without some difference in the lower. What the emergentist adds is that the whole can be greater in its abilities than the sum of its parts, and that the emergent level can exert a determinative effect back down on the lower-level properties that gave rise to it.
    The stricter materialists, reductionists, and behaviorists are still denying emergence. Somebody somewhere seems to have ruled that something has to be measurable or material in order to exist, or to have any claim to reality. But something has only to be effective, to be able to alter reality, before it can have such a claim. My love of the color blue, although neither love nor blue are physical, might very well lead to me painting large parts of my environment blue, effecting a measurable increase in the ambient 450-500 nanometer E-M wavelengths. As someone who enjoys Theravada Buddhism, I don’t believe that “I” am an ontological reality, and am certainly not a lasting phenomenon, whether emergent or not. But the self-schema that this experience leads to will modulate my reactions and have real effects in how I live my life, and perhaps some effect on the world around me. This becomes reality in that sense. Carolyn Dicey Jennings says it this way: “Demonstrating the existence of illusions of will is not the same as demonstrating the absence of will. My understanding of a substantive self is as a physically realized emergent phenomenon - it is made up of parts but it has a property that goes beyond the sum of its parts, in that it has some degree of power or control over its parts.”
    The word resolution is an odd sort of Janus word, in that it has separate meanings that can pertain to either consciousness or will. To the astronomer or optics engineer, the word will refer to a clarity of vision or a fine-grained articulation in the field of view. To those set to turn their lives around on New Year's Eve, for real this time, resolution is an avowal that's backed up by persistent determination, often for a whole day or more. Interestingly, this kind of resolution works best when the other kind, the clarity of vision, can also be maintained. This is the function of value in the exercise of agency.
    The issue of accountability for our behavioral choices arises here with the topic of agency or free will. Is it our upbringings that commit our crimes? Are we culpable for refusing to learn constraints? Is incarceration a separate issue entirely, just a power arrogated by society to set malefactors aside to prevent further damage? Is it our duty to teach them a lesson they’ll never forget? Do we need to be taught legal behavior like it’s potty training? We almost have to separate this from the philosophical issue of agency and move it into the territory of the social contract.
 
Mindfulness and Concentration
    Attentional control, paying attention, or mindfulness is another important executive function within this domain. It’s a practice, which implies that it requires an effort. Daniel Goleman, in The Buddha on Meditation, provides an adequate starting description of mindfulness practice: “The state of mind in meditation is one of sufficient attention, relaxed alertness, presence, an unattached involvement or observation, un-interpretive, non-judgmental, a readiness to observe what comes and goes. While we mind or attend the various objects of mindfulness, we merely notice them as they come and go, like frames in a film, not allowing them to stimulate the mind into thought-chains of reactions to them.” Readiness might be the most important word in this description. It’s a state where even bright, sudden, or disturbing stimuli simply wash through you, without compromising your equanimity. You’re neither too relaxed nor too vigilant. Similarly, Sam Harris calls mindfulness, “simply a state of clear, nonjudgmental, and undistracted attention to the contents of consciousness, whether pleasant or unpleasant.” Harris borrows an illustration from Insight Meditation Society co-founder Joseph Goldstein. He “likens this shift in awareness to the experience of being fully immersed in a film and then suddenly realizing that you are sitting in a theater watching a mere play of light on a wall. Your perception is unchanged, but the spell is broken. Most of us spend every waking moment lost in the movie of our lives. Until we see that an alternative to this enchantment exists, we are entirely at the mercy of appearances.” That’s the meta of mindfulness.
    Mindfulness practice takes a lot of forms, with the common characteristic of attending this moment, whether the objects of our attention are within or without. Simply attending as we move through ordinary daily life, chopping wood, carrying water, serving tea, is a common form that requires no special cushions, mats, or coaches. Ramping up the attention we pay to the world is energetically more expensive than dimmer and more ordinary awareness. Our attention seems to be more urgently attracted to the novel and the unexplored, until we have “been there and done that,” and can now return to something more like sleepwalking.
    The final two steps on the Buddha’s Eightfold Path most concern us here: Samma Sati, Right Mindfulness, and Samma Samadhi, Right Concentration. The first is a systematic observational tour through our physical being, our feelings and sensations, our mental processes and activities, and the objects of our thoughts. The second is performed in a meditative absorption and consists either of Samatha Bhavana, focused mental practices designed to achieve particular states of mind or awareness, or Vipassana Bhavana, the development of insight by introspection. The latter is the closest to Zazen. This is being unblinkingly watchful, seeing or knowing phenomena for what they are as they arise and disappear, the vision of every specific thing formed as being impermanent, imperfect, and having no independent existence. The objects of our consciousness simply arise out of the depths, to be observed until they go, popping like bubbles and passing away. But we see their origins, their connections, and then their passing. And it’s OK to learn and understand things during that process, this being an important part of minding.
    Although practiced in Buddhist countries for centuries, mindfulness has been slowly catching on in the West, along with big helpings of commercial and new age hype, of course. Some practices can (and probably should) begin at an early age, as soon as the toddling is done. Few structured programs seem to begin with children younger than school age, but for the younger kids, simple engagement, make believe, calling attention to things, pointing out their thoughts, feelings, and breathing, and encouragement to explore while paying lots of attention seem to accomplish the same goals without need of further structure. Try asking them, “What does your brand new brain think about that?” Just remind them of their minds. Elementary school is still a good time to start more structured exercises. Mindfulness works its way comfortably into sports. The dicta “keep your eye on the ball” and even “be the ball” are well known examples with impressive results. The extreme sports that potentially involve injury actually demand mindfulness, or else. Since most of the emotional challenges that school-age children will encounter are social, it’s especially important to help them understand social dynamics, and what they can and can’t do about them, and thereby learn some emotional resilience by paying closer attention to their interactions and observing emotional reactions.
    The practice of mindfulness shouldn’t focus on its promise. This isn’t just beside the point: it’s not being present and therefore it’s counterproductive. We might still be looking at where we’re going or where we’ve just been, but we aren’t dwelling on how we hope things will be, or things that might have been. It’s still OK to be in time as a stream that has some length, breadth, and depth. The word concentrated doesn’t really mean the same thing as narrowly focused. It has the same roots as the word concentric, to be “with the center.” It says nothing about where the circumference has to be. The radius could be ginormous. If you frame your mind expansively enough, being here now could also mean being way over there as well. Pleasant or unpleasant, we want to see and accept things just as they are, without pro or con spin. It’s important to remember, though, that acceptance isn’t the same thing as approval: it’s just that the world is only at our feet when we know where we stand. When we accept things as they really are, we also have a better understanding of the real dynamics in play here, and thus of how effective change might be better brought about. Mindfulness, at least in some of its forms, can bring along a critical eye, in which case it’s called vigilance. This may assume that most information coming our way is noise, or even crap, and engage filters to be used at the time of attending, to filter for value, meaning, and relevance, to deny quarter to toxic trains of ideas, to monitor the entrance for Trojan horses and feet in the doorway. Although we don’t want too defensive a posture or attitude, an open mind still doesn’t have to accept stupid stuff, not even for a moment. While both Vipassana and Zazen may allow nonsense the freedom to come and go, they will also maintain enough distance that entrapment isn't a problem.
    Cognitive Behavioral Therapies, and cognitive psychology in general, use some form of mindfulness, although not all of them respectfully credit their origins to the ancient practices in Vedanta and Dhamma-Vinaya. There are other ancient traditions that will accomplish similar states, many of them either shamanic, or else given to the tribes by shamans for their rituals. Vision quests, lucid dreaming, chanting, all night dances to drumbeats around the fire: all of these concentrate the mind, in a variety of different ways.
    The idea that we can ever see ourselves objectively, or even subjectively down into root cognitive processes, is an illusion of course. Mindfulness can’t penetrate the true nature of phenomena to know the actual reality that underlies them. We still can’t know things as they truly are, or fully get around our thoughts, concepts, ideas, and metaphors, and the limitations that our limited sensory experiences impose upon these. It can get us closer, helping us to true our lives, to adjust how we live in accordance with a higher degree of wisdom. It allows us to put enough space between ourselves and the phenomenal to make more rational choices and avoid being victims and puppets of this and that. And it brings us a lot closer to understanding and accepting that all of our phenomena are going to be temporary, imperfect and inessential. The Buddha asks us to dive right in anyway, and start trying to be truthful about ourselves. It’s really the only way to get the kind of first-hand experience that will connect those abstract buttons and commands in our conscious minds to the more messy affairs of our neural circuits and glands.

Heedful Diligence or Appamada
    “Vayadhamma samkhara, appamadena sampadetha: Compound beings are ephemeral, strive with heedful-diligence.” Buddha’s last words
    The word appamada embraces the meanings of heedfulness and diligence. It’s a particular form of mindfulness combining consciousness, conscience, and conscientiousness. There is a purposefulness about this as well. Personal purpose can be synonymous with vocation or calling, or the development of genius, gifts, and talents. Higher purpose is something different, and might be thought of as living for something that’s greater than ourselves, dedicating or consecrating our lives to something more than we are, something that will outlive us. Both of these are more meta than ordinary living, but the two should not be confused. Personal purpose can and should be personally rewarding. Happiness isn’t wish fulfillment, it’s a harvest that requires tilling, planting, and care, and eudaemonia is even more work than happiness. The Japanese idea of ikigai, a reason for being, is said to combine what you’re good at, what you love, what the world needs, and what you can be paid for. When we find it, our passion, mission, vocation, and profession can be one and the same. Higher purpose, on the other hand, steps off its track when it expects to be rewarded, or made happy, by the efforts we expend. It’s not about us. One of the expressions of the alchemical term magnum opus asserts that the Great Work is ultimately the transformation of mankind, perhaps along the Nietzschean lines of “man is something to be surpassed,” or the more modern transhumanist h+. Where we are truly living for or serving something greater than ourselves, then our happiness is pretty much beside the point, and we have Zhuangzi’s “perfect sincerity offers no guarantee.” We are Loren Eiseley’s Star Throwers, maybe accomplishing only a small success, but accomplishing nonetheless, and still OK with that. It’s not about us: it’s something better than that. Emotions, then, are but grist for the mill, and can be put to work, instead of being taken so personally that they cloud all our thinking. Obviously, higher purpose can get really twisted around when we allow it to be provided for us, especially among true believers and soldiers. It bears close examination, from above, on metacognitive levels, looking down. Look what those foolish people put in their Kool-Aid. Why did all those dead soldiers not charge the hill with a better plan, like sneaky tactics, or even diplomacy?
    The thought of lifelong learning doesn’t seem to be very attractive to most people. Lacking help from motives like personal calling or higher purpose, the human mind is going to resist education as a lifelong project. Cognition is energetically and attentionally expensive, and more so for those who can’t see it as rewarding. It seems most are willing to settle for simple answers and settle into a routine. At that point, their most strenuous cognitive efforts are often expended in defending the errors they’ve settled upon. Is this a result of learning not having been made a rich and exciting experience all by itself? I would put some of the blame there, in inferior education. Or is it a feebleness of character? The ambitious among us have learned, sometimes painfully, that you can only become truly good at something if you know you aren’t perfect, and know that you’ll need thousands of hours of study or practice. This is so much work that no amount of conventional, external rewards will be enough. The rewards need to be intrinsic to help defray these attentional costs.
    Constructive discontent (a term borrowed from Irene Becker) ignores the Buddhist issue of dissatisfaction stemming from desire. Those who have lives to be lived, in ways that they want to live them, are willing to undergo some goads into action. It’s a can-do, get-er-done kind of thing. You expect stress, costs, and exhaustion. You ask for disturbing emotions to tap for motivation. We do have to skip the new age bullshit that tells us we’re already perfect just as we are, and we’ll need to develop an attitude that builds in some humility, instead of wrapping ourselves up in our unearned self-esteem. We need to use good judgement, instead of not being judgmental. This means a willingness to notice when we’ve made errors, and that makes it easier for us to unlearn or relearn as needed. Not knowing everything, or even not knowing much, isn’t being dumb. Dumb is not wanting to keep learning more. Smart is not knowing everything already, and still being full of interesting questions and investigative skills. Answers are just periods, then the fun’s over.
    Many decades ago, some psychics came to our town, four followers and a leader, who held a free introductory psychic workshop, sponsored by a small group of locals who studied the Silva Method (of Mind Control, distinct from the Sylvan Learning Center). I went along. We held other people’s car keys and hair combs up to our foreheads and got some visions to share. My vision was five kids in a big valley, but instead of mountains, there was this giant wrap-around billboard that said Telluride. They dismissed my vision as not psychic. But later on, the leader was talking about going deep inside and contacting “the Learner Within.” When I heard that, I let out a good, audible sound of approval and applauded too. He seemed confused, so I repeated what I’d heard and praised the originality of the phrasing. Much annoyed, he said “Well, I meant to say ‘the Teacher Within,’” and just went on rambling without a clue to the wisdom in his Freudian slippage. This would not be my family. I’ve always preferred the Learner Within. That would always teach me lots more.
    Learning means growing up and changing. It has to adapt, and that means some unlearning from time to time, as even truth itself might need changing. Max Planck observed a strong resistance to adaptive thinking in scientists: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This has been paraphrased into “Science advances one funeral at a time.” We can do better, but we have to learn how, going against a natural inclination to rest on our laurels.
    Abraham Maslow asserted that self-actualization was a real drive, one that came along with being a human being. More pressing needs would occupy us first, and often entangle us with lower-order distractions, such that many of us might never even sense its presence. A thwarting of more basic needs would lead to neurotic behavior that kept us tangled down in the lower levels of our more deficient being. Maslow asserted that self-actualizing people exhibit a coherent personality syndrome and represent optimal psychological health and functioning, comfortable both alone and in relationships. They’re able to distinguish the fraudulent from the genuine, and are loosely bound at best by social convention. They’re ‘problem centered,’ meaning that they treat life’s difficulties as problems outside of themselves that demand solutions. The new age platitude that claims you can’t change the world, only yourself and your attitude, is a non-starter, and perhaps a sign of psychological damage. The struggle to develop into a self-actualizing being claims enough of us that those of us who do find our way into purposeful self-actualization are non-normative, exceptional in a statistical sense, such that we aren’t really studied as a class because we’re almost anecdotal by definition.
    Some of our psychologists have recognized a need for a separate branch of psychology to deal with this phenomenon. The discipline for this was named Positive Psychology in the late 1990s by its founders, Martin Seligman and Mihaly Csikszentmihalyi. Unfortunately, despite the best intentions of these founders, a significant number of its enthusiasts are in danger of missing the point by mistaking the meaning of the word positive. A lot of the preliminary research seems to be using self-reported happiness as the first measure of a person’s psychological well-being, or more specifically, conflating positive with happy and negative with being critical. The new age is creeping onto that lovely new lawn like crabgrass. There is much more to what Abraham Maslow called the “farther reaches of human nature” than our self-satisfied and narcissistic emotional states, even though this misunderstanding does sit quite well with modern culture. As Nietzsche put it: “My suffering and my fellow-suffering: what matter about them!  Do I then strive after my happiness?  I strive after my work!” (TSZ #80). We have better and more important things to do than dwell on ourselves. If our happiness wants to come along, that’s cool. But it rides in the back and does none of the driving.
    A most important motivational driver in our self-actualization is Mr. Death. Ars longa, vita brevis. In fact, it’s frequently the premature death of someone close to us that kicks us into a new sense of urgency, or a new commitment to living life more fully. Carlos Castaneda had lots of wonderful things to say on the subject of us sassy, immortal beings with the sense of having time. In his Journey to Ixtlan he wrote that “death is the only wise advisor that we have. Whenever you feel, as you always do, that everything is going wrong and you’re about to be annihilated, turn to your death and ask if that is so. Your death will tell you that you’re wrong; that nothing really matters outside its touch. Your death will tell you, ‘I haven’t touched you yet.’” And in his Tales of Power, “When you feel and act like an immortal being that has all the time in the world, you are not impeccable; at those times you should turn, look around, and then you will realize that your feeling of having time is an idiocy.” For those who really understand our finitude, time is truly precious, and this is the key to “striving with heedful-diligence.” As Buddha said in the Dhammapada, “The world does not know that we must all come to an end here, but those who know it, their quarrels cease at once” (1.6, Max Müller). There really is no excuse for boredom once we really understand the value of the time we have, the preciousness of life. Susan Ertz offered “Millions long for immortality who do not know what to do with themselves on a rainy afternoon.” Related to how little time we have here, one of the more useful lessons is that we just don’t have time to learn it all first hand. We have the ability to “learn in other heads,” as the Spanish say, and we need to use that if we want to optimize our time here. That often means seeking out people who know more than we do and listening to them.
    Whether we have a personal or a higher purpose, it will still be a valuable thing to assess what we’ve done here, on this world, in this life, and in the midst of all this life. Have we taken more than we’ve given? Is the world a better place for our having been here, or have we left it diminished? Have we paid our rent? Gratitude is a good thing, maybe even one of the best things. Reverence, too. But we aren’t these things, and they aren’t the measure of our value or worth. For that, we look to our deeds. Regardless of what the poets and philosophers say, human is as human does. And if we’re going to learn from our study of ignorance here, we might make an effort to review all of our accounts and impacts, not just those that the human parasite sees. Is life itself supported or diminished by our presence?
    Heedful diligence requires an appetite for information and experience, a hunger, that we seem to come fully equipped with as young children. It’s the job of the parent, the village, and the mentor to keep that flame alive, or to refuel and rekindle it when it starts to sputter. The fact that this goes away at such a young age in so many of us should have a lot more of us asking why. What can take something as insatiably hungry as a young human mind and make it not want to learn? What does it take to destroy that? Koyaanisqatsi is the Hopi word for it: life out of balance.

    The next two chapters outline some of the lessons and practices within the metacognitive domain, one for the benefit of children and one for the elders.



2.11 - Metacognitive Domain -
Thoughts and Practices for Kids

Childhood Adversity, Neo-Piagetian Developmental Stages,
By Domain: Sensorimotor, Accommodating, Situational, Emotional,
Personal, Social, Cultural, Formal Education, Kindergarten,
Secondary School, Introducing the Metacognitive Domain to Kids

“It is easier to build strong children than to repair broken men.”
Frederick Douglass

Childhood Adversity
    Minds start out with the givens, in the early domains, with sensorimotor experience and inherited cognitive processes. The older layers are laid down first. Like a bricolage, the mind is built up with age, out of whatever inputs, experience, lessons, or information is made available. Human culture prior to now has been tragically unaware of the importance of these early layers, to which all subsequent learning must adapt, and against which so much of that learning arrives as wrenching cognitive dissonance. An incompatibility between nature and early nurture can set up lifelong internal conflicts. Improved information is too easily defeated by cognitive bias and other entrenched defenses. It’s critical to watch and select what goes into the making of young minds. In the shallower culture, this is the critical period for logic-proofing young minds and indoctrinating the foundations of various religious and cultural beliefs. We ought instead to be cultivating the ability to question intelligently, and to adapt and relearn when errors are pointed out. Even though much lip service is given to teaching kids how to think more than what to think, far too few have even noticed the vital roles that emotion and social integration play in the way we organize thought. Then they wind up complaining that teaching critical thinking doesn’t work. It’s a lot to ask to be raised by parents and teachers who are educated in even a small fraction of the anticognitive processes presented here, and can point them out when they arise.
    We have looming questions and controversies over whatever claims a community or society may have on its future generations. To what extent do parents own their own children, or have rights to raise them in ways that may prove harmful? To what extent may community-authorized governments intervene in the development of children, and on what grounds? What are the obligations to assist or support them? What is the obligation to educate? It’s generally assumed that these questions become answerable wherever true childhood adversity is a threat. But we are only now beginning to understand what a threat adversity is, especially in the earlier years where the most foundational aspects of both cognition and affect are forming, and how important childhood development is to society as a whole. Full parental rights ensure that these years are nearly always in the hands of rank amateurs, who might be generally well-intentioned, but are almost always deluded, as humans are inclined to be. And the community is the entity that most inherits the consequences. Thankfully, culture allows us to pass down wisdom that we ourselves have learned too late, to those who are willing to listen.
    The most severe of the adversity issues could be charted around Maslow’s basic needs, the homeostatic and deficiency motivations: 1) Physiological (breathing, food, water, basic health care, circulation, shelter, temperature regulation, excretion, movement, and sleep). Here we have environmental toxins, life in war zones, natural disasters, poor nutrition, poverty, prenatal adversity, and refugee scenarios; 2) Safety or Security (physical, social, employment, resources, property, and health). Here are bullying, dangerous neighborhoods, homelessness, physical abuse, sexual abuse, unpredictable threats, and other forms of violence; 3) Belonging and Love (friendship, family, community, and intimacy). Here are authoritarian schooling, broken homes, separation from loved ones, emotional abuse, foster care abuses, intolerance, neglect, parental illness, parental mental illness, deaths in the family, rootlessness, substance abuse, and the witnessing of abuse; 4) Esteem (confidence, self-worth, basic education, sense of accomplishment, respect of and for others). Here are absence of behavioral models, arbitrary rules and boundaries, relentless or unwarranted criticism, inconsistent reinforcement of social advances, bigotry, persistent uncertainty, relational aggression, and street gangs. Children growing up in these adversive environments have compromised cognitive development that affects the actual structure of the brain and the formation and size of some of its parts, perhaps most notably the amygdala. It’s still unclear how much of this is genetic (in family traits and individual vulnerabilities) and how much epigenetic or environmental. There are some wide ranges in individual differences here that still need to be studied, perhaps for a genetic component to resilience, or perhaps it’s simply stronger character. But we do know that environmental factors are significant and best addressed by correcting these deficiencies.
    Paradoxically, in ways that really mess with the results of adversity studies, language exposure may be better in poorer neighborhoods, which are more often multilingual, and this contributes to cognitive development. Also, with the larger and extended families typical of poorer neighborhoods, greater age diversity may aid in cognitive development. Both of these add at least a little cognitive enrichment back into the mix.
    The above adversive conditions are matters of general agreement, but they may not cover the ground that a more functional society would need to cover. We do know that these alone are responsible for a great deal of young adult self-loathing, insecurity, anxiety, depression, attention deficits, defensiveness, impulsiveness, self-indulgence, narcissism, apathy, frustration, and criminal behavior. If our definition of adversity were expanded to include some of the lessons we’ve learned in this book, we would almost certainly generate some howls of protest from the bulk of society, because adversive conditions might then include entitled and consequence-free childhoods, extreme risk aversion, inadequate play opportunities, lack of age diversity in siblings and playmates, one-child households, lack of affection, lack of quality time with parents or their substitutes, and a stimulus-poor infancy. Helicopter parenting or child micromanagement alone has serious consequences for a child’s development of emotional and behavioral self-management. Despite the strong need for imaginative play, satisfying it in ways that encourage confusing what’s pretend with what’s real can do lasting damage to cognitive modeling ability. Most controversially, despite the need for moral and ethical structure, much of society’s religious and political indoctrination really ought to be regarded as child abuse, particularly when this involves dire threats made by imaginary beings and bogeymen. Child indoctrination needs to be distinguished from enculturation. It’s natural for parents to want to raise children who aren’t constantly disagreeing with them, but kids should always be free to explore. If we really took raising children as seriously as these early years warrant, parents would be trained in parenting, and perhaps even licensed. We could always offer a carrot instead of a stick, perhaps with additional social and even financial support, for parents who took that route. The long-term costs to society would be far less with such an investment. But then, who could we trust with the prerequisite curriculum for licensing? Something like a DMV? We think not.
    Maybe the biggest overall problem these days is that we don’t have enough time for children, or by the time we make time, we’ve spent our day’s worth of energy. Unless or until parents are able to simplify their lives in order to better prioritize family, or get better educated for the sake of higher paying jobs, there really isn’t much that can be done about this. Several nations are showing due diligence here, as with free public day care and preschools, paid parental leave, and more. Those that aren’t doing so believe they’re saving money, but these people have no vision of the future, nor much of a future either. Parents with time or money have the advantage of homeschool opportunities, supplemented by community extracurriculars, but this largely benefits only a privileged class and it widens the gap even further. Better support and cooperatives for homeschooling could help. Our only choices here involve a massive reorganization of societal priorities, by generations that have been as negligent with their children as they’ve been with the world they’ll inherit. A good grassroots start could be made in local, intentional community efforts to pool daycare, homeschool, educational, and extracurricular resources, in the hope that these seeds might take root and propagate.

Neo-Piagetian Developmental Stages
    While the several stages of child cognitive development aren’t really fixed, Piaget's system for classifying them offers some good general guidance for predicting what children are neurologically ready to learn about when, and can serve to make parents and teachers aware of milestones to watch for. The classically accepted stages, by now somewhat modified, are as follows: 1) Sensorimotor stage (Prenatal to Language). Infants gain knowledge of the world from the physical actions they perform within it. Children develop a permanent sense of self and objectify others. We want this period to be rich in sensation, with novelty, texture, surprise, and some puzzlement, but without being unpleasantly overstimulating. 2) Preoperational stage (Language to 7). Children don’t yet understand concrete logic and can’t mentally manipulate information. Play and pretending play big roles. Thinking in this stage is still egocentric. Centration, conservation, irreversibility, class inclusion, and transitive inference (using previous knowledge to determine the missing piece) are all characteristics of preoperational thought. Per Vygotsky, the play of young children is their leading activity, which he understood as the main source of preschoolers’ emotional, volitional, and cognitive development. Flavell asserts that children acquire the notion of mental representation of reality as distinct from reality itself between 3 and 4 (at about the same age they learn to lie). The appearance-reality paradigm, along with the false-belief task, is widely used as diagnostic of theory of mind development during early childhood. 3) Concrete Operational stage (7-11). Children can solve problems that apply to concrete events or objects. Two types of social thinking develop here: imaginary audience, involving attention-getting behavior, and personal fable, which entails a child’s sense of personal uniqueness and invincibility. Children are now more likely to solve problems in a trial-and-error fashion. Reasoning is still situated rather than generalized. 4) Formal Operational stage (11-15/20). This sees the logical use of symbols and abstract concepts. Children can make assumptions that have no necessary relation to reality. Problem solving skills are more generalized. Children become capable of hypothetical, counterfactual, and deductive reasoning. Throughout, we can use knowledge of these learning periods to provide children with more optimal developmental materials and experiences. This knowledge should also tell us something about the wisdom of pacing and patience, although this is not to say that we can’t plant seeds in earlier stages that won’t germinate until later. We can point out things at premature times, as mnemonic examples to point out again later, for future reference (remember when?).
    We might now add yet a fifth stage, from adolescence to the mid-twenties, during which the prefrontal cortex becomes more fully developed, along with the potential and tools for conscious agency. Theories get tested in real-world scenarios, and when the trial goes badly you now get tried as an adult. Mature executive function can’t be expected of children (even true of most adults). This is a developing set of skills, largely housed in the prefrontal cortex, and a function of good working memory. It takes an average of twenty-five years to reach near-peak competence. Even with degrees of childhood adversity (or its lack) being equal, individual differences vary widely and aren’t necessarily related to intelligence or potential. Strong executive function is marked by a number of PFC functions, including initiative, self-restraint, risk assessment, rules assessment, prioritizing, time management, decisiveness, focus, task shifting, persistence, deferred gratification, deferred relevance, and long-term planning. The ventromedial portion is more concerned with pressing affect from the limbic system, and the dorsolateral with imposition of whatever distances or degrees of abstraction are needed to keep that lower brain from wrecking its beautiful plans. While maturation of these functions can’t really be rushed, development can be aided in a number of ways with training that includes reliable routines, open communication, parsing or chunking tasks, positive and negative reinforcement, mindfulness exercises, baby steps, and metacognitive self-awareness (using your new brain).
    As we approach the leading edges of a child’s neurological readiness to learn new things in new ways, we can speed things along a bit with assistance or instruction. Lev Vygotsky proposed that, with help, a child can learn skills or aspects of a skill that go beyond his actual developmental or maturational level. This must be tailored to the child’s individual capabilities. It’s a sweet spot in growth where children can get assistance just before they no longer need it, but at a point where learning might be more optimal or enduring. Waiting until they are almost ready is a bit like leading them from behind. Vygotsky called this leading edge or sweet spot the child’s Zone of Proximal Development (ZPD). The ZPD may be signaled by new kinds of questions kids ask. While it may not always be wise to do this, there are cognitive processes that are easiest to learn long before they’re really needed, before they can be tagged with the sense of personal relevance or meaning that makes the child’s learning a more exciting thing to do. I call this deferred relevance.
    Education done right really comes down to keeping kids hungry to learn, keeping those flames of curiosity burning. In the preoperational stage, this is done primarily with play, pretend, and storytelling. But pretending doesn’t mean that they can’t know or be told they’re pretending. It isn’t necessary for them to believe in Santa Claus. Furthermore, it doesn’t cheat them out of the experience to know this is as make believe as serving tea to teddy bears. They will enjoy it just as much. Kids also need to learn to interrogate the world authentically, and to move at their own optimum pace. The preoperational is also the optimal time to learn a second language, or learn how to read, and perhaps even learn the most basic arithmetic. We can take better advantage of our extended childhood (aka neoteny or juvenilization) before the extensive pruning of our neural connections is complete. We want to show those brains that some things are going to be relevant later, and why they ought not toss or prune those little-used neural connections just yet.
    A familiarity with Maslow’s hierarchy of needs, together with a grasp of how deprivation and adversities might be sorted therein, forms a good first template for meeting children’s basic needs, and so enabling them to get on with the business of eating and digesting the world with their hungry brains. The first seven years should focus on plentiful play, unschooling, physical movement, outdoor learning, hands-on experience, experimentation, inquiry learning (like Socratic dialog), environmental awareness, puzzle solving, story time, mindfulness, group interaction, music, art, and life skills. This holds regardless of any giftedness. A precocious child can be encouraged to help peers and younger ones, developing some leadership and mentoring skills in the process. It’s more important that kids learn how to think than what to think, especially with culture evolving so quickly now and so much of the what changing right along with it. Imaginative play, or make believe, should be fully supported, except that it should always be referred to as such, making believe without the belief. The pretend Santa can still bring presents. Maria Montessori believed that children who are at liberty to choose and act freely within a properly prepared environment would act spontaneously for their optimal development. Other than basic senses of number, scale, proportion, change of frame or point of view, categorical sorting, and sensory metaphors, nothing formal of STEM really needs to be taught at this age, unless a child insists. Kids both inherit and find their own kind of scientific method. They should, however, learn to read and write by the end of this period, and second languages don’t hurt them either.
    To really nurture capability, we nurture appetite and drive. Without these, discipline accomplishes little. We don’t need to fully meet children’s needs for them, or spoon feed them answers to their questions. It never hurts them to do at least a little of the work, even a little more than half. Knowledge shouldn’t be transmitted in one direction, from the expert to the learner. An answer should be inferred or arrived at by the child wherever that’s possible. The learners need to reach out for it, question it, interpret it, and integrate that into the already-learned. Information need not be pushed onto the hungry. There need be no reward or bribe offered that’s unrelated to the learning. Just encourage their hunger and let them feed. We want motivation to be intrinsic, with enough to spare to do things for their own reward or pro bono. The reward is that life is better this way. Encourage and praise effective efforts, successful strategies, and real accomplishments, not just potential, trying, or simple efforts. Self-discipline isn’t a source of motive power: it’s energy spent on self-control that sucks motive power when only a little is there. We should watch for signs like self-sabotage, challenge avoidance, motivational paralysis, procrastination, obsessive distraction, boredom, and ingratitude: where these are, something important is missing, and you’re probably not going to supply it for them. It has to be discovered, but it needn’t be discovered alone. Counseling might be in order if you can’t talk about it, and remember that school counselors are free. Kids may simply need to renew their sense of what’s important, meaningful, relevant, or valuable.

By Domain: Sensorimotor (and Native)
    Play should absolutely dominate the preoperational stage of development. This includes the hard physical play that develops motor skills, strength, and kinesthetic awareness. It includes the social play that teaches us to interpret and anticipate the actions of others, try out other perspectives, and negotiate alternative outcomes. It includes the intrinsically pleasurable emotional play of taking risks, overcoming fears, learning confidence, seeking novelty, and pushing envelopes. And it includes the cognitive play of strategy, humor, and wittiness. There is nothing unproductive in letting it predominate even in school, particularly in the kindergarten years through age seven. And since they can keep this up all day (with breaks and naps), this would even let the school day more closely match the hours of adult workdays. This doesn’t mean that play can’t be designed to accomplish things, only that it should dominate the spirit. There is so much that it teaches, at an age when we’re best primed to learn. The wiser among us know that play should also play a more important role in life continuously, without taking a back seat, all the way through old age. There are lessons to be learned that may not be fun at the time. You have your owies, and broken bones, and maybe broken hearts. Kids should have enough playtime supervision to avoid the broken necks, major puncture wounds, third-degree burns, and kidnappings, but hovering, overprotectiveness, micromanaging, or helicopter parenting can lead to some lifelong damage and deficiencies as well.

Accommodating
    Kids need to pretend, but they can do this while distinguishing pretending from what’s real. It’s preparation for dealing cognitively with things that aren’t present. It’s OK if they know or get reminded that they’re pretending. Knowing that Santa isn’t real, knowing they’re only serving fake tea to the stuffed rabbit, doesn’t prevent or preclude the experience, and it doesn’t stop the flow of Xmas presents. We may want to instruct them not to break the Santa kayfabe for the other children, however. That leads to wrathful parents banging on your door. Or the Jesus one, although exposing that one might be more of a service to the other children. “Pretending that” and “believing that” are both propositional attitudes, but we should start learning the difference at a younger age than we have been, given what we’ve learned about delusion and how early the foundations for that are laid, and how soon the postulates and axioms start to harden. The ability to understand and recognize pretense is the right precursor to understanding false belief.
    An ability to stand corrected, without the usual embarrassment, shame, or humiliation, is often the greatest skill lacking in our affective and cognitive skill sets. Some of the wisest things we can ever say include: “I don’t know,” “I was wrong,” “I made a mistake,” “I’ve changed my mind,” “I’m sorry,” “please forgive me,” and “I stand corrected.” Life is a place of learning. We should be looking at our own mistakes as junior scientists with curiosity and wonder, not fear and embarrassment. One of the best things we can do for a kid is give him a good reason to want feedback, even give him an appetite for further input, critique, even criticism and correction. If lifelong learning is any kind of objective, standing corrected might be the single most important lesson a child (or adult) can learn. It can even be a point of pride or badge of honor to be able to admit to having made an error, to really be smart enough to allow ourselves to be corrected and make our minds better with that new information. This takes patience and practice, and consistent praise where due, for a while at least. It also helps to correct children privately instead of publicly, until they get used to it and comfortable in rolling with it. It’s uphill for everyone to admit error, but it’s a necessary step if we’re ever going to be real know-it-alls instead of fake ones. Cognitive humility is the only path to cognitive greatness. With just a little bit of practice, standing corrected isn’t nearly as painful as we fear. And we can learn to do it with some impressive dignity, and hopefully even a sense of humor. As the Zhouyi says, “Being true is as good as impressive.” It’s this that will allow us to keep growing and learning. There comes to be nearly as much value in error and failure as there is in success, just like in science. “The greatest teacher failure is,” says Master Yoda.
    Early education really ought to include optical and other sensory illusions, as well as magic tricks. Games like Telephone or Chinese whispers can be a fun, early part of a teaching program, and make it seem a lot less shameful to have been fooled or caught in an error. In showing children these illusions and tricks, we lay some neurological groundwork for later explanations, as though priming their minds. We should also encourage them to learn to do magic tricks, to observe the process from both sides. And we can refer them back to this early education later in life, while we’re showing them how politics, religion, and advertising work, exposing the tricks of persuasive influence. We learn about inattentional blindness, unsighted vision, and the fallibility of imagination. We learn the basis for Arthur C. Clarke’s Third Law, that “Any sufficiently advanced technology is indistinguishable from magic.” Saying “I’ve been fooled” is not the same thing as “I was stupid.” It’s a stimulus to the kind of humility that allows us to keep learning. There may even be value in taking criticism while knowing we’re in the right: regardless of the spirit or truth of the criticism, it contains data on a point of view that’s also information to learn from, and particularly where it indicates someone’s resistance to a new idea that might have an alternate workaround. This will apply to our behavioral as well as cognitive errors, where the mea culpa or apology stands in for confession of error. In many recovery programs, the first of the Twelve Steps is, in a sense, an admission of error, or at least of denial, even if it contains flawed assumptions. It opens up the understanding that new and better information is needed, and this allows the redemption, correction, compensation, or restitution that’s needed to move on. We need to attach a high value to the revision and improvement of thought, to accept correction with grace and dignity instead of defensiveness. Youth is the time to introduce the idea that learning should go on for a lifetime, and that most grownups only pretend to be done.

Situational
    Problems and puzzles can be the same thing, and we can use these words interchangeably. A problem isn’t something to stop us or whine about. It’s something to challenge us, and solving problems is fun like solving puzzles. A mathematician who runs from his problems will stink as a mathematician. Puzzling things out teaches skills like elaboration, extrapolation, inference, and divergent or lateral thinking. It’s a great life metaphor for collecting all the facts as pieces of the puzzle. There is also a hedonic aspect to problem solving, a pleasure that overcomes intimidation and fear of failure, a felt appreciation of our own mental agility or nimbleness of mind that we can take into other areas of life. We need to be careful to calibrate these, to not get kids in too far over their heads and trade the enthusiasm for frustration. Solutions to the problems we pose should usually require no more than the knowledge or set of skills already acquired, often in new combinations, but sometimes a puzzle can be used as an incentive to acquire just one more missing skill. One example of developing this into a program is Myrna B. Shure’s ICPS, or “I Can Problem Solve,” for preschool ages and up.
    Trial increments of personal autonomy and authority should be delegated, with stipulations that these may be forfeited for neglect or misconduct. We perform better when people have higher standards and expectations for us. Clearly, these should never be unattainable, and praise is more important than censure in the long term. The social movement to freely distribute excessive amounts of unearned self-esteem has been a disaster. Praise should be earned and real. It will also help kids if we phrase positive commands as choices, and somewhat more benignly than “do this or else.” Things left on the floor tell me you want me to give them to the thrift store. What’s your choice here? The early years are not too early to learn some rudimentary self-management and mindfulness, and this is better done in a decision-rich environment than one simply loaded with commands and penalties for their disobedience. So how much help do we give? The same we’d give to any adult with similar skills? Certainly don’t do more than they do of what they can do for themselves. Meet them no more than halfway on that. We might simply step in alongside them as coach or advisor, and just let them know that we’re there if we’re really needed. Then it’s a bonding opportunity as well.
    We should distinguish between risk taking and impulsive action, between those behaviors that best challenge the developing neocortex and those that simply give free rein to emotional impulses. Different parts of the brain are at work here. An excess of impulsiveness is more apt to spell trouble later on and even impair further ability to learn from our risk taking. This may even warrant medical or psychiatric intervention. Yet we don’t want to interfere with a child’s exploration of their environment any more than we have to. There is a natural science to what they are doing, and this process closely parallels the more explicit scientific method. They are trying out hypotheses and generalizing rules. Our only job as the supervisors should be to modify consequences to remove lethal, injurious, and traumatic elements. The fever with which adolescent risk is pursued is only in small part related to raging hormones. This is perhaps much more a function of a drive to learn first-hand what life’s real boundaries are, to learn some things that aren’t secondhand. Self-control is a little further down on the syllabus. For now, it’s moodiness, impulsiveness, risk taking, rebellion, social vulnerability, and receptiveness to peer pressure. You may not be the coach they need or want. That’s a better job for role models who have more recently emerged from this. As for the younger ones, to the extent we can, we might let them make their own plans for their days, and ask them to keep us posted with progress reports. Later, we can review with them how their goals have been met.
    Within limits, supervised one-trial learning might be the best way to learn what words of warning mean. If the most a child will get from touching the stove is a minor burn, just say “hot” right as he’s about to touch it. The same with the proverbial finger in the fan. Electric shock wants a somewhat higher level of precaution. Twenty amps of household current is not a place to get started, but maybe a non-lethal, electrified ranching fence will do. Stranger danger is another challenging one, because we also don’t want kids growing up afraid of everyone new, or more than reasonably suspicious of anyone they’ve never met. Plus, a free-range child has a healthier, more rounded childhood, and this is getting slowly recognized as local governments start to get more permissive again about children returning from school on their own, or making their own way to the pond with the polliwogs and garter snakes.

Emotional
    Even gifted kids can fear both failure and success. Children can be pressed into neurotic adaptations from a couple of directions. A fear of failure can result from excessive pressure to perform, from parents, coaches, or teachers. Sometimes the only release from this pressure is a form of passive-aggressive self-sabotage: making only half-hearted attempts, such that a lack of effort is to blame instead of a lack of ability. Even more common than this is the pressure to dumb down in an egalitarian society. Despite lip-service to the contrary, there’s a lot of social and political resistance to excellence and accelerated development, particularly from our young, same-age peers. It breaks the illusion that all men are created equal. Certainly all persons ought to be constitutionally entitled to equal rights and opportunities, but ideas of equality should begin and end there, and not with insistence on equal outcomes. It’s really important that children learn just how important it is to be the best of themselves, and how unimportant it is in the long run to seem either smarter or dumber than they are. Life offers enough anxiety already. Besides, the best driving forces come from our appetites, not from pressure.
    Children need to feel close enough to others to seek comfort, and to ask any questions that might arise. They will learn best from those they’re close to, those they love or respect, if not parents, then persons in loco parentis. And maybe it doesn’t really hurt to have someone around who can tell them that their parents are wrong, without freaking the parents out. You know the guy, the Marxist, homosexual, beatnik uncle: he’s still your mom’s brother. Children also need individual, one-on-one quality time for bonding, maybe best in a kid-centered activity, time to be given undivided, personal attention, without siblings or others present.
    Parents who give in to even a single tantrum or public meltdown really do deserve everything that’s coming their way after that. But the children don’t. Neither do the theatergoers, shoppers, or the other passengers. Giving in is a favor to nobody, an incredibly stupid thing to do. Neither should you bargain your way out of these. Just leave the room with them. Strategies that work even a little will be repeated, and the extinction of the behavior will require multiple trials. Withdrawal, sulking, and passive aggression all have similar solutions - you simply refuse to reward them, and then they will serve as their own punishments. You can ask whether exhaustion or frustration is driving the misbehavior, and deescalate or disengage from that. Rage, abnormal violence, use of weapons, and retribution demand a different treatment from simple inattention, though, especially some version of the incarceration it would incur later in life, or at least the removal of some cherished privileges. But it’s a bad idea to develop zero-tolerance policies that punish any warranted self-defense.
    The Piagetian developmental framework has a predominantly cognitive focus, and it too often ignores the influences and impacts of emotional and social factors. A personal, individuated, ongoing, and evolving mindfulness practice will contribute as much towards a mentally healthy adulthood as any education. Getting “control” of our emotions isn’t the point of this, but we will have few self-managerial options at all if we aren’t paying attention to emotional forces as they arise and begin to move us around. We can use our mindfulness to get to know our emotions, to recognize specific feelings as they arise, learn what kind of stimuli trigger them, learn whether or not they are necessary or wanted, learn to name and communicate about them, learn what alternative feelings we might choose instead, and learn how to saddle them up and ride them to cool places. We start to understand the causes of feelings this way, and begin to see that behavioral reactions aren’t always required. We can do a lot of that by questioning and taking charge of what we consider valuable, meaningful, or relevant, which helps determine how we respond to things reactively, even before our mind gets a conscious peek at what’s happening. We help children to learn this by having good one-on-one conversations about their feelings and about specific feelings in general.
    It’s true that kids don’t yet have the developed set of top-down controls for emotional self-management, but then neither do most adults. It’s still a matter of degree, and once past the first few years, there really isn’t a lower limit for getting started. The self-restraint, self-motivation, and self-discipline that come from this early training are useful throughout childhood. We can learn about feedback loops and escalation, and how to interrupt those. We can learn early on that cognitive dissonance can be amusing or entertaining instead of threatening or uncomfortable. We can learn to start looking immediately for workarounds when we’re frustrated, or to double down if we just need more effort. We can learn that we don’t need to be happy all the time, and that we don’t need to make up reasons or excuses when we’re not. We can learn that holding anger or a grudge only hurts the one holding it, so it’s a really stupid way to punish someone. We can learn to count to ten, while waiting for better information to act upon. With mindfulness, we can just let moods, feelings, and emotions come and go, so we don’t have to let the unpleasant ones make us do stupid things. Finally, if we can learn to see emotions just for what they are, we can learn to have the good ones just by thinking about them. We can feel reverence or gratitude or forgiveness without making up all those lame excuses for them that come from unrelated and limiting belief systems.

Personal
    Talking to another human being, instead of talking down to a child, has remarkable effects. Following infancy, during which motherese or baby talk appears to have salubrious effects on mental development, we are better off speaking with children as though they were real people: acknowledged, appreciated, significant, respected, helpful, and valued people whose opinions matter. We should try to do this without condescension, even when speaking firmly with authority and establishing firm boundaries. You simply get face to face, and speak from the heart to the point, as if this were a friend.
    We all like to have a good sense of who we are, one that allows us to predict our pathways through life’s challenging situations. What far too few of us realize, kids and dults alike, is that the strongest identities aren’t fixed and well-established, or even that clearly defined. That only makes for rigidity, brittleness, defensiveness, or confinement, and this from our false assumptions that change must mean insecurity. The fittest identities are fluid, dynamic, and adaptable. Kids will build identities supported with self-esteem and confidence, so it’s important to care about what we praise in them. If we praise them primarily for their potential, there’s a danger that all they will be is potential. Praising them for their efforts instead might help them to become more diligent. The accomplishments that we praise don’t really need to be for competitive success, but they should be real accomplishments and not just participation awards. And if praise comes from both competitive success and accomplishment, competition won’t be so overvalued. Inequality is going to be a given, and even the gifted will have the more gifted still, just to keep them humble. Achievements can also be cognitive inferences and insights, or healthy emotional self-management, or a persistent sense of purpose. But nobody is automatically special, no worth is unconditional, no matter what the new age people say: that only makes entitled narcissists. You don’t praise little Johnny for being close when he says 2+2=5. The good life is work: it’s heedfulness, diligence, caring, being useful, being compassionate, or serving something noble. That’s what we should be rewarding the most. But if praise is to be given sparingly, it’s that much more important that it be given when and where it’s due, and not withheld as a goad to still higher achievements. It’s just a rotten thing to do to keep moving children’s goalposts for them.
    Identities that have names have been given those names by others. They can represent de-individuation more than an identity. This is your category, your box, your slot, your fungibility, this is you as an object. This is what your replacement will look like. Isn’t this the opposite of what we want? To some extent, no. Being too much of an individual will get us noticed, perhaps even respected, but only at a very local level. Then we get overwhelmed when we move beyond the parochial. There have to be words for what we are before we get presented to the larger world or culture. So perhaps the compromise is having labels for ourselves that represent the fluid, dynamic, and adaptable. We can be eclectic, interdisciplinary, correctable, polymaths, students for life. These are identities that remain dynamic, that can be asserted without locking us into a box or a cage. This requires a devaluation of sameness, normalcy, or conformity. And it might even mean steering clear of relationships that are shallow or inauthentic, and ultimately, settling for fewer friends, or being less popular. But then there’s that quality over quantity thing.
    There is no better instruction in morality and ethics than consistently and reliably facing the due consequences of our actions. Of course, sometimes the lesson is that society or culture has some truly boneheaded moral and ethical rules. When society’s behavioral rules need to be sourced in religious and other ideologies, the project has already failed. The teaching of behavioral values can still be done without ideology or dogma, although we might get away with using some science, as with the evolutionary support for altruism. Of course, if we’re going to use primatology, we’ll need to extract this from other innate proclivities for territoriality, xenophobia, gossiping, bullying, and cheating. The clearest place to start is probably the Confucian “What you don’t like done to yourself, don’t do to others.” While some may still argue that environmental conscientiousness and social justice are separate political and economic issues, they are rapidly becoming vital issues for the survival of both the biosphere and the species. It’s becoming increasingly OK to teach these as givens, and to counteract the teachings that undermine them. Paulo Freire’s “Critical Pedagogy” rejected the neutrality of knowledge: there is no such thing as a neutral education process. David Cooperrider and Suresh Srivastva, with their theory of Appreciative Inquiry, highlight the importance of bringing positive affect, applied positive psychology, sustainable development, social constructionism, and biomimicry into investigative inquiry and educational processes. This is, of course, the education that the far right would see replaced with either the Tanakh, the Bible, or the Quran.
    Having inconsistent consequences as lessons can be just as problematic as having none at all, since young children are quite capable of basic risk-benefit analysis and will include a frequent absence of consequences on the benefit side. The little gamblers learn to play the odds. We want them able to predict consequences with more clarity, and it’s hard to overestimate the value of consistency. Inconsistency is a huge part of the failure of the grownup justice systems. While we need not say “let the punishment fit the crime,” or invoke lex talionis, wherever other people are hurt, the emotional intensity of the consequences should be close to the same as the hurt. A removal of privileges can still hurt, and we want it to. When our feelings approximate any suffering we’ve made others feel, it helps with our empathy and theory of mind. Above all, however, the consequences need to be reliably and consistently faced.
    Between 3 and 4 years, lying starts to get a lot easier. Kids also learn to shift blame to others, and even to inanimate objects, a strategy often marked by the passive voice. This has to become more difficult as well. We also want a clear distinction between facing consequences for doing the things we’re lying about and the consequences incurred by lying about them. As said earlier, making a clearer distinction between pretend and real will help, without diminishing the need to pretend. Also as said earlier, being able to own our cognitive errors will help us to own our behavioral ones, and follow that with apologies, and any restitution that’s warranted. Above all, we have to remember that the best part of tough love is love. You’re not doing these consequences to them. You aren’t teaching them a lesson. Consequence is the teacher. You are allowing them a better first-hand education in what it is they’ve done. Consequences must be incurred entirely on their own account. Demanding that children take responsibility for their decisions puts them in the driver’s seat. To compromise responsibility, or to diffuse it, ultimately makes children no more than passengers.
    Boundaries and discipline are somewhat broader in scope than morals and ethics, but are still constraints we need to put on behavior for both safety and general social acceptance. The behavioral options open to us are infinite in their permutations, but half of infinity is still infinite. It’s seldom the end of the world when something isn’t permitted, although a child might feel that. As with grownup justice systems, we set boundaries, define transgressions, and articulate the consequences of transgression. Those boundaries that go unspoken are the biggest troublemakers. Children appreciate having them clarified beforehand a lot more than you would think. It never hurts to ask to hear them repeated to you, either, to be certain they have been heard. It helps with the cost-benefit analysis that precedes most questionable action. But future behavior really still depends most on the consistency or reliability of the stipulated consequences.
    Boundaries are also becoming more important with respect to how time is allocated. These boundaries add value to the time spent within them, and they provide useful experience in budgeting other valuable resources. A pressing example is in limits placed on electronic devices and social media. Addictive behavior here has become a real problem, despite the value of the behavior itself. The use of electronic media in these early years, and its persistence as habit in later years, merits special attention. The use of media as “a one-eyed babysitter” has long been criticized for good reason, as have unhealthy obsessions with video games and other entertainment analogs of sugar. Evolving recommendations for optimum and maximum exposure to electronic media should be tracked with some care, not because electronic media are harmful, but because other important worlds remain to be explored, like nature and social relationships. Time spent there should be proportionate to its overall importance in life. Some pointless entertainment might be permitted, but that should be much more strictly limited than educational programming.
    It will also help kids to know which proscriptions are non-negotiable and which can be pleaded out. Some boundaries that a parent or substitute might regard as tentatively set, but negotiable, allow us to ask “Can you convince me otherwise?” when a violation is proposed. Listen carefully to their reasons and get them clarified as needed. Begging and wheedling are not tactics of negotiation to be encouraged: the response to those should always be “asked and answered.” But more rational argument both encourages diplomatic and reasoning skills and signals the maturity adults want to see before granting some license. There’s another trick to use that just impresses the heck out of children: when you’re wrong, admit it, and then make whatever apologies or changes are necessary. And this doesn’t undermine your authority: it actually builds mutual respect. So does listening to them, as though they might have something important to say, and that will mean listening until they’re done with their explanations.
    In forming the rules, it’s important to maintain any distinctions between the expected and the extraordinary, between rights and privileges, between duties and favors, between needs and wants, and between real wants and implanted wants. These play big parts in helping kids appreciate and manage their resources and priorities. Two levels of allowance might be advised: an unconditional one, and another when all the chores are done. Beyond the expected chores can be jobs to be done for extra money or privileges. Let them do their own budgeting, with full knowledge of the work involved, and borrow only with terms of repayment. Gifts are gifts, however. Treatments of rights might follow a similar theme. But as to unconditionality, it may help when some rights can be forfeit. A child making a statement by refusing to eat a meal will eventually have hunger take over. You can let that one take care of itself, but not by providing snacks before the next meal.

Social
    Vertical or age diversity in the social environment will contribute a lot to well-rounded development. The movement in developed countries towards smaller families and only-child upbringings will need to be compensated for in some way to give us back the equivalent of older and younger siblings, preferably of both sexes (I suppose now it’s all sexes). The assembly-line, age-segregated processing of modern public education, the industrialized mass production of equal and fungible persons, doesn’t help one bit either. Ages tend to be segregated even at recess. The Montessori schools recognize this explicitly and work at providing age diversity, with younger kids a child can help to guide, and older kids a child can learn from. In the cities, the street gangs will only too happily step in and perform the initiatory functions on our behalf. Socially, we can be doing this with activities like community centers, scouting, tutoring, and babysitting. Age diversity is probably the best of many reasons to restore pockets of neighborhood, village, tribal-scale intentional community, or extended family within the larger culture. It takes a culture to raise a child, but the education is done at the village level. There are also intergenerational levels of age diversity that tend to be forgotten in modern culture. In the very old days, the elders taught the children, and the children helped the elders, while the middle generation provided. This is another reason why the American Indian schools were such a total disaster: removing the kids destroyed two-thirds of the social structure, not one-third.
    The need for a stronger sense of extra-familial social support and belonging becomes especially acute during the elementary and secondary school years. It’s difficult, if not impossible, to isolate this from the need to develop a personal sense of identity. Most of who we think we are is formed in these years in terms of where we fit in and in comparison with others. Sometimes it seems the personal and social domains don’t separate much at all until we get clear of this gauntlet. Philip Zimbardo summarizes the issue: “The power of that desire for acceptance will make some people do almost anything to be accepted, and go to even further extremes to avoid rejection by The Group. We are indeed social animals, and usually our social connections benefit us and help us to achieve important goals that we could not achieve alone. However, there are times when conformity to a group norm is counter-productive to the social good. It is imperative to determine when to follow the norm and when to reject it. Ultimately, we live within our own minds, in solitary splendor, and therefore we must be willing and ready to declare our independence regardless of the social rejection it may elicit. It is not easy, especially for young people with shaky self-images, or adults whose self-image is isomorphic with that of their job. Pressures on them to be a ‘team player,’ to sacrifice personal morality for the good of the team are nearly irresistible. What is required is that we step back, get outside opinions, and find new groups that will support our independence and promote our values. There will always be another, different, better group for us.”
    The tension between our individual needs and our need to belong doesn’t go away until we learn to sever or distinguish the two all by ourselves. Any objectivity that kids can find in observing others can transfer to themselves in like situations, but it almost takes a third party to remind them. If they can see a maladaptive reaction to social pressure in others, such as another kid being peer-pressured to do some boneheaded thing, it might help them if they can be condescending and judgmental towards that, so that when their own turn comes, they might fight harder to retain their dignity. What is it to belong? What do you get and what do you give? We should talk to them about the why of conformity and help them distinguish the benefits from the harm. Perspective taking and reframing will help in these discussions. Experience with social and cultural diversity makes it clear that there are options. Still, when the pressures are applied, it will be an emotional experience, more than a rational one, that needs to be managed, and these demand a sense of what’s truly valuable. Sometimes cooperating with these forces is enlightened self-interest and sometimes it’s the end of an interesting future.
    The middle way in social conflict is usually a great place to take a stand. It’s here that we have the greatest opportunities for negotiation, diplomacy, arbitration, mediation and successful outcomes. The judge or fair witness usually commands the most respect wherever there are questions. This is a good thing to be known for, and you can still advocate for extreme things like justice and peace. The adversaries and combatants are the ones who tend to cancel each other out. Most of the best leading gets done from this central position, not by the one marshaling one of the opposing armies. This position, however, doesn’t mean that our own ideas or vision must be a compromise, or stuck in the middle. We simply hold our own positions while having good manners and being too clever to lose.
    The peer pressure problem demands resolution in anyone who wants a life of their own. When we’re kids, we don’t always have much of a choice in who it is that surrounds us. This is especially problematic in public schools, but that’s also practice for the real world. Standing up for ourselves, holding true, is no doubt more effective when we do it in ways that earn us the respect of our peers. This should be the primary goal, and equal care should be taken to avoid “loser” labels, humiliation, shame, and embarrassment. Holding true is what Gandhi called Satyagraha. Sometimes it involved being disobedient. It may be necessary to rewrite the script for onlookers and critics, to show that this disobedience or abnormality is done on a higher order and for a higher purpose, that your refusal to play this game represents a higher order of dignity. It might be necessary to call others out on their rumors, gossip, secrets, and other forms of nitpicking silliness. Who knows? You might be doing them all a great service by breaking the kayfabe. If not, you may be laughed at, bullied, or teased for a while, but you may become known as someone to come to with questions, or someone capable of worthwhile second opinions. It’s this or dumb it all down and be your own “Handicapper General.” It’s OK to be different. It’s really OK if someone doesn’t like or understand you. Let a few of them go. It’s OK to disagree. And it’s often OK to disobey orders. This is only a drama, and you will have a real life to live out elsewhere.
    Relational aggression names the primary offensive tactic in peer pressure, and this isn’t something that kids, especially young ones, can work through on their own. Supportive coaching is needed. This is the use of friendship and status to manipulate other people, the withholding of acceptance, belonging, or membership, and the use of insults, unkind words, rudeness, meanness, bullying, humiliation, ostracism, and isolation, usually in jockeying for social position. It’s difficult to successfully assert any sort of interesting identity without developing some resistance to these. Parents can do what they can to downplay the importance of being accepted by everybody, the silliness of trying to please them all. A sense of humor about it all can go a long way. To just laugh and shake your head can be a great reply. Becoming really good at something can help a lot as well, since a deserved sense of confidence is a great thing to build resilience on. There may be no better preparation in social resilience than several years of training in the martial arts, and particularly for keeping the peace and avoiding confrontations. It won’t always be possible to skirt relational aggression in a way that demonstrates superior leadership ability, but the kid who can pull this off is showing exemplary and impressive resilience that can set an example for others.
    We’ll do a lot to avoid being laughed at. We’ll walk a more careful line to stay clear of that. Simple embarrassment, even just for dumb stuff, without evildoing or anything, is a powerful feeling. But to get a deserved humbling isn’t such a bad thing, even if it makes you groan about the memory decades later. Castaneda’s Don Juan spoke of the virtues of losing self-importance. We don’t really need to be all that special. That just needs defending. But authenticity ought to be preferred over both humility and modesty, and most certainly so if those mean inauthentic self-effacement. It’s simply this: “No man is a prophet in his own village.” You have to have somewhere you can relax and be yourself, and make an occasional error.

Cultural
“Teaching is the essential profession, the one that makes all other professions possible.” David Haselkorn
    We have an evolved pedagogy, natural ways of raising our young, that’s overlaid by some broad variations across our cultures. Educating our young functions as a prosthesis for natural intelligence, to correct and supplement our innate heuristics, and bring us up to speed on how human culture has evolved, with sufficient cultural literacy to get through life. We’ve evolved with extended childhoods and much time to absorb what we can of human culture. Teaching skills that come easily to parents have coevolved with this, giving us a natural methodology, and it’s extendable to teaching, mentoring, counseling, and coaching. It’s natural for us to instruct our children, and we seem to know to do this in stages. The details and content are where we get exposed as rank amateurs, especially with our problematic cultural biases. We have to tease out a wide range of cultural differences to get at the universals. Children learn in many ways, but mimicry and emulation top the list. They especially look to respected or beloved elders, or those otherwise assumed to be representative of the culture. This makes role modeling crucial and gives grownups a reason to check their hypocrisies (where they’re able). The best teaching, as with the best leading, is done by example.
    Most of us have some experience with the question “why?” and its cousins. Children aren’t shy about communicating their ignorance, and it’s a little sad that we tend to lose this as dults. We might assert here, over some objections from the squeamish, that whenever children are able to ask a question, they are ready for and deserve an honest answer. Even “the talk,” even at six. Do we ask or try to answer the big questions? Sure. Children can surprise adults with early worries about big-picture, life-and-death concepts. In some cases, these questions can be the first sign of high-ability needs. And given the lip service we pay to these children being our future leaders, why not get them started thinking about social justice, the environment, and war, instead of filling their heads with grownup delusions and lies? Finally, we ought to stop being shy about saying “I don’t know,” maybe followed by “let’s you and me try to figure that out.”
    Inquiry-based teaching and learning, our work with the question “why?” and its cousins, is potentially the most productive for cultivating critical thinking and cognitive skills. We will learn best by doing, and investigative inquiry is a form of doing-in-theory. Kids like to scrutinize and infer. But if all we do is respond with answers, or class particulars into generalities, we aren’t doing children much of a favor. Remember our Heisenberg: “We have to remember that what we observe is not nature itself, but nature exposed to our methods of questioning.” We want to learn to walk our children through the processes of good investigation and inference. We don’t give them the answer: we help them to arrive at one or more answers. Of course, we don’t want to get out too far ahead of our Piaget and Vygotsky insights, and we still need to be sensitive to frustration and fatigue, and not press the method to unpleasantness. It should be fun for kids to watch themselves think, and to notice how thinking works together with their feelings. Here are just a few examples of questions to ask of tentative answers: Is there another or better way to ask that question? What would be the best outcome? Why did you answer that way? What would almost work? Who is the best person to ask? If this was a lie, who would benefit? What’s another way to see that? Where do we find out if this is true? Where is this not true? When is this no longer true? Can this idea be made better? Why would people want to believe this? Why is this important? Where is it not important? What would people have to ignore to believe this? Why has this idea not died? Why is it important to think this? How is this like something else? Why did that make you angry? Why did that make you happy? Let’s think some more about that. Let’s look that up together. What book or website do we need to go to? What’s the best question to ask?
    The word “education” means “to draw out,” not “stuff into.” We’re looking to draw out ideas, with their underlying assumptions. Some questions about how things work or behave might be presented in such a way that children first predict the content or outcome and then compare that with what reality did. Even in our everyday encounters, we don’t need to ask our kids too-easy questions and settle for pointless answers or mumbles. Don’t make small talk with little people or ask overly general questions. Don’t ask “how was your day?” Ask specific, meaningful questions that get them to think. At least: “What was the best thing that happened today?” But better: “What was the hardest thing?” “Did you see something that you still don’t understand?” “What was the coolest thing you saw somebody do?” “Did anything you saw make you nervous?” “Do you think anybody misunderstood you?” “Did you see anyone who needed help?” “What do you know now that you didn’t know last week?” “What would you like to learn in school that they aren’t teaching?” “What’s something that you know that I don’t know?”
    Linguistic explanation isn’t the only teaching tool we have. We can make demonstrations, opportunistically seize on teachable moments, help narrow options to a manageable number of choices, coach or suggest improvements, initiate the young into rites and rituals, share teaching stories and memorable cultural events, grant social approval for deeds well done, and censure for the maladaptive behavior. There is a constant flux of new ideas in educational theory, and major fads sweep through every few years. It pays to stay abreast, and to maintain a critical eye. Currently, some promising new programs are being developed for preschool through elementary that can also be practiced at home, including SEL (Social and Emotional Learning). Others can be found in the final link of this work. Comprehensive systems are coming on line to develop and integrate emotional processes, social and interpersonal skills, and cognitive regulation. In one parsing of education’s several dimensions, Michelle Ann Kline offers “a taxonomy of teaching adaptations” (behaviors that evolved to facilitate teaching and learning in others), categorizing five modes of teaching: 1) Social tolerance or observation, or allowing intrusive participation in activities; 2) Opportunity provisioning, setting up exercises or practice opportunities with teacher help available; 3) Social or local enhancement, calling or directing attention to learning opportunities; 4) Evaluative feedback, using the various types of reinforcement in response to actual or hypothetical actions; and 5) Direct active teaching, the classical subject already within a syllabus.

Formal Education
“If you have to tell a child something a thousand times, perhaps it is not the child who is the slow learner.” Walter Barbe
    The very young learn more quickly when learning isn’t as structured, but the price is that there will be much to unlearn later, and this gives impetus to formal education when societies grow more complex than tribes and villages. It’s largely discouraging to look to public education for examples of what works, although more inspiring systems seem to be emerging now, especially in Europe. American schools are in a nosedive with ridiculous administrative and building costs, and the mandatory fads, when the real solution is twice the teachers, better trained, paid twice as much where merited, with broader curriculums, more material supplies, better hours, nutritious meals, bigger budgets, more recess time, and plenty of field trips. Meanwhile, much of Asia is now overly obsessed with drilling and overachievement, at the expense of childhood, even though test scores run high. More humane systems are moving slowly away from the assembly-line model, but still churning out equal and interchangeable workers, to be managed by a somewhat differently educated elite group of socioeconomic overlords.
    The core curriculum is central to the issue of formal education. What level of cultural literacy is necessary to function in the society? Everything beyond this ought to be along separate and elective tracks, like economics, academia, science, home economics, life skills, or vo-tech, which will each have their own cultural literacy requirements. To make issues even more confusing, the complexity of culture is changing at an accelerating pace, with the content of core literacy growing exponentially. This is shifting the educational demand from semantic knowledge to process knowledge, and cognitive demands from knowing what to think to knowing how to think. The best cognitive toolkit now metaphorically approximates a Swiss Army knife, adaptive intelligence and improvisation, but it still requires basics that aren’t in motion, like the multiplication tables and reading skills. The worth of rote knowledge is only losing proportionate ground, though, and subjects like history and its lessons, or the Constitution and its terms, are casualties. To lack a sense of geography and history is to lack a sense of scale and perspective, and leaves us largely ignorant, and stuck in small, local boxes. It’s probably best here to not dwell on American schools, which teach primarily for attainment of benchmark scores in reading, English, and math. The results are just an embarrassment. Budget constraints lead to the amputation of entire fields not deemed relevant to performance in the work place, even when it’s known that exposure to music and art improves performance in science and technology. The religious protests against the study of evolution also harm the study and understanding of cultural evolution.
    Given the flood of low quality information (low signal-to-noise ratio) that’s now available in both the culture at large and in the schools within it, critical thinking for the purpose of vetting this information is becoming much more important. Instead of encouraging belief, we should be undermining confidence in the process of believing, and maintaining more open cognitive systems that will better resist entropy by encouraging new inputs. Scientists, academics, and journalists are showing an increasing ignorance of the most basic logical principles, and almost nobody is showing an understanding of the role of emotion in cultural persuasion (outside of those designing the propaganda and advertising). More intercultural literacy ought to be restored to the core, with alternate points of view, such as Howard Zinn’s, introducing indigenous and minority perspectives. Otherwise, we’re making assumptions about globalization that bear little resemblance to facts, and that take us ever further from world health and peace. We need moral studies with ethical issues that look into our sociobiological adaptations, with factors that lead to different kinds of identity and systems of belief. Knowledge like this is an antidote to xenophobic insecurity, intolerance, and fear, and these are the very bogeymen employed by tyranny to extort a people’s liberties from them.
    Beyond core requirements, aside from ingraining enough common ground for the group to survive as a group, things will open up. The need for structure cedes ground to self-guided instruction, to a learner’s choice, creative studies, thinking outside the box, and interdisciplinarity. This kind of intellectual independence is even useful on the more creative fronts of STEM. The scope of what’s available beyond the core might even warrant a course of its own within the core, so that more people understand the value of drawing on other disciplines, perspectives, and frames, and perhaps exposing the insignificance of things we’ve inflated. A greater comfort with fuzziness, ambiguity, and paradox that’s learned in creative thinking helps to overcome the discomfort of cognitive dissonance, which in its turn drives much of our self-delusion.
    Deferred relevance is cousin to deferred gratification. Being able to make or keep children excited about learning something that won’t be all that useful or relevant until later in life is one of pedagogy’s greatest challenges. STEM education is probably the furthest removed from immediate rewards, that or learning a foreign language that isn’t spoken locally. Content is more ideally provided when it can be directly related to previous experiences and engaged in present life. This is how it means something. Further, recall of learned material is most effective when conditions approximate the context in which it’s learned. This doesn’t bode well for our learning in contexts that haven’t come up yet, or may not come up again. The reason to learn ahead of time is often to take advantage of the brain’s heightened ability to learn at particular developmental stages, provided the motivation is there, and to keep synaptic connections in place that would otherwise be pruned. The key is keeping curiosity alive, and the survival of the learner’s skill and interest in learning is at stake. It’s obviously easier to learn in a context in which that learning can be immediately applied. Absent that, second best is to provide some substitute or provisional relevance, to make it either fun or otherwise more immediately rewarding. With no fun or reward, it’s harder for the young to see the road itself as the journey or destination. Maybe the occasional child will discover that the best option, since he’s going to be stuck in school anyway, is to make the most of his time, and when the assignment is finished, to ask for more projects to keep from getting bored. I made a game out of trying to finish the quizzes and tests in half the allotted time, but I didn’t discover that trick until I stumbled onto it in the 7th grade. Fortunately, the teacher noticed and gave me more advanced materials. But I would have wasted fewer years if I’d had a learning coach. We learn best when we participate in selecting our curriculum, but to gain that advantage, we usually have to get through the things selected for us. And it may be only here that disciplining ourselves to do what we don’t enjoy truly builds character. When relevance is deferred poorly, discounting the distant future comes early to young minds. We need to find some incremental degrees of relevance and value, achievable intermediate goals, or ways to take pleasure while learning in baby steps.

Kindergarten
    Kindergarten is used here in the original sense of “children’s garden” and roughly coincides with Piaget’s entire preoperational stage, from language development and toilet training to around age 7. Structured learning of core material has been highly overemphasized, and the importance of play and free investigation grossly underestimated. There are only a handful of cultural assimilations that really require the structure that kids are given at this age, particularly those related to language acquisition, articulation, the most basic arithmetic, reading, and writing. These will benefit from neurodevelopmental optima. It doesn’t hurt to start basic second languages early on, especially in regions where these aren’t often spoken at home. This is a great time to learn basic moral and ethical rules of social interaction, but taught with common sense instead of ideological justifications. Beyond these, you really only need tables and chairs for free-form art projects and kid-science experiments. Even skills like letter identification and counting can be learned more informally, out of doors. Field trips and their lessons stick extremely well in the memory, trips into nature or wilderness (especially for urban kids), or museums, zoos, and aquariums. First-hand pond and mud puddle experience might best be got closer to home, with brave parents and hoses within shouting distance. It’s in our evolved nature to teach children to propagate the culture they’re raised in. But now that we aren’t so geographically isolated, we also need to expose them to other cultures, before their minds set up and harden against whatever isn’t already identified with or believed in by parents. They can still belong to their culture or group while being better informed of the others.
    There is no need for pressure or stress in early education. Enjoyment is the best driver. The term unschooling refers to an emphasis on learner-chosen activities, somewhat closer to summer vacation, but with any facilities and programs that a school and the larger communities are still able to offer. Adults are there simply to intervene in emergencies, answer questions, offer counsel, and provide security. Such schools could run all day and serve in loco parentis while parents are working, but they can’t do this well with their current prematurely institutional structure. If this schooling were mostly play, with naps, and educational experience planned and disguised as play, it would stand a better chance. But it would still require a more general mix of real life experience, including time spent on social and entertainment media, and even school cleaning and maintenance activities, as several international systems are doing. You wash your own lunch tray, but you don’t manufacture shoes.
    Storytelling remains a central component, as it’s been for hundreds of millennia, and it remains central to the most basic ways we learn. Given the enormous range of stories that are available, schools would be wise to leave more random selection for the home and promote the more entertaining stories that hold the best life lessons. Among these are the classical myths, and stories that hold templates for understanding archetypal human behavior patterns, like the several mentioned earlier. Even if the deeper dimensions aren’t yet grasped in kindergarten, they are there in the memory to be called upon later. We can also use literature to promote scientific, environmental, and social justice agendas, wherever we can sneak these past the religious conservatives. This is also a good time to start getting kids to invent and tell stories, and in the process, to try perspective shifting and decision trees or alternate endings. Piaget thought and taught that most children aren’t capable of philosophical thinking until the formal operational phase, but even if this is true, it doesn’t mean that foundations in available conceptual metaphors and analogies can’t be laid down earlier. And it doesn’t mean that they can’t be perceptive, a talent which your philosophical thinkers frequently lack.
    The minimizing of academic activities should even be applied to the gifted, who really don’t need to be pressured to learn, and who will also learn even better from play. This applies to the set curriculum, though. Elective academic studies shouldn’t be prohibited in order to encourage play. Prodigies should run with their interests. It’s not generally advisable to skip the gifted more than a single grade early on. Time at play with other children less gifted than themselves can be put to work, helping them to develop the leadership skills and the patience they’ll likely find useful later in life. They can be delegated additional tasks and responsibilities, and then they can also learn some of the consequences of being tactless and bossy, good things to know for future leaders. The neocortices of the very gifted develop a little differently than others, growing for a few years longer, then pruning unused connections somewhat more rapidly. For these kids, the most important time for broad exposure to new areas of interest is throughout the concrete-operational stage. Even here, horizontal enrichment or extracurriculars may be preferable to vertical acceleration or skipping grades. It isn’t necessary to deprive these children of less structured learning or unschooling. A fairly high percentage of the gifted will have autism spectrum conditions (ASCs), particularly the Aspies, for whom socialization might be more of a problem. These may have entirely different sets of social needs, or lack thereof, and will need special accommodation if they are to reward their society and culture properly later.
    Two well-established alternative systems exist for the younger students. Maria Montessori developed a system concentrating on personal interaction within a designed environment, full of well-planned tricks, allowing a guided “psychological self-construction.” It uses discovery or constructivist learning, through interaction with designed materials. Instead of a rigid curriculum, children have a choice of activities, at least within a specified range. There is generally free movement within the classroom. Learning opportunities are geared to sensitive or critical developmental periods. Classrooms have a mix of ages within them and social harmony is emphasized. The other, the Waldorf system, stresses continuity in teaching, so that instructors may stay with their students for years, developing some mutual individual knowledge and trust. Despite losing something in having a diversity of sources and teachers, we do learn better from those we know and trust. The subject is the student, not the subject taught, and the search is made for the student’s primary passions and interests. Multi-sensory learning, life skill development, encouraging the imagination, learning by doing, and a respect for lifelong learning all figure strongly in the program. The lesser known educational approach of Reggio Emilia is still more child centered than these. Here, children are capable of constructing their very own learning experiences, and of communicating their needs for assistance while doing so. Children can be trusted to be interested in things worth knowing about. The teacher goes along as co-learner and collaborator. Projects might be begun with no idea of where they will end.

Secondary School
    Some learn skills with more facility than others, and so education will also serve as a social-sorting mechanism. This flies in the face of delusions that we’re all the same in our essence. While our differences need not sort us into socioeconomic classes or castes, by the time kids get to secondary education, real differences in both interests and abilities have appeared. Architects of educational systems have made some truly idiotic mistakes in assuming that equality means equality of outcomes, while the usefulness to society of equal rights and opportunities remains underappreciated. In education, it’s only the latter equalities that give us a clear picture of natural interests and abilities, one undistorted by socioeconomic privilege, and only ignorance of this that prevents the deserving from rising, as they should, from wherever they start. In a sane system, secondary school is the place where this information would lead to divergence into separate schools and educational tracks. You don’t want to specialize until you know that’s where you’re going, but by now you can narrow it down. By this age, Sapolsky’s “portfolio of the frontal cortex” is giving us much better indications of how it will develop. A lot of students, probably a majority, really won’t need or want to learn much more than the expanded core curriculum, as proposed above, and will be better served by concentrating on more practical life skills and vocational training. There’s nothing to say you can’t make microscopes and telescopes in shop, and people can do a whole lot worse than squeaking by on what journeyman plumbers or electricians charge. If this isn’t enough, there are libraries.
    For most, it seems, the development of the socialization skills in secondary school takes precedence over formal learning, and this will provide a level of distraction with disastrous results that are perceived only much later, when a former student has to ask if you’d like fries with that, two hundred or more times a day. Why not bring social dynamics to the forefront and study them as a subject? The learning is clearly timely. It’s somehow both surprising and unsurprising that we don’t take advantage of this period of questioning, and its open rebellion, to offer classes in questioning and rebellion. The adolescent PFC is developing options for future trial and error scenarios now. Psychology and sociology could be taught in these years, especially towards an understanding of the things society and culture can do to mess up your mind. Kids should learn to distinguish between needs and wants, and between their rights and privileges. It’s right that they should be thinking about the dismissal of rules and norms, and thumping the culture’s idols to see if they’re hollow. There are effective and ineffective ways to question authority and voice protest. While the faculty and administration are often terrified of children breaking their cultural bonds, their attempts at suppression almost invariably have the opposite effect. This could be taught instead.
    This is also a great time to learn the worth of service, and even that it’s OK to take some credit for serving. It may be too much to ask of a hypocritical religious majority to develop service programs for social justice, world peace, international aid, and environmental protection, but where resistance to these can be overcome, publicly sponsored programs might go far in restoring faith in publicly sponsored programs in next-generation minds. Service learning is also done in the real world, so there’s that kind of training, in a world that’s increasingly unlike the parents’ own younger years. Aging parents, of course, might even wind up as beneficiaries of this.

Introducing the Metacognitive Domain to Kids
    Inquiry-based teaching and learning, and the dozens of questions suggested just above under the Cultural domain, will give a child a good introduction to thinking and feeling in the metacognitive domain. Emotional literacy and the preliminary exercise of affective self-management are also well within the realm of possibility.
    Mitakuye Oyasin, Sioux for “all my relations,” is a wonderful mnemonic reminder that we are all part of the single web of life. It’s a way to remember that we are not just human beings. We’re related to microbes, bugs, plants, mushrooms, and fish as well. Children need to be trained and even drilled in this, even when their parents are hunters and Christians. Curiosity about other life forms comes naturally to children, but not the knowledge that we are all related. I think back with some horror at what all those tadpoles and fireflies must have suffered. We have to start outgrowing both our nationalism and our human exceptionalism and take our place among our wider kindred. Neither is this to erase distinctions between the taxa and species, but to more fully embrace our diversity, and the need for our diversity, the texture of life on Earth, before we lose it. Unanimity, homogeneity, and consensus have no texture or depth. It’s a lot easier for kids to learn this than it is for adults. Our species exceptionalism is part of the denial that a parasite requires to keep on feeding without intrusions of conscience.
    Dave Foreman of Earth First! proposed the commandment “Pay your rent.” Human beings have transitioned from a symbiotic to a parasitic species in a frighteningly short time. Human parasitism is the common ground on which overpopulation, overconsumption, and human exceptionalism all squat so smugly. Nearly all of us do little more than take what we need, even those who take less out of conscience. We give so little back. We stay busy taking care of our own needs and wants, and these are usually artificially inflated. Those of us with deeper consciences feel a weighty indebtedness to both culture and to the biosphere out of which we emerge, and without which we might soon fail to emerge. We have to give back. This is a noblesse oblige, out of gratitude, for all we’ve been given. And ultimately, a hard look at the problem will regard most human beings as ingrates. Service learning is a good start here, but ultimately it needs to pervade our feelings, thoughts, and actions. Childhood is the right time to start developing this, and whatever schooling is chosen is the place, since the solutions we need will demand our collective action.
    While it may be early to get a child thinking about thinking (which is not yet metacognition), it might be useful to start referring early and explicitly to a child’s thoughts and feelings. You have a brand new brain. Let’s talk about how it works. Not “What do you think?” but “What does your brain think about this?” Not “How do you feel?” but “What do your feelings feel about this?” Sneak some self-reflection and self-awareness into the conversation. Waking up is what sentient beings are born to do.



2.12 - Metacognitive Domain -
Thoughts and Practices for Dults

Not Too Late for a Little Work, By Domain: Sensorimotor and Native,
Accommodating, Situational, Emotional, Personal, Social, Cultural,
Linguistic, Work in the Metacognitive Domain, Elucidogens

“Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.”
Charles Mackay

Not Too Late for a Little Work
    When we’re over twenty-five, with our mostly-ripened PFC, the odds are good that most of our cognitive and affective neural configurations are largely fixed now, in the unhappy sense of that word. There will still be significant unlearning and relearning opportunities and possibilities, often with effort that’s proportionate to how entangled these targets are with the rest of our memories. Many settings will be fiercely protected by firewalls of identity, belief, and belonging. It doesn’t matter that so much of this wasn’t our fault or doing. We have to prioritize, do some triage, and expend our efforts at self-correction where they’re likely to do the most good. For most, unfortunately, that target will only be an increase in personal happiness or satisfaction. With respect to the bigger picture and the future of the species, the best we can do is take steps to see that this doesn’t happen to our children, at least to the extent we can prevent it. And for second best, we might prioritize those ignorances and delusions that threaten the continued existence of our species, the biodiversity of life on earth, and the aggregated biosphere that sustains us. The difficulties of doing this, or even of making a significant start, should be obvious by this point in the book. The intractability of our human denial is impressive and frightening. Humans are junkies and parasites, commensals who just take and take, and for the most part give nothing back. But we can make local efforts and set local examples, and hope for some kind of contagion. Or maybe we leave signs and warnings for the archaeologists digging through our ruins.
    One of the more vapid new age platitudes says the only way to change the world is to change ourselves. This is at its most appealing to self-absorbed narcissists with no sense of obligation to the culture or to the living world. Yes, we all need work on ourselves, but that doesn’t mean accomplishing nothing constructive in our larger contexts while we’re doing it. It’s time to start taking steps to get over ourselves. All of the kid-stuff thoughts and practices discussed in the last section are useful to grownups as well. It’s no less important for dults to prioritize and satisfy the most basic needs first, and this might even free up some time and energy for more meaningful pursuits. Play remains important, even if it takes on a more sporting character. And we still need a sense of humor: we’re as good as dead without that. Emotional awareness and affective self-management are every bit as important as the cognitive. Standing corrected, both for misapprehension and misbehavior, might remain the single most vital practice if we’re going to keep growing and learning. Improving a bit on Luke 4:24: No man is a prophet in his own village. We need those closest to us to be candid with us, and we need to listen to them when they are, or else we let whatever wisdom we find go to our heads, where it stops flowing forth.
    Of the various branches of psychology available now to us dults, the two that are most at home in the metacognitive domain are cognitive and positive psychology. Not surprisingly, cognitive psychology tends to be heavy on the cognitive side of things, and too light on the affective and limbic substrates of cognition. Its main areas of interest are memory, perception, categorization, representation, and reason or thought. It works here as well as it can, while generally ignoring some basic components of the mind. Cognitive Behavioral Therapies (CBTs) generally take another step in the right direction. These will include such systems and practices as cognitive reframing and restructuring, cognitive reappraisal, and cognitive emotional behavioral therapy. Some are named: Structured Cognitive Behavioral Training (SCBT), Rational-Emotive Behavior Therapy (REBT), Acceptance and Commitment Therapy (ACT), and Motivational Interviewing and self-efficacy (MI). All of these owe a debt to neuroscience and, acknowledged or not, to historical mindfulness practices and Buddhism. As strategies for thinking that help structure and attune our responses to the world, they might be viewed as operating software. To the extent that they are developed with wet and juicy brains in mind, they may eventually become more neurologically optimized for our messier human wetware than the drier theories about reason and information processing that currently predominate.
     It’s lamentable that so much of psychology’s database is founded on study of damaged brains and disappointing human behavior. It’s more of a science of squeaky wheels than one of impressively balanced and frictionless wheels. Mental health in psychology will tend to be regarded in terms of the normal condition, the center of the bell curve. It has little to say to self-actualizing human beings, the gifted and creative, except to wish us well, and good luck trying to fit in. Positive psychology attempts to address this missing part of the field. The term positive means to posit, put forward, propose, advance or assert. The word suggests creativity. While even the best of us needs quite a bit of work and at least some repair (even those who call themselves masters), this positive branch of the field will spend less time looking for healing, less time looking backwards into what caused us to go astray, and less time wringing our hands over the misfortunes of our earlier circumstances. Someone who is getting therapy in positive psychology might be receiving life coaching, or skydiving lessons, or vocational guidance, or philosophical counseling, or mindfulness training.
    In many ways, this is the psychology of the non-normative or exceptional, in search of Maslow’s “farther reaches of human nature” and perhaps even Nietzsche’s “man is something to be surpassed.” It was first named in the late 1990s by founders Martin Seligman and Mihaly Csikszentmihalyi, who offered this definition in 1998: “We believe that a psychology of positive human functioning will arise, which achieves a scientific understanding and effective interventions to build thriving individuals, families, and communities. Positive psychologists seek to find and nurture genius and talent, and to make normal life more fulfilling, not simply to treat mental illness. The field is intended to complement, not to replace traditional psychology. It does not seek to deny the importance of studying how things go wrong, but rather to emphasize the importance of using the scientific method to determine how things go right.”
    We run into two problems here right away. The first: what is exceptional will almost necessarily be anecdotal. It’s oxymoronic to study the exceptional as a group. This of course has critics wagging their fingers and questioning whether this can ever be any sort of objective science, or subjected to reliable measurements. More normal therapies are more amenable to objective study, measurement, and statistical analysis, as long as the individual differences in both patients and their therapists can be either averaged or ignored, and as long as self-reported mental states aren’t mistaken for perfectly objective data. But how can we study exceptional, or even just abnormal, success, except anecdotally? We need a lot more anecdotes, or the much-maligned anecdata.
    The second problem: Confusion arose almost immediately in the literature over the positive-negative dichotomy, which the simpletons want to read as happy vs bitchy. The positive posits new or creative approaches to living, while the negative negates or aborts the maladaptive ones. The parallel is to mutation and selection in evolution. The distinction was clear at the start, but far too much of the preliminary research seems to be using self-rated levels of happiness as the first measure of a person’s or subject’s psychological well-being. They aren’t even trying to distinguish happiness from eudaimoníā and sōphrosýnē. It’s all about whether you’re smiling or not. The new age is just creeping onto that lovely new lawn like crabgrass. To someone working at personal purpose, or higher purpose, or someone in the state known as flow, happiness might be little more than a gadfly distraction. It’s not the pursuit or the goal. It’s only the occasional attaboy or attagirl from Life.

By Domain: Sensorimotor and Native
    There’s more metacognitive work to be done in some domains than others. The first two are straightforward. The exercise for the sensorimotor domain is simple, even a no-brainer (figuratively anyway): exercise the faculties in this domain. Wake up and pay better attention. Experience more novelty than you expect to see. Look for the sacred under the surface of the ordinary: that stuff’s everywhere. Revere it. Feeling grateful for having the ability to do this adds even more. And you don’t need some stinking religion to do that, or an imaginary friend to give thanks to. We can simply treat ourselves to enriched experience, even when that means going somewhere exotic, like nature. We can move around here in both self-regulated and spontaneous ways, paying attention all the while, coming to life, so to speak.
    We can continue to look for mental exercises that show where our native heuristics are inclined to mislead us. Magic and other sensory illusions, along with examples of pareidolia and apophenia, can continue to both entertain and educate. Science, and maybe especially the science presented for children, holds an endless stream of discoveries that show us that naive realism doesn’t depict reality. The table isn’t solid. The stars are no longer configured as they appear. Non-human organisms living inside you outnumber your human cells. Green isn’t the color of plants: it’s the slice of the spectrum they reflect because they have no use for it. It helps to be always suspicious that there are parts of the world we aren’t aware of. It keeps us wondering, and guessing.

Accommodating
“If your mind is empty, it is ready for anything. In the beginner’s mind there are many possibilities; in the expert’s mind there are few.”
Shunryu Suzuki on chūxīn
    Cognitive self-management is an easy habit to let lapse. As an executive function, it’s an easy thing to fail to develop much at all. Most of the time we take the path of least effort and let the mind run on autopilot. We don’t value the return on the efforts of management. Both the return and the effort can be considerable, but not always correlated or predictable. Most of the time we’d rather learn enough to get by and return to our dreams and sleepwalking. The several forms of self-management call on executive cognitive functions that are learned and developed most predominantly in the prefrontal cortex, with the affective side in the ventromedial portion and the cognitive side in the dorsolateral (per Sapolsky). Several other parts of the brain are also involved in agency, but these two are central, and importantly, they don’t reach peak development until around our mid-twenties.
    The signal-to-noise ratio in most of the data stream we’re exposed to is low, and getting lower every year. We have to learn to filter or sieve the noise for higher quality data. We have to see a value in doing this, a return on our investment, because vetting our inputs on the way in is a lot of work. Let’s say the signal-to-noise ratio is 10%, and we can only choose to absorb 10% of the data stream. Then, without any effort, we can fill our heads with 90% crap and 10% knowledge, or we can, with lots of effort, fill our heads with 90% enriched knowledge and the 10% of crap that got through our filters. What we fail to remember is that the crap adds up and is much harder to get rid of than it is to acquire, so that over a lifetime we grow up full of shite. What’s it worth to not do that? That may depend on what we are living for, if anything.
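    For those who like their metaphors quantified, here’s a toy sketch of that arithmetic in Python. Every parameter is invented for illustration: the 10% signal ratio from above, a fixed daily intake, a filter of imperfect precision, and a much slower rate at which stored crap can be dislodged. It claims nothing about real cognition; it only shows how the bookkeeping compounds over a lifetime.

        # Toy model only: all parameters are invented for illustration.
        SIGNAL_RATIO = 0.10      # fraction of the raw stream that is signal
        INTAKE = 100             # items absorbed per day
        FILTER_PRECISION = 0.90  # fraction of filtered intake that is signal
        UNLEARN_RATE = 2         # stored crap we can dislodge per day, at best

        def lifetime(days, filtered):
            knowledge, crap = 0.0, 0.0
            for _ in range(days):
                share = FILTER_PRECISION if filtered else SIGNAL_RATIO
                knowledge += INTAKE * share
                crap += INTAKE * (1 - share)
                crap = max(0.0, crap - UNLEARN_RATE)  # removal is far slower than intake
            return knowledge, crap

        for filtered in (False, True):
            k, c = lifetime(days=365 * 50, filtered=filtered)
            print(f"filtered={filtered}: knowledge={k:,.0f}, crap={c:,.0f}")

Run it and the unfiltered head comes out mostly crap, the filtered one mostly knowledge, and even the filtered head carries a substantial residue of what slipped through. The stain goes into the carpet faster than it comes out.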
    Adaptive learning functions in the world of panta rhei, all in flux, the world of not stepping into the same river twice. The accommodating mind has to continually update to this, often having to think twice about things, and even more than twice. Cognitive flexibility, thinking with a mind that stays limber and nimble, is a little more work, but it’s a lot more fun. We can learn to be amused, entertained, or intrigued by cognitive dissonance rather than anxious, threatened, or tormented. F. Scott Fitzgerald noted, “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.” This requires an appreciation of multiple perspectives, frames of reference, or a sense of paradox. When we see something that doesn’t quite fit with what we already know, we have an opportunity to grow. Isaac Asimov noted that “that’s funny” heralds new scientific discovery better than “eureka.” And Richard Brodie phrased it that “geniuses develop their most brilliant original thoughts through self-imposed cognitive dissonance.” And of course, there’s Walt Whitman’s quip “Do I contradict myself? Very well then I contradict myself (I am large, I contain multitudes).” This starts by getting us above our psychological and cognitive partitioning, where we may be able to see things from other perspectives and in alternative frames of reference.
    Both belief and disbelief are problematic. With effort, we can acquire an ability to suspend both, and authentic enquiry demands it. We need to take promising things into our minds long enough to examine or entertain them, although the longer they stay there, the more interconnected they get. We can’t always question, investigate, or examine from our old, familiar, single perspective, no matter how well-tested that perspective may be. We do have to remember that to have a genuine experience doesn’t require us to reify it. We might, for example, allow ourselves to experience total immersion in a boundless consciousness and feel the outpouring of infinite, divine love. But this need not strip us of one bit of our agnostic or atheistic credibility. There’s a prevalent myth that somehow paganism and mysticism, and any transcendent feelings, are fundamentally incompatible with skepticism, logic, science, and (more generally) intelligence. This is an error, perpetuated by those who lack these affective and cognitive skills. These skills are like passports that allow us to travel to any strange land, any altered state, or any alternative experience. The fact that we spend that hour in tears of joy and gratitude doesn’t make any of that experience metaphysically real. It doesn’t in any way make love and consciousness into fundamental properties of the universe. But what’s the point of having a good mind if we’re too stuffy to allow ourselves an experience like that? We get our stress relieved and our eyeballs cleaned. Even a good scientist can do that if he doesn’t run around blabbing about it to peers. We aren’t required to believe a thing, but now we can understand from experience where the mystics went both wrong and right. We can strike a balance between rigor or conscientiousness and the opportunity to experience openly and in full. The balance we strike might swing for a while, and that’s OK too.
    Sometimes, too, we can take the opposite point of view just for the sake of learning what it has to offer. The Catholic church has long used the role of advocatus diaboli, the devil’s advocate, to make certain other perspectives get seen and heard, and not inconsistently, this is also a significant part of the practice of LaVeyan Satanism. We can’t fear the other, or our shadow, if we really want to be whole. Carl Sagan also supported exploring, even forcing, alternate perspectives: “Spin more than one hypothesis, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among ‘multiple working hypotheses,’ has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.”
    The mind is a bricolage, built on things accepted, whether examined or not. But we can accept conditionally, temporarily, without letting ideas get too rooted to be pulled out again. Minds without tools of analysis become so cluttered with random and unquestioned beliefs, and so lacking in criteria for assessing meaning, that gullibility is the only possible outcome. So when the showmen come out with the next big Mayan event, or harmonic convergence, or planetary alignment, or Second Coming, the true believers will just keep bouncing back for the next one and never seem to learn. But with our critical skills we can see what these people are seeing, and empathize with what they are feeling, and where it’s useful, maybe show them a way out. As an atheist, I’ve spent many hours with the One True God, and I can testify that he has a lot more fun with us than he does with the true believers, who mostly just embarrass him. We also learn in these encounters why humans write their silly holy books the way they do. For further confirmation, refer to the Book of Bokonon, or at least the excerpts you can find in Vonnegut’s Cat’s Cradle.
    Skepticism has been fighting for an honored place among us for a very long time. It developed in India and Vedic studies with the idea of Ajnana (not knowing, with the same etymology as agnosis). It was taken up by the Buddha, who remained agnostic towards all religious “truths” because minds will distort whatever they perceive in proportion to their incompleteness and their cravings. In China, the first real proponents were Laozi and Zhuangzi, although later Daoists in the more religious traditions (Daojiao) would prove to be the very opposite of skeptical. In ancient Greece we had Anaxarchus of Abdera, and Pyrrho of Elis, whose work was carried on by Sextus Empiricus. Here we have the idea of acatalepsia, incomprehensibility, uncertainty, and the withholding of assent from ideas and doctrines. Knowledge is limited to or by appearances and our own passions, not unlike the Buddha’s claim. The Cynics would pick this value up as well. This school was not what we think of as cynicism today. The original idea was to live a virtuous and simple life in harmony with nature. It just happened that a lot of our human bullshit stood in the way of doing this. The values of the Cynics included eudaimoníā, good spiritedness, a happy state of clarity and well-being; anaídeia, shamelessness, immodesty, impudence, cheek, or impertinence; áskēsis, self-discipline, rigor, asceticism, exercise, training; autárkeia, self-sufficiency, contentedness, self-satisfaction; and parrhēsía, outspokenness, candor, fearless speech, including speaking truth to power. Being scrupulously truthful is not always the same as being tactless, or trolling to demonstrate a point, though these can still be done. The Arab thinker Ibn al-Haytham (965-1040 CE) wrote: “The duty of man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads and … attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.” Descartes introduced methodic doubt, but sadly, found himself unable to doubt some pretty questionable preconceptions.
    One of the best formulae for skepticism is a legal principle, onus probandi incumbit ei qui dicit, non ei qui negat, the burden of proof rests upon the person who affirms, not he who denies. This was partially reformulated as Hitchens’s Razor: “What can be asserted without evidence can be dismissed without evidence.” We are, nonetheless, resistant to simply having ideas destroyed by a skeptical attitude, especially those that delight us or suggest other reasons for acceptance. It’s a good practice, therefore, to replace these ideas we’ve so effectively demolished with something that satisfies our hunger equally well, but without the side effects of error.
    Another good rule to keep handy is Occam’s razor, the Law of Parsimony. The simpleminded common rendering of this as “the simplest explanation is the best one” is incorrect. The rule of thumb literally reads: entia non sunt multiplicanda praeter necessitatem, entities are not to be multiplied beyond necessity. William of Occam was responding in part to the theologians’ habit of introducing different choirs of angels or different levels of metaphysical existence into an explanation. It would be like us adding new dimensions of existence (like cosmological constants, and dark matter and energy) to force some congruence between our observations and our models, even though these haven’t been observed yet. We may follow an aesthetic bias and look for elegance, symmetry, or simplicity first, and try to refrain from dragging in new ideas, dimensions, and alien entities, but this isn’t always the solution. Elegance is nice to see, but it isn’t a rule that binds the universe. When two or more hypotheses are compatible with the available evidence, we simply look first to the one that introduces the fewest new assumptions. As Isaac Newton phrased it: “No more causes of natural things should be admitted than are both true and sufficient to explain their phenomena.” But Newton did go way overboard in claiming that god was the reason that simplicity was the rule. Einstein’s take was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”
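    Since parsimony as stated above is a working rule, here is a minimal sketch of it in Python, with a purely hypothetical slate of candidate explanations. Among the hypotheses compatible with the evidence, we simply examine first the one that drags in the fewest new assumptions. Note that the rule only orders the inquiry; it settles nothing by itself.

        # Hypothetical candidates: whether each fits the available evidence,
        # and how many new, unobserved entities or assumptions it introduces.
        hypotheses = [
            {"name": "new force field",    "fits_evidence": True,  "new_assumptions": 3},
            {"name": "instrument error",   "fits_evidence": True,  "new_assumptions": 1},
            {"name": "choir of angels",    "fits_evidence": True,  "new_assumptions": 9},
            {"name": "alien intervention", "fits_evidence": False, "new_assumptions": 7},
        ]

        compatible = [h for h in hypotheses if h["fits_evidence"]]
        for h in sorted(compatible, key=lambda h: h["new_assumptions"]):
            print(h["name"], "-", h["new_assumptions"], "new assumptions")
        # Parsimony sets the order of examination, not the verdict.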
    Eclecticism was discussed at some length in Part One. It’s important that we remain able, if needed, to select only portions of information packages and ideological systems while dismissing others. A version of this inquires into both the necessity and the sufficiency of each piece. In most ideologies we can find quite a bit of filler, as well as ulterior motives and Trojan horses. A good eclectic could probably distill what’s worth reading of the moral and otherwise useful instruction in the Bible down to a single page or less. And that includes the accidental profundities like “All flesh is grass.” Then the remainder can be set aside. Eclecticism goes hand-in-hand with syncretism, which can then take those salvaged pieces from many lands and places, and other disciplines, and reassemble them into new packages and systems with more functional parts.
    Reframing, as a therapy, is described as identifying and disputing irrational or maladaptive thoughts, changing minds by changing the meaning of things. But this is only one facet of how it’s meant here. Reframing was outlined in Part One more consistently with the framing metaphor, as addressing issues of narrow-mindedness (points of view and perspectives); nearsightedness (spatial framing and orders of magnitude); small-mindedness (contextual and conceptual framing); and shortsightedness (temporal framing and deeper time horizons). Reframing is changing the mindset, context, or the ground a figure is set in or against. The photographer switches lenses and filters. We can use switchable cognitive styles, templates, or metrics. And changing the way we perceive an event can change its practical meaning. Cognitive dissonance is often resolved in big-mindedness, and so are many other issues of tolerance. Some tragedies become comedies. Narration or storytelling can be considered forms of reframing when we put life lessons into different relatable contexts. We can try reframing the idiotic platitude “Everything happens for a reason.” Obviously, life is really opportunistic stuff, and is able to salvage some kind of useful outcome or lesson from most situations. On the other hand, if we knew the disposition of every subatomic particle, we might make successful causal predictions of everything but the more strongly emergent qualia. But these two senses of reason do not go together. Explanatory reasons aren’t teleological reasons. Nietzsche simply took a step up from this one and noted “A loss rarely remains a loss for an hour.” That’s why things so often work out for the best, and it has nothing to do with reasons or divine plans.
    Subration is a concept developed by Adi Shankara, an expositor of Advaita Vedanta. This is the reevaluation of a previously appraised level of mental function when cancelled, refuted, or displaced by another level. The former knowledge now takes on the sense of illusion or dream. We have access to something better now. This is likely the most powerful and effective way to unlearn: we just put some obsolete crap behind us, having experienced it as having little value or little to offer, relative to our new experience of a better alternative, and we move on improved. In this, it’s related to samvega, the Janus experience discussed elsewhere. Cognitive unlearning is similar, except that it tends to lack the affective component that values an alternative more highly, and so it’s proportionally less effective. Behaviorally, unlearning is referred to as extinction. Feedback supporting the behavior is withdrawn, while feedback discouraging the behavior may be increased. Wrong or bad learning remains in the brain, at least for a while. It isn’t immediately erased or overwritten so much as increasingly disregarded as the go-to connection or association. Neuroplastic processes may disconnect it eventually.
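    Behavioral extinction as just described has a standard toy formalization in learning theory: a delta rule in the style of Rescorla and Wagner, in which each trial nudges an association’s strength toward whatever the feedback currently supports. A minimal sketch, with invented parameters: when reinforcement is withdrawn at trial 15, the learning isn’t erased, it’s just driven back toward zero as the go-to connection loses its standing.

        LEARNING_RATE = 0.2  # invented: how far each trial revises the association

        def update(strength, reinforced):
            # Delta rule: move strength a fraction of the way toward the outcome.
            target = 1.0 if reinforced else 0.0
            return strength + LEARNING_RATE * (target - strength)

        strength = 0.0
        for trial in range(30):
            reinforced = trial < 15  # feedback supporting the behavior is withdrawn here
            strength = update(strength, reinforced)
            print(f"trial {trial:2d}  reinforced={reinforced}  strength={strength:.2f}")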
    Deprogramming is a deliberate rewrite of ideological software perceived to be obsolete, erroneous, counterproductive, or toxic. Deprogramming may be self-performed or done by intervenors, where it’s more subject to abuse and violations of the cognitive liberties of consenting individuals by intolerant cultural entities. Any society with vested interests in conformity will resist attempts at self-deprogramming, especially when it undermines programs like national war efforts. Gaslighting is a common persuasive practice, but it yields to deprogramming when the subject can be immersed in higher quality information. It helps to remember that unlearning and relearning are a lot more difficult than learning incorrect things in the first place, just as it’s lots easier to put stains into a carpet than to get them out. This thought is useful incentive to vet input as it’s being absorbed, or while it’s being held in the mind under probationary review. It’s an incentive to be judgmental, at least to the extent it doesn’t harm us as well.

Situational   
    Metacognitive efforts in the situational domain are fairly straightforward, and largely concerned with optimizing our skill sets and problem-solving strategies. Most problems here occur either when biases prevent us from looking at areas of life where solutions might be hiding, or when emotions arise to dampen our confidence or courage. These two are discussed in the previous and next domains, respectively.
    A common problem in this domain is being stuck down on lower levels of abstraction when the optimum solutions to problems suggest overview and better comprehension, or de Bono’s lateral thinking. Here we have the well-known Einstein quote “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.”
    Another issue concerns a lack of acceptance of our present situation as a given, the reality-based conditions that we need to begin with. Acceptance is not the same thing as approval, or acquiescence either. There’s a place for viewing a situation as we wish it were, but that place is somewhere within our plan to make it so. It’s not in the initial sitrep. We have to start where we are, in the real world, if we want to optimize our effectiveness and avoid confusing the real with the imaginary. This absolutely does not mean that we should “let it be” or that the world is running exactly as it should, in this, the best of all possible worlds. While referring to believing in a somewhat different way than it’s used here, Carlos Castaneda wrote, “‘Believing is a cinch,’ don Juan went on. ‘Having to believe is something else. In this case, for instance, power gave you a splendid lesson, but you chose to use only part of it. If you have to believe, however, you must use all the event.’” This using all the event is also a central element in the martial art of Aikido, where we “enter-and-blend” with all of the forces that be, just as they are, finding the center and turning with that, before we turn them around to advantage.

Emotional
“Don Juan assured me that in order to accomplish the feat of making myself miserable I had to work in the most intense fashion, and that it was absurd…. ‘The trick is in what one emphasizes,’ he said. ‘We either make ourselves miserable, or we make ourselves strong. The amount of work is the same.’”  Carlos Castaneda, Journey to Ixtlan
    Affective self-management isn’t the same as suppressing or repressing our emotions, or controlling them, or squelching them, or sublimating them, or ignoring them, or just not having them. In fact, there are a few problems with the model that uses the terms suppression and repression. This can imply that emotions are like some kind of hydraulic fluid that needs to go somewhere under pressure and has to come back out eventually. The notion of catharsis has the same problem. Emotions are responses that our organisms come up with on the fly. They are created on the spot. They aren’t stored in some tube, tank, or vat somewhere. The only continuity they have is that they involve some similarly perceived cocktails of neurochemistry and similar patterns of associative memory involvement, reactions to triggering stimuli. Neither is affective self-management the same thing as manipulation of the emotions in any sense that requires that emotions be felt with less intensity. It will merely invoke some cognitive elements like relevance and (re-)evaluation to provide additional choices in response to stimulus. Of course, for emergency use in self-management, there are the two old standby techniques: counting to ten, and deep, slow, regular breathing, particularly through the nose.
    Epicurean Hedonism is a worthwhile study here. This regards our affective states as reliable cues to how well we’re living our lives, or how adaptive our choices have been. We’re informed by the quality of our pleasures. Of course, many still confuse this with both lesser and later conceptions of hedonism, so it will often be necessary to qualify this whenever pleasure is mentioned as a guiding principle. It’s also easy to conflate the appreciation of happiness as valuable information with the pursuit of happiness as a driving ambition. You might call this version a long-range hedonism which extolled the virtues of good taste, the refinement of our desires, and the deferral of shortsighted self-gratification. “The greatest wealth is to live content with little, for there is never want where the mind is satisfied.” The highest and most pleasant states of pleasure were identified as joy (kharā́), to distinguish them from our more typical sense of pleasure (hēdonḗ). True happiness is human thriving or well-being (eudaimoníā). We constrain ourselves on this path to good taste and higher pleasures, while avoiding neutral, anhedonic, or apathetic states. This does not in any way imply the neurotic approaches to sexuality so typical of many religions. It simply means to approach such experiences in ways that don’t do you lasting emotional (and perhaps reputational) damage.
    Resentment is the repetition of an unpleasant feeling or emotion every time a memory is recalled. It derives from re-sentiment. But memory isn’t like a library where we find a thing, check it out, and return it unchanged. Memory is plastic. We form new associations to that event as it’s brought up, including the state of mind or affect we are in while doing the recall. When we recall an event that made us angry, and that recall makes us angrier still, we put this memory away with a still stronger association to anger. We give it yet another tooth with which to eat at us down there in that subconscious of ours. But it works the other way as well. When we recall an event that made us angry, but entertain the memory in a better frame of mind, as with a new understanding, or an awareness of mitigating circumstances, or an attitude of forgiveness or compassion, or even a devaluation of the event’s importance, we put the thing away again in a somewhat less toxic form. The memory will improve in its affective tone every time we do this. When we add a new level of cognitive understanding to the memory it’s called cognitive reappraisal. But we ought to be talking about affective reappraisal as well. Castaneda, in The Eagle’s Gift, termed the process recapitulation, a way to “discharge one’s emotions so that they do not react and one can perceive clearly.” This is a full and courageous level of recall, retrieving all associated feelings invested in the memory, while seeking to avoid confabulation and editing. With this skill in mind, it becomes pointless and silly to run from our less pleasant feelings or continue to deny them. We turn and face them down, and will sometimes choose to rise above them instead.
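    For the formula-minded, a deliberately crude sketch of that plasticity: treat a memory’s stored affect as a number between -1 (bitter) and +1 (warm), and let each recall write back a blend of the old affect and the mood it was entertained in. The blending weight is invented, and real memory is nothing this tidy, but the direction of drift is the point.

        BLEND = 0.25  # invented: how much of the current mood gets written back per recall

        def recall(stored_affect, current_mood):
            # Re-store the memory as a mix of what it was and how we revisited it.
            return (1 - BLEND) * stored_affect + BLEND * current_mood

        affect = -0.9  # an event originally filed away in anger
        for _ in range(10):
            affect = recall(affect, current_mood=0.5)  # revisited with some compassion
            print(f"{affect:+.2f}")
        # Revisited in anger instead (current_mood=-1.0), the same loop darkens it.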
    Another self-management technique borrowed from the extended metaphor of sorcery might be called “naming your demons.” As is known to readers of myth and fable, the sorcerer will gain control of the supernatural thing (or subconscious entity, or creature from the Id) by discovering its name. Then, instead of being haunted by some unspeakable thing, he makes the damned thing run errands for him. It even works on the Devil, if you use the right word and “spell” it correctly. Knowing the names of things, such as names of the heuristics, emotions, and anticognitives itemized in Part Three, adds extra associative handles to our anticognitive processes. When we take up one to study it, we’ll learn it better if we relate it then to something we’ve done in the past, or absent that, by constructing a hypothetical example. Take, for example, sonder, that excellent neologism for the feeling-recognition that each passerby has a life as vivid and complex as your own. If you haven’t felt this, pay attention next time you drive through a residential neighborhood at dinnertime. And say sonder, with a sigh and a little wonderment. All those families having all those dinners. We also use naming fairly frequently just to dismiss the need for a further experience of something or someone, or just to simplify things when we might not need to go deeper: “Oh, I know John. He’s that mechanic with the shop on 5th.” Do I really know John with this simple bit of data? Can I wrap him up in this? Marshall McLuhan called this the “label-libel gambit,” the tendency to dismiss an idea by the expedience of naming it. But here in this domain we’re not defusing anything that we don’t want defused. Here we’re giving ourselves an additional means to access our experience, cognition, associated feelings and emotions, additional handles with which to sort things out. It helps us in recognizing errors by what kind of errors they are, and in knowing what to look for.
    This practice or exercise of naming your demons fits perfectly with the Buddha’s Samma Sati, or Right Mindfulness, and each of these names can be a mental object used in this meditation, whether this is meditation on feelings or sensations (Vedananupassana), on the activities of the mind and mental processes (Cittanupassana), or on objects of thought (Dhammanupassana). This is a more active and directed process than Samma Samadhi or Right Concentration, discussed below. Here we will deliberately raise a memory, along with its affective components, and then ask whether we still want the association in that particular weave or net, or whether we want to change it, or use more wisdom in tying it to a particular feeling or emotion.
    It was mentioned before that cognitive dissonance doesn’t really have to upset or threaten us as much as it does when we’re unaware of it. We can also be entertained or amused, intrigued by finding something new, something that we haven’t fully comprehended yet. These are things we can play with. Two ideas that appear conflicting or dissonant might sometimes even be used to launch us into higher mental states, states that raise us above the level of comprehension where the conflict is taken seriously. One of these states is called Erleichda, lightening up, a neologism from Tom Robbins’ Jitterbug Perfume. Somewhat more common is the exalted state known as humor. Here we aren’t really talking about low states like schadenfreude or insult comedy, but states of good humor that either raise us up or occur to us following the elevation of awareness. This will often involve the joining of two ideas from unconnected categories, so that we have a sort of fusion energy that gets liberated. Well-known exploits of this phenomenon are found throughout the teaching stories of Sufism, Daoism, and Chan or Zen Buddhism. To the more rigid religious ideologies, suddenly taking things less seriously may be seen as a serious threat. Note for example that Isaac was the old Hebrew word for laughter. This is what Abraham was asked to sacrifice, and he passed his test by being ready to do this. It may be sacrilege to the Abrahamic faiths, but it’s more emotionally mature to cultivate and maintain a sense of humor about yourself, your identity, ideas you’ve accepted, and groups you belong to.

Personal
“Consistency requires you to be as ignorant today as you were a year ago.” Bernard Berenson
“With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day.” Emerson
    Part One offered a general survey of the many hazards of identity, belief, and belonging, especially as fixed ideas that we regard as providing us with some personal security in a dynamic world. But they do no such thing: they demand that we defend them wherever they’re threatened with change or correction. This is just a sunk-cost fallacy, a backfire effect, a doubling down on ideas gone bad. Real security is found in successful adaptation to changing circumstances, in resilience, fitness in the sense of being the right fit for the circumstances. We’re more often praised for staying our course, even, and perhaps especially, when we encounter repeated difficulties along that course. We have conviction, perseverance, and firmness of purpose. And we’re often stigmatized for changing our minds. Then we’re wishy-washy vacillators and waffling flip-floppers.
    Not being a quitter can indeed be a virtue, but why isn’t knowing when to quit one as well? Conviction is what gets celebrated, but it often turns out to be wrong. Perhaps our predictability is that important to culture. In Han Dynasty (or slightly earlier) China, there emerged the paired philosophical ideas of róu and gāng, flexibility and firmness. These were related to yīn and yáng, which are more general ideas that arose at about the same time and became more widely popular. Both pairs were retroactively, but incorrectly, regarded as the very basis of the original Book of Changes. These ideas saw what might be termed correction and conviction as belonging together. Neither can be praiseworthy alone, where imbalance tends to be maladaptive. If we want to outgrow this self-limitation, we need to face feedback with candor, humility, and honesty, to de-stigmatize changing our minds, to find honor in standing corrected, to open our identities and beliefs to breaths of fresh air, new inputs of energy and information. In systems theory, this is how local systems fight entropy. Boundaries need to be permeable.
    Ideas of non-attachment have seen a wide variety of forms. It may be too commonly seen as shutting down emotionally, ceasing to like or love in order to avoid dislike or hate. Intellectualization is a movement from hot cognition towards the cool to escape the added cognitive load of affect. This is both a coping strategy and a defense mechanism. The Stoics proposed the virtue of apatheia, to be without pathos or suffering. Eventually, the idea degenerated into the modern word apathy, to be without anything of any emotional value. The Buddha is incorrectly known for advocating withdrawal from everything because life is suffering. Counting to ten and taking deep breaths are well-known and normally healthy strategic responses to challenging stimuli. De-identification, like this is not me, this is not my belief, doesn’t leave us with nothing to be or hold onto. These just give us a little space in which to slip in some alternatives. The bottom line in mindfulness practices is learning to recognize thoughts and feelings as no more than thoughts and feelings, as transient appearances in consciousness that come and then go, like bubbles that come to the surface and pop. Observing the flow of phenomena, we also become less averse to the spaces between things, the moments of what used to be awkward silence. Then the urgency to fill every gap of intolerable void goes away along with those thoughts and feelings that promised us forever. Non-attachment should be no more complicated than releasing our death grip on whatever holds us back or drags us down.
    Whether our identity is open or closed, we integrate new experience and life lessons according to their personal relevance, meaning, and value. On top of these, our cultures often provide us with sacred values. We may be brought up to believe that these take precedence over our individual lives. Along with other emotional processes like denial and emotional hijacking, these sacred values will interfere with how the mind grasps magnitudes. Nathan Hale got immortalized by saying “I only regret that I have but one life to lose for my country.” And he was praised for saying that. Millions have gone down that heroic path, though most are already forgotten, and all of them are mostly just dead. In the metacognitive domain, we can learn to manipulate the worth, weights, and measures of these relevances, meanings, and values, including the sacred ones. We can also just do fine tuning. To be in this domain at all, we need to question what’s relevant, what everything means, what’s the real worth of things aside from what we’re told they’re worth. Nietzsche’s call to a “revaluation of all values” meant an ongoing assessment of even the most sacred, the very tablets of law. To that end, we ought to philosophize with a hammer, while sounding the culture’s idols for that ring of hollowness. Those that get broken are not to be mourned. Higher purpose, where we are living for or dedicated to something greater than ourselves, can still maintain values as sacred, inviolate, and even more important than our individual lives. But with a metacognitive approach, these don’t come from others. They are the expressions of our individual examination, choices, and our sovereignty as individuals. Flexibility of framing, scales, horizons, and perspectives is an important key to reappraisal, appreciation, and revaluation. We can expand our minds by expanding our sense of the relevant and meaningful, and in that expansion, diminish the worth of the superficial, irrelevant, and maladaptive. But we have to learn that this ability can be ours only with some effort, and few cultures seem eager to teach this.

Social
    Peer pressure is no less an important concern here than it is for the kids. The context changes, now with greater concerns for the workplace society, secure employment, loss of collateral, and access to community and social resources. With modifications for socioeconomic status, the pressures run toward conformity to the standards of acceptable behavior for the groups to which we belong. We also have needs to be held in a certain minimum of regard by our fellows, and this usually requires the maintenance of a reputation against the wearing forces of rumor and gossip, or in some cases, the truth about our darker deeds and fetishes.
    The economic sector of the culture is able to manipulate a citizen’s sense of insecurity over status and long-term economic prospects. Where jobs are not available in sufficient variety and abundance, it can be a terrifying prospect to lose one’s present employment, and this, especially when manipulated by others, ramps up the pressures to conform and obey. Our choices for metacognitive executive function are thus often hostage to the degree of our insecurity, whether manufactured or not. But we also have internal guides that keep our behavior in check or on track. Buddha called them hiri and otappa. Hiri is a sense of moral shame, driven by a need for self-respect and dignity. Otappa is an ethical wariness, a more outward-looking regard for the consequences of our actions. And of course there’s the simple idea of conscience, which some will assert is little more than an internalized set of social mores that now speaks to us in the first person. This may in fact be a correct assertion in most cases. But there’s also the conscience that answers to higher purpose, that Gandhi explained as he developed his notion of Satyagraha, which may run completely opposed to the current social mores. With all of these concerns for our need to fit in the larger socioeconomic context, any metacognitive decisions will first require choice, as of occupation, place of residence, lifestyle, and standard of living. That in turn usually means comparing options with flexible sets of values, decisions about what’s really important, and how much of an investment in time and energy each option merits. It may demand a revelation that time is worth more than money, or scenery more than success, or affection more than flash.
    Decisions abound over how mannerly, diplomatic, tactful, or obedient to be, and why. But choices that threaten to compromise ourselves and our values are sometimes opportunities to grow, and not threats at all. Sometimes it takes a metacognitive stance to step out of ourselves and see things from another point of view: audi alteram partem, listen to the other side, grant credit where due, and concede the good points of opposite views. We can try to put an opposite point of view into our own words and see how it sounds then. Even if you’re in it to change minds, this is done more effectively when defenses are down, not up. It often helps to draw a more inclusive circle around a pair of combatants, to name a higher level of relatedness. Diplomacy demands respect, and re-spect means “look again.” It isn’t always such an easy thing to do. Joseph Joubert offers, “To be capable of respect is today almost as rare as to be worthy of it.”
    Social ridicule from outside a group is often the handiest tool to unseat the toxic schema seated by peer pressure. But we are now seeing an interesting, if frightening, phenomenon developing in America as the unofficial Christian hegemony is increasingly challenged by science. As soon as they were able to do so without social ostracism and persecution, anti-religious forces began trying to put cracks in the walls of true belief, using shame and ridicule for the stupidity of Bronze Age belief relative to the modern age and scientific evidence. But faith only hardened against that assault, and became even more a point of pride, specifically as the opposite of evidence. Then a stronger faith became the only acceptable response to the challenges of hard evidence. Now churches proudly display signs like “The more educated we become, the further we move away from God,” “If your faith is big enough, facts don’t count,” “Faith is believing in something when common sense tells you not to,” and “A free thinker is Satan’s slave.” Sadly, however, science is taking some hits too, as the journalists begin to phrase its tenets with increasingly religious terminology, fervor, and illogic. A real scientist does not believe in evolution, even if he really, really likes the theory.

Cultural
    We might be stuck forever with some version of us-vs-them, and simply be left to refine the process in ways that do us progressively less damage. There are likely deep and inherited elements of our reactions to out-groups. We can plainly see intergroup and territorial rivalries throughout the primate order. We hominins have had many hundreds of millennia to accommodate some sensible behaviors that allowed for peaceful exchanges of mates and trade goods between groups, so we do have an adaptive head start at overcoming our intergroup enmity. But it seems we do have to work at this. Even in the relatively enlightened, our prejudicial perceptions can be both subliminal and effective. This often happens in the amygdala, reacting with fear and anxiety, and potentially aggression. Further, these subtle reactions can be manipulated by politicians, preachers, and advertisers, as long as they know which buttons to push.
    Help can come cognitively by expanding our frames of inclusivity. We are still a very long way from outgrowing our juvenile playpens of party, nation, religious affiliation, race, class, caste, status, ability, age, sex, and sexual orientation, even when we mouth inclusionary words about all of us being human beings. The Native Americans, despite having had a lot of intertribal warfare, and each tribe mythologizing itself as the people of origin, still have some great ideas worth appropriating culturally. The best of these is probably the Sioux Mitakuye Oyasin, all my relations, a mantra-like acknowledgement that all beings, two-legged, four-legged, and rooted, are related in a single family. Putting things into a much larger frame tends to erase our un-kind perceptions of differences, just as Earth, shrunk to the size of a billiard ball, would be as smooth as one. We are, at bottom, more Terran than American or African, and we’re more primate than human. It’s often said, with much likely truth, that it would take a threat from outer space to pull our childish species into facing all in the same direction.
    Putting others conceptually into out-groups often involves some level of depersonalization. They become generalized, homogenized, or stereotyped, too often in terms either of what we are not, or the things that disgust us. The best method for rising above this is travel, crossing the great stream as the Book of Changes says, thereby personalizing increasing numbers of them as individuals with non-stereotypical traits. This is also needed to get us out of our human exceptionalist rut, getting to know other species, besides livestock and pets, as sentient individual beings. The question still remains whether we must (or will always) have others to take stands against, since our evolution inclines us that way. This brings us back to the subject of attribution, since the logical stand for a unified humanity to take, if we had to make one, would be against those of malicious, toxic, or evil character. That brings us around to the question of whether we are all redeemable, given the right social and cultural influences. There’s a wide philosophical divide there. Some say “God didn’t make no junk, and all of us are His children” and others say “Evolution makes plenty of junk, and this should be selected against, and composted.”
    Resisting pressures to ideological conformity is another big metacognitive task. Zimbardo identifies two main types of conformity, sought in uncertain and ambiguous situations: informational and normative. In the first, we reach for common terminologies and models for shared understanding, agreement, or even consensus. In the second, we seek to feel a part of something larger than ourselves by being a normally functioning part of a group. We find safety in numbers and seek approval for being good at the normative roles we play. Even if conscience bids otherwise, we may come to see divergence from these norms as deviant, and in need of suppression or even open repudiation. It takes a real effort, and even a reappraisal of the very worth of belonging, to resist these normative pressures to conform. Zimbardo also advises vigilance towards “foot-in-the-door” techniques that rely on agreements to small initial requests that lead to acceptance of larger ones later on. Granting quarter is an alternate understanding for this, and the Trojan horse is a common metaphor.
    Resisting efforts at cultural persuasion also requires some well-maintained vigilance. Despite the wisdom that seeks a measure of peace and equanimity in life, it often helps to feel just a little insulted when someone tries to sell us a bad or maladaptive idea, or even a product we don’t need. Robert Cialdini (2006) identifies seven key principles of influence, and we may use hints of their use to prompt us in our vigilance: 1) reciprocity, grant a small something to incur a sense of obligation; 2) consistency of commitment, get some words spoken, even if general and tentative; 3) social proof, point to the normative and the comfort of conformity; 4) authority, appeal to a chain of command and a duty to obey accepted order; 5) liking, use of a friendly, charming, or charismatic spokesmodel or salesperson; 6) scarcity, emphasize how special a person will become by adopting or buying this; and 7) unity, appeal to the sense of belonging to groups that adopt or buy this.
    Anthony Pratkanis (2011) identifies four tool chests of techniques used by those in the business of persuasion, and recommends equipping ourselves with both defensive and offensive tactics to use in resisting their efforts. 1) Landscaping forms the pre-persuasion set of tools used to prepare a subject, including setting the vocabulary, stage, agenda, expectations, information flow, and frame. 2) Social relationship tactics rely on social rules and credibilities, like trusted sources, authority figures, role models, status, specialness, identity, and belonging. 3) Effective communication skills rely on logical plausibility (valid or not), rhetorical devices, appeal to cognitive biases, priming, and repetition. 4) Emotional tactics rely on predictable affective associations to positive and negative triggers, especially such emotions as pride, belonging, empathy, appetite, and specialness or rarity on the plus side, and insecurity, intolerance, fear, embarrassment, guilt, shame, and regret on the minus. Defensive tactics might include vigilance to detect propaganda, playing devil’s advocate, debunking, or getting indignant and insulted. Offensive or proactive tactics might include familiarization with anticognitive terms and forms, vetting source credibility, getting second opinions, reframing, examining alternatives, knowing our vulnerabilities, watching for repetition, watching for repetition, see what I did there, and monitoring our own emotional responses with a little suspicion.

Linguistic
    Metacognition in the linguistic domain comes down to cultural and literal literacy, semantics, logic, versatility in our choice of lexicons, articulation, and familiarization with available ideas. Part One discusses most of these in some detail. Chapter 3.1, Developing Media Savvy and the Smell Test, offers some toolkits for vetting messages cast in the printed word. Chapter 3.7, Logical Fallacies, offers a fairly comprehensive enumeration of these buggers to watch out for. These don’t need to appear again or be introduced here.
    Two common sources of error in the media are false causal inference and teleological assumptions. Journalism and science reporting are not exempt. The causal problems arise in part because the linear sequencing of sentences makes it much too easy to connect simultaneous events in misleading causal chains, and because grammar’s demand for certain parts of speech makes us parse events incorrectly. Lightning and consciousness are not its. Inserting causes out of habits of vocabulary and grammar is common in scientific journalism and even more so in the marketplace. Did the brain evolve for problem-solving, or did the brain evolve to solve problems? There is no to, or for, and no purpose to evolution, and avoiding them sometimes requires us to contort our phrasing into less familiar or more complicated forms: Mutations that contributed to better problem-solving tended to be conserved when they proved adaptive. General semantics and E-prime are examples of metacognitive efforts to address these problems. This observation has broader uses in alerting us to watch our use of words more carefully.
   
Work in the Metacognitive Domain
    It shouldn’t be surprising (or disappointing) that sorcery and shamanism get mentioned so many times in this chapter. But I should clarify that we’re not speaking of the expensive, smoke-and-feathers, new-age shamanism here. I mean the “breaking open the head” kind. The same goes for the two forms of Buddhism, Theravada and Chan, that are mistakenly thought of as religions. The very definition of this domain is thinking outside the box, and with the intention to effect change in either ourselves or in the world. This demands the exceptional, the non-normative, the transformative experience, epiphany, and sometimes ecstasy. The self and its conceits are often perceived as being in the way of dynamic interconnectedness and the unitive experience. This suggests that the new age and self-help movements in modern culture are not outside the box or metacognitive at all: they’re more correctly narcissistic and solipsistic (your mind is the creator of all things). If transcendence is what we truly want, then self ought not be our focus: that’s what we want to get over. Whether we’re speaking of samvega, fanaa, samadhi, satori, or nibbana, we are connecting and interconnecting to a world that makes the self seem just plain silly in scale, duration, and importance, and we become little more than places in time where energy and information get knotted together for a while. Of course, it’s also foolish to get overly attached to the ecstatic and seek a spiritual life made up entirely of special experiences. But there is a lot to be learned by getting over ourselves. Such cognitive shifts will offer us major reframing opportunities with strong affective components. These are life-changing events, powerful enough to affect and reprogram even our core beliefs, comparable in ways to the astronaut seeing the whole Earth for the first time. You aren’t the same after that.
    Buddha was 25 centuries ahead on this cognitive hygiene learning curve, and it really isn’t that surprising that his work still attracts the interest of both neuroscientists and skeptics. His Noble Eightfold Path, or Ariya Atthangika Magga, charts one of the ways we can awaken, but in a gradual way that still demands a life’s work, with vigilance, heedfulness, and diligence. Salvation is neither given nor easy. And it should probably be acknowledged here that Buddhism’s monasteries are still a lot more productive of disciples than they are of Buddhas. What can you do? Believing is hard to transcend. The proper term for Buddhism is Dhamma-Vinaya, doctrine and discipline. It’s non-theistic or a-theistic, and most explicitly agnostic. It even explicitly denies the concept of a soul, and therefore reincarnation, which may still surprise even Buddhists. The sense of self, which has only the transient reality of qualia, emerges out of the contents of consciousness and goes away again with sleep. Theravada and Chan (or Zen) are the forms least entangled with mythology, ritual, and speculation, and are freer of questionable doctrine than the other branches. Buddha’s charge to those on this path was straightforward: “You should train thus: We shall be wise men, we shall be enquirers” (MN 114). 
    Samma Sankappa, or Right Intention, part two of the Eightfold Path, is the deliberate substitution or replacement of harmful subjective states with their opposites, particularly craving, aversion, and the intention to do harm. Some effective replacements are the Brahmaviharas, the abodes of Brahma, which include metta, loving-kindness; karuna, compassion; mudita, sympathetic joy in the success of others; and upekkha, equanimity. I would make bold to add four of his other virtues or values: khama, forgiveness; katannuta, gratitude or thankfulness; garava, reverence or deep respect; and khanti, patience. Neither reverence nor gratitude here requires an object or presupposes a deity. The idea that we can simply choose to have these emotions in place of others, that we can select them as alternatives to their opposites, is the very definition of metacognitive as the word is used here. Emotions can be actions and not just reactions. This can be tested by having them. This does not seem to be standard operating software, however. It needs to be learned and practiced.
    A crucial stage in Buddha’s own awakening was samvega, which might be likened to the science fiction trope of looking simultaneously at alternative timelines and being deeply impressed by the superiority of one over the other, such that taking the lesser path becomes unthinkable. The experience is not simply one of shock or horror in seeing things gone wrong: it also reveals the way out, and offers the urgency and motivation needed to take it. This is an intense experience, not a casual one, not an intellectual one, and is most often had in an altered state. But not always: when an addict hits bottom, the two paths are life and death. The clarity and emotional intensity of this choice make it metacognitive, even from down in the gutter.
    Following Samma Sati, or Right Mindfulness, discussed above, the eighth step on the path is Samma Samadhi, Right Concentration. Where mindfulness examines the specific objects of consciousness, concentration works with what contains those objects. It has two primary practices or paths of cultivation (bhavanas). Samatha Bhavana is the movement by meditation into altered states of mind by way of themed mental concentration. The themes are called Jhanas, phonetically and etymologically related to the words Dhyana, Chan, and Zen. Essentially these are reframing writ large: willed visionary, mystical, and unitive experiences that function as mental stretching exercises. The end, if this can be called goal-oriented, is tranquility or serenity (samatha). While this practice will explore states with names like “infinite consciousness,” the reification of such states as metaphysical realities is absolutely not the point. The better-known Vipassana Bhavana begins with concentration on the breath, while letting the objects of consciousness, the sensations, memories, thoughts, and feelings, come and go, observing them well and closely in passing, but not grasping at them in any way. They arise, get understood, and depart at their own pace. It’s a perfect exercise for working with resentments as discussed earlier. As memories come and go, they get bathed in a new and more peaceful light and affect. Insight deconditions and reconditions with the help of neuroplasticity.
    Beginner’s Mind (chūxīn) may bring us into a state the French call jamais vu, the opposite of déjà vu. You know you’ve been here before, but it feels like the very first time. The song you’ve heard a hundred times before suddenly becomes twice as rich and intricate. It’s like the reset button has been hit. Or you see the back of your hand for the very first time. This state is attainable by way of certain therapies and mindfulness practice, but tell me you know where else this is going. The Fifth Precept in Buddhism calls for “restraint from using wine, liquor or intoxicants(,) which result in heedlessness or negligence (pamada) of the mind or emotions.” That optional comma points out an ambiguity. The comma is there in the monasteries, so that any intoxicant is ruled out. Others will admit certain intoxicants which don’t contribute to heedlessness. Tea and coffee are found on one level of this category, and elucidogens on quite another.

Elucidogens
    There is nothing especially wrong with the word “psychedelic,” although it sometimes connotes bad art and rock concerts. The word simply means something that manifests or reveals the mind or psyche, and it’s usually just glossed as meaning “mind-expanding.”
    “Elucidogens” is a neologism we ought to be introducing to the world, a good substitute for “hallucinogens,” which implies that you are encountering things that aren’t there. This is derogatory, and it serves the prohibitionist’s interests by conjuring images of danger. The word also replaces “entheogens,” which carries the root theo, for deity. (I’d love to take credit for the word, but I found it in one other place, coined by an anonymous author.) Gods are absolutely not a necessary component of the elucidogenic experience, despite the frequent experience of reverence and gratitude. The reverence and sense of sacredness, even of impersonal divinity, do not require us to project our myths or adolescent fantasies and expectations onto them. This category of “spirit medicine” is more ancient than human culture, and its history has long been confused and deliberately encoded or obscured by social and political reactions to threats from its effects, and by efforts at secrecy to avoid both persecution and abuse. Terence McKenna, who was always outspoken on the subject, offered, “Psychedelics are illegal not because a loving government is concerned that you may jump out of a third story window. Psychedelics are illegal because they dissolve opinion structures and culturally laid down models of behavior and information processing. They open you up to the possibility that everything you know is wrong.” Grasping that possibility will help a great deal if we’re ever going to set things right.
    Governments came down hard on the elucidogens in the 60s and early 70s, once their use had started a whole generation questioning the Vietnam war effort, the need for conspicuous consumption, and the honesty of government. Every propaganda trick was brought into play, and both the voters and the legislators ate it up. This effectively ended a very promising start to their use in mental health therapy. Further research was banned, so that all subsequent reports could be dismissed as anecdotal and unscientific. Bogus experiments, suggesting a link to chromosome damage, were widely circulated. At one point, some ill-informed journalist or researcher thought it would be useful to characterize the experience of these “hallucinogens” as psychotomimetic, mimicking psychosis, and that label stuck in the culture, enthusiastically supported by a government that desperately needed to suppress substances that seemed to be encouraging rebellion. The comparison was never supported clinically. And yet it persists, because dumb gets around much better than smart in this culture. Fortunately, these substances seem to be making a comeback, first under constitutional religious rights protections, and more cautiously with scientific and medical support. Even with mental health justifications, though, elucidogens won’t have arrived until we have legal protocols for their use in exploring positive psychology, because, frankly, the human norm that’s the current standard of mental health really isn’t a thing worth striving to attain.
    In their physical effect, elucidogens overconnect the brain with a flood of neurotransmitters, from the sensorimotor regions on up to the highest functions, and new neural connections are formed in the process. It’s the opposite of short-circuiting. This is supported by recent neurological accounts. Calvin Ly (2018) claimed, “The ability to promote both structural and functional plasticity in the prefrontal cortex has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function.” Wild allegations of permanent brain damage might soon be replaced with peer-reviewed allegations of permanent brain repair. And we get some of our beginner’s mind back. Hearing music in these states (jamais vu) also helps to alert us to the limited attention we pay to the rest of the world around us. We make mental connections we might not otherwise make. Our emotions are heightened when new perceptions and ideas rise into our awareness, so it’s easy to start to favor these over the old perceptions and ideas. Doors and windows open that we didn’t even know were there. Old memories, and especially resentments, arise into this altered state and get altered themselves. The samvega experience is common here, especially with addicts, and the alternative paths and timelines become a lot clearer. This is why elucidogens are so effective in treating addiction. We need something akin to an outside perspective, an altered state, a dream body, for looking at our consensual reality. Otherwise it seldom gets questioned, and this is to our detriment.
    The usefulness and strength of these tools or sacraments in reprogramming our own minds, both cognitively and affectively, is beyond doubt for anybody with the experience, or who has somehow passed through the cultural denial. Currently there’s a lot of medical research being done with substance-assisted self-modification, especially Ecstasy for PTSD, ibogaine for heroin addiction, and peyote, teonanacatl, and ayahuasca as more general treatments for addictive and behavioral disorders and depression. See www.maps.org and erowid.org for the most reliable current information. How these substances work in mental reprogramming also flies right in the face of those who promote consciousness and information theory as primarily electronic phenomena analogous to computer systems. The forgotten side, where the feelings and emotions come into play, is the neurochemistry. Things like meaning, value, appreciation, forgiveness, and sacredness may not compute to cyberneticists, but they’re at least as vital to reprogramming as any electronic information. And their basis is chemical information. It seems clear that a sine qua non for effective self-reprogramming is radically altering our feelings and emotional states, and not just getting the right idea and having that change things with the skillful application of reason.
    I really don’t give a rat’s ass about disclaiming advice to use entheogens. As far as I’m concerned, our rights to our own minds and our own lives are absolute, and the responsibility to be careful is a part of that. The only advice I would give is to set the experience up in order to minimize the chance of ugly surprises, like accidentally lighting your house on fire, crashing your car, or getting arrested. And this: if you run into anything dwelling in your own mind that scares or horrifies you, don’t run. There will be nowhere to hide from that anyway, and that should be known going in. You just have to stay calm and learn what it has to teach. Digest it. In fact, many of the problems we carry around are there because we won’t face them with anything like courage. Dr. Gabor Mate suggests that “trauma is not caused by the extremely painful experience itself but rather by our dissociation from that part of ourself that had to bear that experience. When I came across the studies about psychedelics, it was very remarkable to me to see how patients under the effect of psychedelics were able to reclaim and integrate their most traumatic experiences and how that led to a powerful healing process.”



Part Three:

3.0 - Toolkits and Anticognitives

by Category and Domain

    Part Three began as Appendices, but that label implies afterthoughts, and these chapters are much more than that. In some ways they are the meat of the work, or at least some of the choicer cuts.
    The access to memory that lexemes provide has been discussed several times: how names for schemas, scripts, affective states, and cognitive entities provide handles for their recall and management. This has also been discussed as naming the demons, done for the sake of getting some command over them. And the topic came up briefly under Buddhist Right Mindfulness, meditations on mental objects, on feelings or sensations, on activities of the mind and mental processes, and on objects of thought.
    Chapter 3.1, Media Savvy and the Smell Test, differs from the remaining chapters as it details an artificial heuristic, a template for use in vetting media inputs. If you include education and its textbooks, media accounts for most of what modern culture has to teach us. I’ve taken John McManus’s SMELL Test for the basic outline of this heuristic and expanded its content to incorporate a couple of other systems, and added a large number of typical questions within that framework that might be posed to any candidates applying to be our new knowledge.
    Chapter 3.2, Evolved Heuristics and Processes, outlines a large number of our native cognitive and affective processes, with emphasis on those which might let us down in our efforts to discover the true. While the final four chapters will sort these processes by the domain they fit within, this outline sorts these mental functions according to what domain they underlie and may or may not support. Taking some issue with Tversky and Kahneman, I treat these anticognitives as qualitatively different from cognitive biases. There’s no real motive or reason behind their failures, only the explanation that they are general processes conserved by evolution as general problem-solving strategies, without task-specific applications, and adapted to life in a simpler world than the one we live in now. They fail, but there is no motive to fail.
    Chapter 3.3, Emotions and Affective States, is an attempt to develop a finer granularity or articulation of affective states, towards improved management and understanding. The term emotional self-control has too many negative connotations to be of much use here, as it too often implies suppression and repression. Here, we’re working on improving or upgrading our feelings and emotions and keeping them from messing up our perspectives. Lisa Feldman Barrett writes, “Emotional intelligence, therefore, requires a brain that can use prediction to manufacture a large, flexible array of different emotions. If you’re in a tricky situation that has called for emotion in the past, your brain will oblige by constructing the emotion that works best. You will be more effective if your brain has many options to choose from.” This articulation, or finer granularity, is more about enriching our access to feelings and emotions than it is about overthinking them.
    The remaining chapters itemize specific mental processes as lexicons, with some annotation, some explanation, and some examples where they appeared to be required.



3.1 - Media Savvy and the Smell Test

Garbage In Garbage Out, Some Filters of Baloney and Craap,
Source, Motivation, Evidence, Logic, Lacunae

Garbage In, Garbage Out
    It’s a lot easier to not learn than to unlearn, but with all there is that’s worth learning, there’s a conundrum in deciding what not to learn, especially since we don’t know that much about it yet. For this we will need prejudgment, or prejudice, which is itself problematic. Most of our information comes from others through the media, and much of this has been crafted in some way to convince us of something. The various kinds of pages and airwaves are loaded with misinformation, disinformation, and vapid information that isn’t worth getting to know, just ghafla, distraction, and gossip. Pure entertainment aside, anyone who wants to be careful about what comes to live in their mind will want some sort of toolkit for weighing the worth of new input, to avoid loading up with error and noise that may need to be unlearned later on. Plays to the emotions need to be watched the most closely. Novel information and news are more exciting to see and hear, even when less likely to be true. We’re wired to learn things that excite us. Being in the know, and among the first to know, holds the promise of elevated status. And just about every emotion, pleasant or not, can be used as a trigger or data delivery device.
    We don’t want to take skepticism too far. While there are good reasons to hold to a quantum of conservatism, as science does so well, we also don’t want to stop new and improved ideas from propagating through the culture. It’s to our advantage to get better glimpses of new ideas than cynical, knee-jerk dismissals will allow us. Some degree of suspension of disbelief can be as important as its counterpart. When pernicious and toxic memes threaten to spread unchecked through a system, going viral, calling to faith or credulity rather than proof of viability, these can be subjected to some immune system analogs and questioned at the gate before they get in. But the analogs also have their own autoimmune dysfunctions. It's not good when they run amok, as in (modern) cynicism. The body’s own, non-metaphorical immune system functions by recognition of certain molecular configurations on the surface of foreign bodies, like locks and keys. Cognitively speaking, these are analogs of our front line tools that are set to recognize and manage baits, traps, ploys, tricks, triggers, primings, and specious reasoning.
    There are reasons to be judgmental wherever bad judgment can be avoided. While equanimity or acceptance may well be one of the mental states most worth achieving, it might still be a good idea to let ourselves feel just a little insulted, offended, or disgusted by attempts made to misinform us, especially propaganda and advertising, given their subliminal effectiveness. You pay for any advertising that you fall for, since this is included in the higher prices of name-brand products. An explicit defensiveness or closed-mindedness, and an assertion of some cognitive dignity, may sometimes be called for. But obviously, we can’t entirely trust our own intuition or gut feelings about the worth of new input, and our home database is the home base of apperceptive mass and cognitive inertia. We have sources of resistance and denial living deep inside us, far out of reach of conscious access, particularly cognitive biases, coping strategies, defense mechanisms, and the subtler linguistic traps known as logical fallacies. We benefit by getting to know these, and by learning to recognize them whenever they make themselves known. These toolkits are a big part of an overall cognitive hygiene maintenance and repair shop.
    One question for us here is, “Can we ever really be objective?” We have another extreme false dichotomy here, with relativism and postmodernism versus objectivism and naive realism. Obviously, points of view, frames of reference, perspectives, prior landscaping, and any stipulated universes of discourse will all alter or shape perceptions and the ideas formed from them. But there is no slippery slope here into deconstructionism or existential angst. We can usually find something of the true, if not the truth. A couple of points of view on a subject are usually enough to confirm that there is some reality out there being perceived, even when we can’t grasp it in other than human terms. And clearly, large portions of reality will remain invisible to us until we can work our way around our own anticognitives. Can we learn to intuit that something is wrong using something other than these?
    The masses are notoriously unwilling to be better informed (et populus vult decipi), so media gradually adapts itself to approaches that work in practice, often amounting to using small bits of loaded and slanted information. Sadly, the asymptote we move toward here seems to be Orwellian sound bites and buzzwords. With the corporatization of media, both the quality and diversity of available material diminish as sources coalesce or agglomerate, taking advantage of greater profits to be had in economies of scale. The number of field operatives, investigators, researchers, reporters, and writers diminishes as cultural data is syndicated. Quality information is expensive to collect, so it must be made to pay. Sponsors gain more control of what’s being said and what’s left unrevealed. The public’s attention is commodified as advertising revenue. Real change is effected by a literate or otherwise involved few who know how to use these approaches.
    We’ve witnessed the increased effectiveness of multimedia, simultaneous graphics, motion pictures, sound, and sometimes even smell. Subliminal suggestion and priming have become technologies. The sensory array, the entire umwelt, is hit with greater immersive impact. News media outlets, and seemingly their consumers, are hooked on spectacle, bread and circuses, the short-term crisis du jour, blown far out of proportion to the remainder of world events. Confidence games play to cognitive biases as magicians play to perceptual biases. As the government licenses the airwaves to increasingly commercialized entities, it becomes more important to check out sources that are getting by on no-strings subsidies and nonprofit grants. Economically unbiased news needs hands-off funding from outside. NPR is an American example, a perpetual target of conservative budget cuts, although it does lean somewhat left, perhaps in trying to be a counterforce or counterweight to the establishment media.
    Commercialization of the news is a growing concern now, particularly with the sources merging into smaller numbers. News needs to draw attention, get high ratings, attract sponsors, and keep the readers and viewers coming back. Corporations can’t simply vow to be fair and balanced if they’re also required by law to maximize their shareholders’ profits. You know how those chips fall. John McManus argues that media should have some bias, towards the common good, principled persuasion, brevity, and relevance. So, as we’ve seen, should education. But advertisers want to appeal to true believers in the system, not to the eclectics, agnostics and anarchists. They don’t want to undermine consumer confidence by portraying dismal outlooks, intractable problems, or the urgent need for progressive revolution. The Ponzi growth economy is supported unquestioningly, no matter that it’s unsustainable. The constrictive forces working against widely varied interpretations of news are still restrained by rights of free speech. And reporters still have reputations to protect, or at least they still have reasons to worry about potential slander and libel suits. That contributes something to news credibility.
    Media is being diced into unlimited interest niches, the numerous choirs to be preached to, and consumers of cultural input can select what pleases them most, and tune out the rest, and live in the echo chambers of their choice. We can now choose the news, and assemble news programming that has what we wish to see. Gone are the days when we all had the same 6 o’clock news to watch, or a choice between only two local newspapers. With modern niche programming, social factions are getting increasingly immersed in their own separate worlds, and learning less of the lingo and opinions of others. There are self-reinforcing or positive feedback loops in target audience selection, so that memberships only grow increasingly confirmed. This is clearly a strong, disruptive counterforce to the noosphere and global village that de Chardin and McLuhan predicted, but it’s really hard to say at this point in history which one will prove to be worse. Neither shows much regard for intelligent decisions made by sovereign, individual minds in human-scale communities. We ourselves will isolate what we want to see and add our personal meaning. On the other hand, we can now consult multiple sources, some intentionally spaced wide apart, to gain some stereopsis and perspective, but how many of us actually choose to do this?

Some Filters of Baloney and Craap [sic]
    In their 1988 book, Manufacturing Consent, Edward S. Herman and Noam Chomsky proposed five filters of editorial bias to look at: 1) How big is the medium, who owns it, and where is the profit? 2) Does advertising provide the money to get on the air, a de facto licensing authority? 3) What are the sources of mass media news, and who provides or subsidizes it? 4) How is material adapted to avoid flak, damaging criticism, or lawsuits? And 5) Are there social control triggers, mechanisms, and bogeymen, like communists, terrorists, drug pushers, or outsiders in general, being used?
    Just as writers and journalists have their mnemonic aids (like 5WHQ: who, what, where, when, why, how, more questions?), there are tricks for reading the works that they print. There are some handy, beginner-level cognitive toolkits for first-screening or vetting new cultural input. Maybe the best known is Carl Sagan’s “Baloney Detection Kit,” from The Demon-Haunted World (see links). This is a much, much briefer set of ideas than those developed here, but it can be a good place to start, since it lists some of the most common traps to watch out for.
    Another template is the CRAAP Test, from California State University’s Meriam Library. CRAAP (see links) is a mnemonic for five things to look for in a presentation: Currency, Relevance, Authority, Accuracy, and Purpose. These five criteria can be assimilated into a differently arranged, 5-category heuristic toolkit called the SMELL Test. It was developed by John McManus in Detecting Bull: How To Identify Biased, Fake and Junk Journalism in the Digital Age. This is a handbook worth reading, and it’s not a difficult read. It’s maybe AP secondary to undergraduate level. The SMELL acronym is the organizational structure that I’m borrowing to use below, but with much-expanded content. The key words are Source, Motivation, Evidence, Logic, and Lacunae (his word is Left Out). There’s a link at the end with his own summary, just so there’s no confusion with the elaborations I’m making below. So far, I've yet to see a FISHY template, but there's still room for one.
    In addition to merging some of the five CRAAP test criteria with this SMELL mnemonic, this system of judgmentalness or judiciousness also works rather nicely with a far older, more highly developed system of judgment. Reason suggests that an ideal system of justice would simply involve both sides telling the truth in mediation or binding arbitration. The adversarial system of justice, with its endless games and overpriced, mealy-mouthed advocates arguing their exaggerated half-truths, nevertheless has something to offer critical mindfulness. For all of the flaws in the world’s justice systems, there are some useful takeaways, especially in rules of evidence or evidentiary procedure. Since cases are usually built inductively, against reasonable doubt, many of the rules dovetail with the principles of informal logic. Some facts affirmed by deduction may be entered without evidence, as with judicial notice. Most of the rules can be applied analogously here, but there are some exceptions, like suppression of evidence, or letting some things go on unrelated technicalities. We don’t need to let slippery things slither away through loopholes. Generally speaking, the SMELL Test acronym is also able to incorporate useful ideas about testimony, witnesses on both sides, reliability of evidence, soundness of argument, and the overall weight and sufficiency of evidence.
    Of course, you get to be the judge, and the rub may be in what evidence you yourself choose to admit or disallow, and in the different self-schemas you choose to seat on your own internal jury. And are you prepared for that voir dire? It’s up to us to boil it all down to what the takeaway or verdict will be, the core truth, gestalt, gist, relevance, or value, to decide what’s germane about it. It’s up to us to decide what duty we have, to ourselves or culture, to get it right. How much do we fear embarrassment or mockery at getting it wrong? Are we OK with spreading gossip, rumor and unvetted stories? Is news really all about having something to yak about with our friends? Is it morbid curiosity? Would we rather be popular than right? Is there a civic duty as a citizen and a voter? How much self-improvement do we need and how much could we use? And the following are some of the questions we might ask of the information we’re being asked to accept. They are in no particular order, just like life comes at us.

Source
“Who is providing this information?”
“How big is the medium, who owns it, and where is the profit?”
“Does advertising provide the money, a de facto licensing authority?”
Are the sources qualified as sources?
Can the information stand alone without this source’s support?
What reasons are there to trust the sources or think them suspicious?
Are the source’s own sources cited where assertions are not original?
Is the medium pandering to corporate owners or advertisers?
Is the medium pandering to target demographics?
What biases or slants are sources likely to have?
Who paid for this expertise?
Is the source a press release from a government spokesperson or PR firm?
Are official and public relations sources likely to be spun?
Are confirming second opinions or peer reviews offered?
Should anonymous sources be heard?
How much weight should simple credentials be given?
Are credentials even necessary here? Where were da Vinci’s degrees?
What’s the guy called who almost didn’t make it through med school?
Is this celebrity speaking only from celebrity, or from conscience?
Are sensationalist or fallacious headlines and hooks untrustworthy?
Are spokespersons screened or censored to toe a particular line?
Are sources self-censored out of fear of reprisal?
Will reporters self-censor to maintain access, sources, and funding?
What might sources pay for privileged access?
Are this source’s apologies and retractions well-buried in their material?
Is the news itself a word from our sponsors?
Justice: Is the source transparent or redacted?
Justice: What is the likelihood of perjury in this testimony?
Justice: Are witnesses competent?
Justice: Is expert testimony truly expert and unimpeachable?
Justice: Is expert testimony fully within that field?
Justice: Should expert testimony remain a third-hand opinion?
Justice: Is witness testimony subject to point of view and perspective?
Justice: Is cross-examination adequate to fill in holes?
Justice: How does cross-examination shift points of view and frames?
Craap: Do the authors/sponsors make their intentions or purpose clear?
Craap: Does a URL (like .edu or .com) reveal anything about the source?
Craap: Are there spelling or grammar errors, suggesting carelessness?

Motivation
“Why are they telling me this?”
Are social control triggers, mechanisms, or bogeymen being used?
Are value-laden words, buzzwords, bafflegab, and stereotypes in play?
What is the agenda here? Might it run deeper than the one declared?
Who benefits from this and who might be harmed?
Should we also ask a more disinterested party?
Should we ask for proof of disinterest?
What might be the role of cognitive bias?
Are there social cues that warrant suspicion?
Are sources dispassionate or motivated?
Are conflicts of interest present?
Are sources disinterested, except in objectivity (like consumer reports)?
Is the purpose to entertain, sell copy, suggest, inform, narrate, or disclose?
Is the purpose to persuade, convert, sell a candidate, or sell a product?
Is the purpose antagonistic (exposé, whistleblowing, or watchdogging)?
Is this a credo or manifesto for a cause?
Are premises, propositions, and arguments loaded with emotional appeal?
What am I being promised here?
Are egos and self images being played to?
Is this seeking to undermine my sense of security?
Is my fragile self-schema or self-image being played to?
Is my need to belong to a social group being played to?
Might subliminal methods be in use? Hidden advertising?
How much editing is done to boost ratings or please investors?
How much editing is done to target a market demographic?
Is a response, action, or behavior being called for?
If you follow the money, what will you find?
What are the funding sources for this research?
Is a hypocritical diversionary tactic a possibility?
Do reporters embedded on one side of a conflict owe that side a debt?
Justice: Is the prosecution malicious? Prejudiced or biased?
Justice: Are witnesses hostile?
Justice: Are character witnesses relevant?
Justice: Do witnesses have a piece of the puzzle or the full experience?
Craap: Is the information fact, opinion, or propaganda?
Craap: Does the point of view appear objective and impartial?
Craap: Are there ideological, cultural, institutional, or personal biases?

Evidence
“What evidence is provided for generalizations?”
Is all necessary evidence provided? Is this sufficient?
Is testimony first, second, or third hand (hearsay)?
Have particulars in evidence been substantiated, tested, or vetted?
Are the sample sizes adequate?
Are the sample sizes representative?
Do claims appeal to our background biases?
Is seeing believing? Did the Statue of Liberty disappear?
How likely is this to be opinion or interpretation rather than fact?
Are routes provided or available for independent verification?
Are claims and results replicable, and have they been replicated?
What is the method by which evidence is gathered?
Are questions in polls and surveys slyly worded?
Were leading questions asked?
Do the metrics in polls and surveys reflect a bias?
How firmly are assertions and premises worded?
Justice: Are witnesses reliable? Biased? Hostile?
Justice: How much is hearsay and second hand report worth?
Justice: What is the custodial chain of evidence?
Justice: Might evidence have picked up some herpes?
Justice: What evidence might be dismissed as irrelevant?
Justice: How much evidence is circumstantial?
Justice: Is the burden of proof on the one asserting or accusing?
Justice: Is there a presumption of innocence and benefit of doubt?
Justice: Is exculpatory evidence weightier than inculpatory?
Justice: Is cumulative weight or preponderance sufficient?
Justice: Is evidence anecdotal or cherry-picked?
Justice: What constitutes reasonable doubt?
Justice: Should excited utterance be admitted into evidence?
Craap: Is the information current and currently relevant?

Logic
“Do the facts logically compel the conclusions?”
Is specious logic in play in the argument?
Do violations of logical principles appear to void the argument?
Do violations of logical principles simply fail to support it?
Do the arguments cohere?
Are claims both verifiable and falsifiable?
Are the premises true?
Is the proper vocabulary in use?
Are different vocabularies or lexicons being mixed here?
Does the conclusion necessarily follow?
Is the conclusion merely strengthened by true premises?
Is the conclusion even related to the premises?
Do comparisons and analogies cover their intended ground?
Are there alternative explanations and meanings to those presented?
Are causal relationships being inferred from simple correlations?
Are false dichotomies being drawn?
Are slippery slopes being threatened?
Are straw men being sacrificed?
Are red herrings being served?
Should broad or public acceptance be used to certify truth?
Are statistics and graphics being used to manipulate impressions?
Justice: Is testimony relevant? Is evidence relevant?
Justice: Is evidence material, with fact connected directly to outcome?
Justice: How are statistical methods applied to particulars?
Justice: How much inference or reach is required?

Lacunae (Left Out)
“What’s left out that might change our interpretation?”
“Is material adapted to avoid flak, damaging criticism, or lawsuits?”
Do premises concur with experience outside the argument’s context?
Have talking points or universe of discourse been carefully controlled?
What is being marginalized, discounted, or ignored?
Are non-normative instances being summarily dismissed?
Are samples truly representative?
Does the question have other dimensions that are being ignored?
What would misdirection or attentional blindness achieve?
Has data sampling been fair and representative?
Has supporting data been sampled broadly enough?
Are omissions innocent, negligent, or deceptive?
What sort of information would cognitive bias omit?
What sort of missing data would alter the conclusion?
Who is the editor in chief here?
Are multiple points of view, frames, and lexicons being shared?
Are other sources of opinion available, especially from out-groups?
Might this be only half true? What constitutes the whole truth?
What might an imaginary opponent or devil’s advocate assert?
Are alternate conclusions marginalized, exaggerated, or caricatured?
Have second opinions been sought?
Are we learning what we need to learn?
Should control groups have been required?
What authorities and sources are being ignored?
Have materials been dumbed down into sound bites and buzz words?
Justice: Has evidence been suppressed and why?
Justice: Is evidence withheld out of negligence or incompetence?
Justice: Is evidence withheld out of intent or malice?
Justice: Are testimony and evidence rejected for irrelevant reasons?
Justice: Should privileged communication be considered?
Justice: Should permissible withholding be considered?




3.2 - Evolved Heuristics and Processes

By Affected Domain: Sensorimotor, Native, Accommodating,
Situational, Emotional, Personal, Social, Cultural, Linguistic

By Affected Domain: Sensorimotor
    Aesthetic sense would refer here to an appreciation of beauty existing prior to the setting of personal standards. Even though little appreciation is shown by those with unmet Maslovian deficiency needs, the sense seems to emerge, with plenty of variation, once these have been met. Cue the spectacular banjos in Deliverance. Socially, we have an unfortunate innate tendency to associate physical beauty with moral goodness, a handicap to the plain and a boon to the beautiful. While this may come from somewhere genetically, it’s by no means a reliable rule of thumb, and it’s an intuition to be watched with a readiness to override it. We are drawn in some ways to beauty out of an innate association with health and good order.
    Attentional arousal, and the lighting up of consciousness, is driven by the reticular formation, a configuration of several brain regions with particularly important roles for the brainstem. The mind is more flashlight than lantern, and outside of what is selected is darkness. We’re never aware of more than a small part of our mental activity or our sensed environment. The brain network that quickly estimates the relevance of raw stimulus and draws attention there is called the salience network. Reticular activation has many functions, but with regard to attention, important triggers are suspicions or suggestions of salience, relevance, pertinence, infrequency, homeostatic need, threat, contrast, novelty, and dissonance. While attention is normally triggered by unconscious processes, sensory signals, and affective states, higher cognitive processes are able to intervene and redirect it. Attention can be paid as well as grabbed. Attention is cognitively expensive (hence paid) and isn’t normally apportioned very generously. Its allocation is called centration, or focalization, on a limited portion of a stimulus or a discrete aspect of available information, with the consequent inattentiveness to what remains. Attention, whether reflexive or intended, makes regions and portions of memory available on standby for use, a dynamic assemblage called working memory.
    Attentional limitations circumscribe our multitasking abilities and leave us subject to misdirection and other manifestations of attentional blindness. It’s fun when we’re watching magicians, but not so much when we learn we’ve been conned. We don’t always have a say in what draws our attention, either. High priorities will go to novelty, relevance, resolving cognitive dissonance, and satisfaction of pressing needs. Even masters of concentration often have to wait patiently in this queue.
    Automatic processes are capable of running without the need for attention, without drawing upon general processing resources, or interfering with other concurrent thought processes. This is a function of procedural memory. With this, we can drive for miles in dream and reverie, or imagined conversations, and not wake up until the stop sign has passed us by and the patrolman’s lights start to flash.
    Focusing effect can place unwarranted degrees of attention on a single aspect of a situation or problem, undervaluing or underestimating others. This may be driven by salience or personal relevance only, and may overlook more important aspects of a situation.
    Jamais vu is the opposite of déjà vu. We know we’ve had this experience before, but it feels like this is the first time. “Have you ever really looked at your hand?” is a commonplace among persons in altered states. This can break down mental sets and encourage us to reopen cases, to look again (re-spect), or revisit things we only thought we knew.
    Mimicry is learning by copying or mirroring the activities of others, making second-hand experience feel more like first-hand. This appears to be reinforced by mirroring neural networks that simultaneously activate sensory and effector modalities, so that the seeing of something being done may mean the activity is also being rehearsed by motor neurons.
    Music, drumming, dance, chanting, and song are human universals, though specific forms vary widely across cultures. Universality suggests some path of genetic inheritance.
    Naïve diversification is an inclination to favor more variety in experience if this choice is made in advance, and to favor familiarity more if choices are made progressively or sequentially.
    Novelty preference will draw attention to things not yet experienced or not yet understood until some level of satisfactory familiarity is reached, or until you’ve been there and done that.
    Object permanence or persistence is the benefit-of-doubt assumption we learn to make that an object still exists when it’s removed from our immediate experience, and that it retains its basic shape and other properties when our perspective on it changes. We begin to learn this in infancy, and several higher-order animal species use this process as well.
    Picture superiority effect is expressed in the saying that a picture is worth a thousand words. Concepts can be learned and recalled more readily when illustrated. Visual constructions are a lot more anciently familiar to the brain than language, and they have the added advantage of not being limited to linear representations and the syntax that those require.
    Play is a universal, found throughout the animal kingdom, particularly in youth. Play can be physical or imaginative. It’s often lost in adults attending to matters of consequence. This is a foolish move.
    Priming effect occurs when exposure to one stimulus directs the response to a subsequent one. Priming sets the stage for an experience to come. It may narrow the field of view, or establish the perceptual frame to be used, or pre-constrain a universe of terms or discourse. It may do all of this purely by accident, or simply by virtue of this being the leading stimulus. It will draw upon the memories related to itself first, which become the easiest to access, and so set up the availability heuristic (next section).
    Relevance preference will draw attention to phenomena deemed relevant to our short-term needs, physical or otherwise, including needs for a sense of meaningfulness. It may require intervention by higher order functions to illuminate relevance to long-term needs and thus defer gratification. The fact that we focus on the relevant renders the world more relevant-looking, giving us a bias towards sensing meaningfulness.

Native
    Altered states, at least in motivations to seek them and methods for their attainment, are a human universal. The strength of motivation varies widely, particularly along a timidity-courage axis. Most may be content with milder forms like fermented beverages, or music, drumming, dance, chanting, and song. But there seems to be enough of the courageous for each tribe or village to have its own shaman.
    Analysis paralysis occurs when snap judgment isn’t snappy enough and we get sucked into too much detail. This is encapsulated in an anonymous poem: “A centipede was happy quite, until a frog in fun said: ‘Pray tell which leg comes after which?’ This raised her mind to such a pitch, She lay distracted in a ditch, Considering how to run.” Satisficing will usually put a stop to this.
    Anchoring-and-adjustment is an inclination to anchor a reference frame or context to the first piece of information presented, setting the stage for the cognitive work to follow. It’s a priming effect. This can preset a less-than-optimum universe of discourse, or frame a perception in too small or too large a context. Errors identified by Tversky and Kahneman can be due to: “Insufficient adjustment; biases in the evaluation of conjunctive and disjunctive events; and anchoring in the assessment of subjective probability distributions.”
    Apophenia is pattern seeking at its most over-imaginative, a tendency to project meaningful form onto random input. It begins with even less pre-existing suggestion of form than pareidolia, q.v. Typical examples include ganzfeld experiments using dimensionless white fields, visions during snow blindness or whiteouts, or hearing voices in static or white noise. In evolutionary psychology, it’s important for us to look at potential explanations for the survival of these rather sloppy heuristics or cognitive shortcuts, even when they so often lead to error. This is a real challenge, since they are also active in our search for such explanations.
    Availability heuristic over-relies on assistance from memories that come more readily, easily, or quickly to mind. Recentness, immediacy, freshness, or frequency of repetition will make this data more readily available to awareness. The mind seems to assume that if this perception has grabbed this much attention, and recently, then it must be both important and currently relevant. This assumption only reduces cognitive load when the assumption is correct. Errors identified by Tversky and Kahneman can be due to: “biases due to retrievability of instances (familiarity, salience); biases due to the effectiveness of search sets; biases of imaginability; and illusory correlation.”
    Causal inference or explanation looks for the causes of effects, often inferring incorrectly from illusory correlation, either concurrence (cum hoc) or sequential occurrence (post hoc). At its most basic, causal inference arises from classical and operant conditioning. At its most simple-minded, it may arise from single-trial learning, particularly in cases of traumatic or flashbulb memory. An ability to construct causal narratives for natural events allows us to make future predictions. Deborah Kelemen’s coinage of “promiscuous teleology” emerges from the idea that, from childhood, we are driven to look for causes in all effects, and with intentions behind them if these can be found. In aggregate, this suggests an intentional design behind clusters of events, and by extension, in the grand aggregation, a grand design, and a still grander designer. It’s an old habit of mind that’s difficult to break later, and one that’s often reinforced by ease of linguistic and grammatical formulation. When speaking of evolution, for instance, we almost have to set a little alarm that goes off whenever we use the words “to” and “for.” Explanation doesn’t require this. When this heuristic insists on seeking causes where none are to be found, animistic, supernatural, or magical influences are often assumed or invented. Illusory correlation, cum hoc and post hoc, are the most common errors here. Most living things arise with some intention, however dim, if only to eat, survive, and reproduce. Something that looks like a counterpart of that may be apparent in anything following a course, as if obedient to natural law. But that doesn’t mean that these things exist to enact a plan or pursue a goal, and certainly not that these things are happening for a reason.
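    The illusory correlation behind cum hoc reasoning is easy to demonstrate. Here is a minimal simulation sketch (in Python with numpy, used purely as an illustration; the variable names are hypothetical and not from the text) showing how often two completely independent random walks, trends with no causal connection whatsoever, still produce a sizable correlation coefficient:

import numpy as np

rng = np.random.default_rng(1)
trials, steps = 2_000, 500
sizable = 0

for _ in range(trials):
    # Two independent trends, say storks counted and babies born:
    walk_a = np.cumsum(rng.standard_normal(steps))
    walk_b = np.cumsum(rng.standard_normal(steps))
    # Pearson correlation between the two unrelated series:
    r = np.corrcoef(walk_a, walk_b)[0, 1]
    if abs(r) > 0.5:
        sizable += 1

print(f"Independent walk pairs with |r| > 0.5: {sizable / trials:.0%}")

    A causal-inference heuristic that treats every strong correlation as a cause waiting to be named will be fooled by a substantial fraction of these pairs.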
    Chunking is the combination of multiple perceptions into a single thing to hold more easily in the mind and memory. Individual pieces of information will be bound together into a larger schema, with its own name and characteristics, such that when recalled, details are recalled as well. This also enables more information to be held in short term and working memory.
    Classical conditioning is a form of learning in which two different stimuli are repeatedly paired, such that the occurrence of one now elicits the response to the other. One of the stimuli may be functionally neutral, such as Pavlov’s bell.
    Confabulation is an ad hoc explanation, an account made up on the fly, often from the quickest things to come to the subconscious mind. It can become a form of misattribution where imagination is mistaken for a memory. Our brains are wired to fill in missing information to make coherent stories wherever lacunae don’t make any sense. We can encounter this phenomenon nightly, where random neural nets, letting go of their standby alerts and agitations, leave us with unconscious and semi-conscious images that need to be strung together as dreams. Then, at some point, the story may become self-sustaining. We also see this in sleep paralysis, where the infilled imagery and confabulated stories can range from incubi and succubi to alien abductions. But there’s more to it. When it pops up into more rational realms, scientists can sometimes use the new material as hypotheses. Poets can turn it into poems. We also make up narrative accounts for the arising of feelings and emotions, or to explain choices we made long before our reason or attention got involved. We often do this in ways that support our self and social schemas, dismissing any accounts which might conflict with these.
    Context effect makes memories more accessible if they occurred in contexts similar to the present. Context is part of the net of associations linked to a particular memory. An affective version of this is known as the mood congruency effect.
    Data mining sifts through large amounts of data in search of patterns, but will often be deceived by the apparent local patterns that occur in random sequences, as we see in the gambler’s fallacy.
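    That random sequences really do contain convincing local patterns is easy to verify. Here is a minimal sketch (again in Python with numpy, offered as an illustration under stated assumptions rather than anything from the text) estimating how often a seemingly meaningful run of six heads appears in an ordinary session of fair coin flips:

import numpy as np

rng = np.random.default_rng(0)

def has_run(flips, length=6):
    # True if the 0/1 sequence contains `length` consecutive 1s (heads).
    run = 0
    for f in flips:
        run = run + 1 if f == 1 else 0
        if run >= length:
            return True
    return False

trials, n_flips = 10_000, 200
hits = sum(has_run(rng.integers(0, 2, n_flips)) for _ in range(trials))
print(f"Chance of a run of 6 heads in {n_flips} flips: {hits / trials:.0%}")

    Local order of this sort is exactly what the real rules of randomness predict, so a data miner who goes looking for patterns will always find some.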
    Dichotomy or bifurcation seeks to divide and conceptualize by twos, with the most famous being either body and mind or Yin and Yang. Aside from black-and-white dichotomies that erase the gray areas between (and neglect color altogether), there are further problems in assuming that all dichotomies are of the same type. Some might try to make the claim that white is to black as good is to evil as us is to them as male is to female, but this set of four pairs really wants eight columns for its four types of dichotomy.
    Equivalence, sameness, or substitution of equivalents is a human universal. Material things are normally interchangeable or fungible. Exceptions exist, as with the animism heuristic, where the special something of something, especially a living thing, its medicine or mana, cannot be duplicated and substitutions are not acceptable. An exact replica of the Mona Lisa, accurate down to the last molecule, still wouldn’t be worth much money.
    Exemplification is appointing a representative of a class. Exemplars are often made of the most vivid or dramatic examples and can therefore set an unreasonable standard.
    Fluency heuristic will favor an option processed more fluently, faster, or more smoothly than another, on an initial assumption that this option has a higher value or a better fit.
    Fortune, fate, luck, or destiny is a nearly universal perception unless we are raised against it, or turn against the idea later in life. Even then, its absence leaves a hole to be filled, often with vocation, calling, a shining path, a golden path, personal purpose, or higher purpose. Of course this comes with a companion sense of misfortune.
    Gaze heuristic is an extreme focus on a single operation or variable, such as catching prey or a ball. Being the ball, or keeping your eye on the ball, really does work. It’s not just something they say.
    Less-is-better effect will prefer a smaller or simpler set of data, even though it may entail errors of sample size.
    Naive realism is the readiness to assume that reality is as perceived by the sensorium, or that our umwelt gives us a fair, objective, and accurate model of things. We assume we see reality as it truly is, as anyone can see, and anyone with common sense agrees. The most we have to acknowledge is that we have different points of view or perspectives on the scene before us. This is really all we can do until cultural learning comes into play. Our organisms create our world, but not the world. Our personal world is a personal construct, but it isn’t constructed at random, and while we don’t share the same finished product as other humans and other species, we can make enough shared inferences to get by. It isn’t wildly different from person to person. It’s true that naive realism doesn’t give us a proper physicist’s picture, or multiple perspectives simultaneously. Pictures couldn’t be called true representations, even if the E-M spectrum were limited to visible light. You’d have to see all sides of a thing at once, and get your preconceptions and emotions out of the picture. According to Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” We don’t have that. To use a computer metaphor, which we are trying not to overuse here, the naive reality is like the graphical interface, the brain’s interconnected structure is the encoded wetware-plus-software, and the ones and zeros are the atomic/quarky/quantum reality within and outside of the brain.
    Naming, or assignment of lexemic associations and mnemonic handles to classes of one, is a human universal. But with inanimate objects, this is only done when some special or animistic characteristics are perceived. Like your Blarney Stone.
    Narrative preference is for learning that mimics life moving through time as experienced biologically, calling up episodic memories and associations in autobiographical sequence and memory. Minds are better pre-adapted to schemas and scripts than to abstractions, and thinking abstractly isn’t a skill that comes as naturally or as early in life. Narration sets us up in a more lifelike naive realism: we can live along with it, unlike explanation, and it has more power to evoke sensory and emotional memory. Narrative strings together social role and behavioral archetypes into convergently recurring scripts, and we have appetites for learning these. Narrative translated into films, plays, and acting out adds another level of impact and memorableness, getting mirror and efferent neurons involved in the learning. The human mind seems to care more about meaning, feeling that things make sense, than about truth. And even when we can’t see literal truth, we’ll find metaphor and analogy. Storytelling may be the most powerful way for us to learn, and may overwhelm more rational presentations of material, even where stories lack any rational plausibility. Our lives are stories. This is an evolved preference that’s built around how we live life. It wasn’t created in anticipation of a learned language overlay, but it does become the substratum for our preferential mode of cultural learning. The comical side of the narrative preference is the common aversion to story problems, which mix narrative with abstraction. A primitive form: Og leave big rock, walk slow. Oona leave little water, walk fast.
    Numeracy is almost an evolved deficiency. Some cultures may only have ideas for one, two, three, and many. Numbers still appear to many people as magic, or to some, from school age up, as nemeses. Subjective senses such as imposingness will often take the place of larger number sets.
    Operant conditioning, or instrumental learning, is a form of learning that modifies the strength or frequency of a behavior with reinforcing rewards or punishments. Positive and negative here do not refer to pleasant or unpleasant quality, but rather to the addition or removal of a stimulus. An ass-whupping is a positive punishment, and not getting one for being a good boy is a negatively reinforcing reward. They just did this to confuse people.
    Pareidolia is a tendency to project meaningful form onto mere suggestions of form. This is most common in vision, hearing, and touch. Seeing figures in clouds, Jesus on toast, Rorschach ink-blot tests, hearing voices in the rapids, hearing non-existent hidden messages on records being played in reverse, or misidentifying that squishy thing hidden in the box, are classic examples. This is similar to apophenia, but it begins with at least some degree of form instead of random input. Pareidolia is an active part of most of the human systems of divination, but this is not a refutation or criticism. Operating unconsciously, or right at the threshold of awareness, the process is in closer contact with the subconscious than our rational minds can ever hope to be. Forms of divination stimulate our projections, often revealing the subliminal, even if this is distorted. The images that we connect with pareidolia might be snapshots in a dream sequence, dates on a calendar, letters of the alphabet, or numbers on a list. We can always find a way to make meaning where none really existed beforehand. We also have the ability to make our little strings of freshly connected images sound plausible to others. If we have the social magnetism, we can get our fellows to take our connections quite seriously, so that these become adopted and built into the thinking processes of our followers. Still, we need something better anchored, perhaps in something that science can see, if we’re going to call our fabricated stories fundamentally meaningful. There is also such a thing as making too much meaning, seeing too much as being meaningfully connected, a common symptom of paranoia and a common characteristic of conspiracy theories.
    Pattern detection or recognition. We seek patterns, and usually find them, even if we have to invent them with pareidolia. Symmetry and regularity suggest predictability, and that’s good for making good choices. This is also a task for science. Unfortunately, we often succumb to wishful thinking and discover patterns, or false positives, that we want to be there, but are not. Hood (2009) writes “The human brain evolved a drive and associated heuristics to detect patterns in the world and infer their causes or agencies.” He defines the “supersense” as the “inclination to infer that there are hidden forces that create the patterns that we think we detect.” The word supersense has some of the wrong implications for me, but he regards it as a sense for the supernatural which can be either secular or religious. Both the Doctrine of Signatures (from ancient medicine) and the Hermetic “as above, so below” look for patterns to connect different levels and scales of reality, often with the intent to nest them as analogies and use these to cause magical effects. Aberrations of this and related heuristics are common in such mental disorders as schizophrenia and religion. But they were also conserved by evolution. They helped us to jump quickly to conclusions and actions long before we had reason and language, and they are still very much with us. Sometimes they still help us even better than reason and language. We might get the tiniest bit of an edge when the movement we see in the grass is really the tiger we imagine and not just the wind. Clustering illusion is one example of pattern detection, which may perceive meaning in unexpected runs in small samples of random data, like tossing heads six times in a row being taken for a sign. Omens are another example. Our capacity for thin-slicing, being roughly synonymous with snap judgment, can make rapid inferences with the minimal information gleaned in brief windows of time or experience. With data mining we can sift through large quantities of data, and the real rules of randomness predict that local patterns can always be found, although we’ve yet to find monkeys typing even one sentence of Shakespeare. Pattern seeking may be supported by an insecurity that demands closure of knowledge or a sense of predictability to support a sense of being in control.
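
    A quick way to deflate the clustering illusion is to compute how unremarkable such runs actually are. Below is a minimal Python simulation, offered only as illustration and not drawn from any cited source: it estimates the chance that a run of at least six heads appears somewhere in 200 fair coin tosses, and the answer comes out near eighty percent. The omen is closer to a guarantee.

        import random

        def chance_of_run(n_tosses=200, run_length=6, trials=10000):
            """Estimate the probability that a run of heads at least
            run_length long appears somewhere in n_tosses fair flips."""
            hits = 0
            for _ in range(trials):
                streak, best = 0, 0
                for _ in range(n_tosses):
                    streak = streak + 1 if random.random() < 0.5 else 0
                    best = max(best, streak)
                if best >= run_length:
                    hits += 1
            return hits / trials

        print(chance_of_run())  # typically around 0.8
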
    Recognition heuristic will tend to assign a higher value to an option that’s already familiar. Recognition connects the new or freshly perceived with the known, beginning with faces in infancy, and ending with trying to find the right words in our dotage.
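
    In Goldstein and Gigerenzer’s formulation, the heuristic applies only when exactly one of two options is recognized. The sketch below is a minimal Python rendering of that rule, with hypothetical city names as the example.

        def recognition_choice(a, b, recognized):
            """Recognition heuristic: if exactly one option is
            recognized, infer that it scores higher on the criterion;
            otherwise the heuristic does not apply."""
            if (a in recognized) != (b in recognized):
                return a if a in recognized else b
            return None  # defer to other cues or guess

        # Which city is larger? Recognition alone often does well.
        print(recognition_choice("Munich", "Recklinghausen", {"Munich"}))
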
    Reification, or entification, will attribute a metaphysical, physical, material or objective sense of reality or thing-ness to patterns, relations, processes, activities, mental constructions, and mystical experiences. Processes like consciousness, emotion, or emergent qualia do have a kind of reality, but this is distinct from materiality, without at the same time becoming things in a parallel, metaphysical, or spiritual reality. The arupa jhanas of Buddhist practice may advise meditation on infinite space or infinite consciousness, but these are stretching exercises for the mind, not objective cosmic realities or principles to be explored. Naive realism is a reification of the sensory world.
    Representativeness or classification heuristic, or mental sorting, prepares perceptual and mnemonic substrata for lexemic and conceptual associations. It’s a rush to judgment on a subject based on a larger classification or category into which it’s been placed. A quick assessment of similarity may treat this as an equation. The perceived similarity, however, may be little more than a superficial resemblance, or irrelevant to the puzzle at hand. It bases decisions on similarities to generalizations, stereotypes, or prototypes. In more non-native domains, we would more likely call it categorizing. Among the things represented or classified, we find: opposites and binary conceptions and distinctions (substratum for antonyms); similarity heuristic (substratum for synonyms); divisions of spectral experiences (as of colors, notes, tastes); taxonomies of fauna, flora, and fungi; types of groups and social dynamics (decision making, coalitions); kinds of tools and instrumental objects by utility (containers, toys, props, weapons); parts and wholes; doer and done to (giver and receiver, active and passive); subordinate and superordinate classes; traits, modifications, and properties of things (substratum for adjectives); traits, modifications, and properties of activities (substratum for adverbs); norms and exceptions; low integers and crude measures; inclusion and exclusion; and classification of internal states and emotions. Errors identified by Tversky and Kahneman can be due to “Insensitivity to prior probability (base rate fallacy); insensitivity to sample size; misconceptions of chance (same as sample size); insensitivity to predictability (unwarranted guesses); the illusion of validity (unwarranted confidence); and misconceptions of regression (to the mean).” We might also add that categorization sometimes draws lines across nature where there are none, as we put conceptual distance between two adjacent points on a spectrum, or at arbitrary points in fuzzy transitions.
    Satisficing, introduced by Herbert A. Simon (1956), is the termination of further inquiry or problem-solving behavior when a sense of sufficiency for the task has been reached, or an acceptability threshold is met. It’s where the learning becomes having learned. We get a feeling of knowing enough, at least for now, and it’s this feeling, rather than the knowledge, that puts the brakes on further inquiry. At this point, we need to grow dissatisfied before resuming the investigation: otherwise it’s case closed. This is a stop search command that has both merits and hazards, depending on how pressing the needs are. We nearly always have constraints on our efforts, of limited time, of limited knowledge, of finite computational ability. All but the simplest, most formal forms of our rationality will be bounded, leaving us reliant on imperfect induction and inference. Simon says (!) “Decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world.” Errors occur in being satisfied with sufficiency too soon, as we quit investigating and squat or hunker down in our beliefs.
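
    The logic is easy to state as a procedure. A minimal sketch follows, my rendering rather than Simon’s notation: scan the options in the order encountered and stop at the first one that clears an aspiration threshold, never asking whether something better lies further on.

        def satisfice(options, threshold, score):
            """Return the first option whose score clears the
            aspiration threshold; None means nothing yet sufficed."""
            for option in options:
                if score(option) >= threshold:
                    return option  # case closed; search stops here
            return None            # still dissatisfied; keep looking

        apartments = [("A", 5), ("B", 7), ("C", 9)]
        print(satisfice(apartments, 7, lambda o: o[1]))  # ("B", 7); C is never weighed
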
    Self-generation effect refers to the superiority of participatory or interactive learning. Material is better recalled when we learn by doing, firsthand, getting involved, than when we receive information passively. We can improve on our second-hand knowledge (“learning in other heads”) and vicarious trial and error if we put ourselves through more active paces in the process, enriching second-hand ideas with our own thoughts, or even by testing ourselves on a passage instead of just rereading it, which is called the testing effect. This accounts for much of the trouble we cause in our teen years.
    Similarity heuristic is the comparison of a thing or experience to the nearest thing in memory, or a search for that nearest thing. Association likens one experience to another and links them in memory, permitting recall of both in working memory. These need not be the same kinds or classes of experience, but may associate sensations, lexemes, concepts, and emotions, with one able to stand in as metaphor or symbol for another. We make choices based on similarity, assuming these choices will result in similar affective responses.
    Simplicity preference will opt for solutions to problems that best reduce our cognitive load, a sort of built-in oversimplification of the Occam’s Razor idea. It’s a cousin to the fluency heuristic.
    Spacing effect refers to information being better recalled if it’s repeated over a longer span of time. It may be related to having more contexts associated to the thing being learned. A text will be better remembered if it is read twice one month apart than if it’s read twice in a day.
    Stereotyping allows a simple perceptual trigger to infill general content from a class to which that trigger is thought to belong. If a man’s name begins with “Professor,” he’s likely to be a damned beatnik communist liberal. The mind won’t work any harder than necessary, and it’s also inclined to dismiss what doesn’t fit. When the content and character too far exceed the boundaries of a stereotype, the extended portions are often lopped off, so that the particular fits within the general category. This is called a procrustean bed, after the myth of Procrustes, who made fine furniture, but altered his clients to fit his beds, not the beds to fit his clients. For all of its flaws, stereotyping has the instrumental value of reducing cognitive load and avoiding expenses with diminishing returns.
    Thin-slicing, or snap judgment, will use extremely limited information to produce an inference or judgment. We might infer a whole personality from another person’s micro-expression or choice in footwear.

Accommodating
    Anthropocentrism is the tendency to see situations only from the human point of view. Given that we are naturally constrained in our perceptions to the human umwelt, this is easy enough to understand. But its consequences are grievous, especially in combination with human exceptionalism, and its influence on science is just an embarrassment: animals have instincts, humans have reason.
    Anthropomorphism is a humanizing interpretation of non-human species and even inanimate objects, imbuing these with human traits, characteristics, motivations, and emotions. But the opposite is often the case as well, that we don’t see enough of what we have in common with other life forms.
    Association heuristic allows us to link different experiences from widely separated sensory and cognitive modalities, embed these links into memory, and recall clusters of associated memories. The learning of lexemes and recall from semantic memory along with their associations may be the big key in understanding language learning and its evolution.
    Been-there-done-that is a reassertion of satisficing that ends prompts to further inquiry. It will presume that all important value has already been extracted from an experience and nothing is lost in moving on.
    Cognitive biases are more at home in this domain than any other, since they operate out of accumulated memory and its resistance to change. But they also arise in the situational, emotional, personal, and social domains.
    Familiarity heuristic relies on familiar or time-tested approaches to current puzzles. The familiar is favored over the novel.
    Levels of processing effect is the theoretical suggestion, proposed by Fergus I. M. Craik and Robert S. Lockhart in 1972, that differences in depth of analysis of experience, and the routes by which this is encoded in memory, directly affect the fragility or durability of associated memories. Depth of analysis would include familiar associations, self-referential or meta functions, multiple perspectives, etc. Disappointingly, but not surprisingly, common statements of the theory don’t seem to acknowledge a place for associated affect.
    Metaphor and analogy in pre-linguistic shapes become figures of speech, nested analogies, and correlative thought, often related to magical thinking. The process of recognizing and developing these begins in this domain, but extended analogies and models aren’t developed yet, nor are real inferences made from them here. At this point they are just associations, and sensory or conceptual metaphors.
    Prediction by divination, omens, and signs is a universal, with the methods varying by culture. More literate cultures tend to substitute symbols selected at random for physical omens found in the environment. Omens themselves may be the result of one-trial learning and the sharing of the anecdote. Pareidolia is often a factor in divination, as it can read signs and signals arising closer to the subconscious or subliminal, and thereby take advantage of more intuitive processes.
    Recency illusion is the sense that perceptions only recently noticed suddenly begin to appear with frequency. We learn a new word and we suddenly start noticing it everywhere.
    Superstition represents a failure of multiple higher-order cognitive skills to augment native heuristics and simpler cognitive processes. Conclusions that have been jumped to remain, and are supported by the hopes and fears of older parts of the brain without conscious intervention. Poor systematizing skills are especially implicated, together with poor mental rotation, and animism, or the awarding of intentions to the inanimate (Lindeman 2016).
    Truth and falsehood are universals as basic conceptions, or at least trust and good faith against deceit and betrayal.

Situational
    Agency assumption tries to distinguish actions under self-control from those that are not. An assumption of agency will imply a subsequent accountability, but may be withdrawn when accounts call for penalties instead of rewards. Since so many of our decisions have already been made by the time they rise into awareness, agency is often a psychological convenience. The Libet experiment is often raised as a refutation of agency altogether, but this employs a straw man fallacy. Events in the mind can still become true causes in the physical world, but these processes take longer than the few hundred milliseconds that typical conclusions from the Libet experiment arbitrarily limit them to. However, it may still be that most of the people most of the time either lack or else do not engage what is commonly thought of as free will.
    Agent detection may help us infer the presence of people, animals, and other organisms that would harm or eat us. But it can get overactive and perceive inanimate objects, or imagine supernatural agents, as if they have minds and intentions. Earlier on, the forces of nature were regarded as deities.
    Attribute substitution will import an easy stock response or reflex to take the place of a complex part of a problem or cognitive task. Because I said so. It is what it is. It’s a way of giving up, and not a good way to teach children.
    Boiling frog adaptation refers to gradual acclimatization to an increasingly intense stimulus, but the phenomenon itself is only an urban legend.
    Cognitive fluidity is the ability to switch between heuristics and modules at will, enabling, among other things, a broader use of metaphor and analogy and enhanced creativity. De Bono’s lateral thinking is an example. This idea is the attempt of archeologist Steven Mithen to account for behavioral modernity in homo sapiens and the development of “Swiss penknife minds.” The term post-Toba is included in my own hypothesis.
    Confidence preference will prefer scenarios where we either have a sense of control or we are given clear direction by an authority.
    Contagion heuristic or wariness, or the cooties, will avoid experiences or contact with others if they are perceived to have contacted something even metaphorically or supernaturally diseased or unhealthful.
    Contamination heuristic will warn us away from the poisonous, venomous, toxic, contagious, and unsanitary, inferring these from evolved sensory cues. This gives rise to the innate phobias that affect some but not all of us, as of spiders and snakes.
    Default effect is essentially choice without the hassle of choosing. We will generally opt for the default condition and thereby save ourselves the trouble of weighing options, unless the choice threatens consequences significantly greater than the effort-expense of deliberation.
    Equity or 1/N heuristic distributes resources among available options. It’s also known in the forms of diversification, hedging, and not putting all of your eggs in one basket.
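
    As a sketch, and a trivial one, but it makes the point: the rule ignores everything known about the options and simply divides the resource evenly.

        def one_over_n(budget, options):
            """Equity or 1/N heuristic: equal shares, no analysis."""
            share = budget / len(options)
            return {option: share for option in options}

        print(one_over_n(900, ["stocks", "bonds", "cash"]))  # 300.0 each
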
    Inferential prediction and predication, like extrapolation and interpolation, recombination of conceptual parts, nesting of analogies, and other forms of conjectural reasoning that occur in context-independent cognition allow us to imagine and assess things that aren’t there. We can both find and create the missing pieces to our puzzles.
    Intuitive physics allows us to estimate the weight of a stone to be moved, or infer trajectories and even allow for wind shear when playing catch.
    Intuitive probability lets us guesstimate the likelihood of things or events reoccurring, but this is of course subject to a number of automated biases that are often exploited in fallacies. This may be accomplished in part by prior learning in similar situations, or by analogy with other kinds of experience, but no real math or statistics is involved yet.
    Law of the instrument (named by Abraham Kaplan) is the inclination we have to organize an experience or perceive a task in order to fit the tools that we have to work with. “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” The idea is often attributed to Maslow, but it’s older.
    Necessity and sufficiency, perceived in combination, satisfy questions of non-causal precedence and prerequisites. They say when enough is enough.
    Orientation in space and landscape is a universal, both in developing a sense of spatial scale and directions, and in giving direction to others. Both mapping in the dirt and mental mapping are native functions.
    Quick-estimation heuristic is a postulated method of generating numerical estimates, but may as easily be a subjective impression of the size of a set relative to the known size of another.
    Recognition primed decision, or take-the-first heuristic, takes a perceptual snapshot of a problem, intuits solutions, and selects the first one that assumed constraints seem to allow. It’s related to the availability heuristic. The concept is the primary outgrowth of “naturalistic decision-making” or NDM theories.
    Savvy in needs satisfaction (food, shelter, clothing, etc.) is a universal. We know to seek these things, and explore and map unfamiliar environments, and are not entirely helpless even without social and cultural learning, as the few feral children we know of can attest. This suggests some level of instinctual knowhow.
    Seasonal awareness and planning, as a universal, might appear to be less universal in tropical climates, but there usually remain monsoonal cycles to be planned for. Weather categories and intuitive predictive algorithms are also human universals.
    Simulation heuristic will assess the likelihood or probability of an event or outcome based on the ease with which it’s envisioned, imagined, or mentally constructed.
    Take-the-best heuristic, articulated by Gigerenzer & Goldstein (1996) will make a choice between alternatives based on the first perception, cue, or point of distinction in the aspect of the choice deemed most important.
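
    Rendered as a procedure, with hypothetical cues: the choice falls to the first cue, taken in order of validity, on which the two options differ, and all remaining cues are simply never consulted.

        def take_the_best(a, b, cues):
            """Take-the-best, sketched: cues are predicate functions
            ordered from most to least valid; decide on the first
            cue that discriminates between the options."""
            for cue in cues:
                va, vb = cue(a), cue(b)
                if va and not vb:
                    return a   # first discriminating cue decides
                if vb and not va:
                    return b
            return None        # no cue discriminates; guess

        # Hypothetical cues for judging which city is larger:
        capitals = {"Berlin"}
        team_cities = {"Berlin", "Dortmund"}
        cues = [lambda c: c in capitals, lambda c: c in team_cities]
        print(take_the_best("Dortmund", "Bielefeld", cues))  # Dortmund, via the second cue
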
    Zeigarnik effect makes the claim that uncompleted or interrupted tasks are remembered better than completed ones. Perhaps clear endings, coming to closure, or wrapping things up allow us to withdraw our attention more completely, as we have now been there and done that. We sense it in dreams about missing that test we were supposed to take.

Emotional
    Affect heuristic uses the current feeling or emotion as a guide to the truth of things and a sign of the quality of the decision that led to it. This is the gut-feeling heuristic, making the hidden decisions but allowing the conscious mind to believe it’s had a say. It’s praised as fairly reliable in Epicurean hedonism, provided that eudaemonia, or human flourishing, was preferred over the baser pleasures. Modern neuroscience recognizes a vital role for the prefrontal cortex in examining our hypothetical options and deciding which one feels best. Feelings are weak, however, in matters of proportion, scale, and time, so the affect heuristic may be narrow in scope, simpleminded, and shortsighted. The heuristic is fundamental to a large part of decision-making, as we run trial and error scenarios and choose the option that promises to feel the best or most satisfying.
    Affection seeking, giving, and trading has effects in both the personal and social domains, but the primary motive is hedonic. Affection is now known to be sought by more non-human species than we ever thought possible.
    Animism is a sense of a living essence or manas in things, each thing with its own particular medicine. This can be a religious experience, as in Shinto, or experienced globally, as with panentheism. Individual things may have their individual essences, precluding an acceptance of even perfect copies. Things may absorb and transfer some of this essence in handling. Relics are worshiped as having something akin to life. Talismans attract and amulets repel mysterious forces. Essence may contaminate where reality would not, much as few people would ingest soup stirred with a never-used toilet bowl brush. Cooties dwell on the threshold between worlds and partake of both. Perceived underlayment of magic, mystery, mysterious agency, sacredness, connectedness, vital energy, life force, vibrations, or otherworldliness can work good or ill, depending on how it’s exploited by higher cognitive and emotional functions. Bruce Hood suggests, “Culture and religion simply capitalize on our inclination to infer hidden dimensions to reality.” At its most mundane, the preference encourages us to give each thing or each kind of thing a name of its own, to store more data, and more interconnected data regarding its specialness. And it drives the curiosity of children. Aside from this being generalized to existence, however, we can distinguish early in life between what’s alive and what only appears agentic, like moving dolls, puppets, and animated pictures. In its pre-rational form, animism must infer things unseen doing the directing of events. This doesn’t make religion and religious speculation innate: those are the cultural exploitations of the sense of mystery that will often combine with the fears of death, illness, and meaninglessness.
    Attitude adjustments have real effects in life, but not through direct causality, which is magical thinking. They may be seen in placebo effects, affirmations, setting intentions, and prayer. They may simply stimulate better health, greater confidence, and more reciprocated social interaction.
    Childhood fears, as of loud noises, heights, insects, snakes, strangers, and nightmares, are human universals. Some develop out of one-trial learning, others may magnify inherited anxiety responses that are subject to eventual unlearning or extinction.
    Cognitive closure may be attempted quickly wherever there is anxiety about remaining uncertainties. To finish an experience is to put an end to anxiety over any threatening future it might pose. Hastening to make sense of a too-complex world is one of the drivers of conspiracy theory, along with a desire to feel in the know in some special way.
    Duration neglect will fail to account for the duration of an event in assessing its importance. This is especially prominent in hot cognition.
    Emotional distinctions, or the articulation of affective states, are human universals, although the complexity of articulation varies with culture, nurture, and temperament. The next chapter represents an attempt to develop a finer granularity or articulation of this.
    Escalation of commitment might also be simply called stubbornness, or throwing good money after bad. We will rely on previous choices, decisions, evaluations, or conclusions already drawn, even when these have proved less than optimum, because investments here have already been made. New investments not only require additional effort - they also threaten to require unlearning or losing the older ones. Where it occurs in specious reasoning it’s called the sunk cost fallacy, and it’s related to the choice-supportive cognitive bias. It’s the push to keep pushing because you’ve gone this far, and time or energy might otherwise be lost.
   Fears of death, existential meaninglessness, infirmity, abandonment, disease, and exile are human universals, and all of them are strong enough to motivate an acceptance of explanations without regard to their rationality or plausibility.
    Hedonic treadmill refers to how the pursuit of happiness game is rigged in favor of dissatisfaction. Expectations adapt primarily upward as goals are satisfied. Our affections of pleasure and happiness can be problematically similar to our sense of acceleration: we will tend to forget them whenever we remain in a balanced state and attend them best when things are changing. We are wired to keep seeking improvement, not homeostasis. This bodes ill for maintaining pleasure and happiness in steady and more sustainable states. This phenomenon is also called hedonic adaptation. We get used to the pleasant things, and until we can learn to control our subjective states, we are left with having to combat this boredom or ennui by adding endless variations to our experiences.
    Humor effect can increase an event’s memorability and value either through the additional cognitive effort required to process the event, the heightened affect it produces, or else by the opportunity it presents to share something with others.
    Magical thinking, where wishes, fantasies, prayers, mindsets, affirmations, and intentions are thought to bring about results, is a human universal, and is sometimes challenging to unlearn, since we may have rewarding emotional experiences while entertaining a hope or having a fantasy.
    Misoneism, a fear, suspicion, dislike, or intolerance of novelty, innovation, or change, is a human universal, though it arises with individual differences, and it can also be overcome.
    Peak-end rule is the tendency to perceive and remember an experience by the average of its qualities at peak intensity and at its end (the dismount, if that’s scored), and to neglect the more average qualities of the overall experience. In PTSD, the worst of the trauma is better recalled than the relief of beginning to survive it.
    Placebo effect “is a false belief that has real value” (Robert Burton). The best known placebo effects are in medicine, where people are cured merely because they believe they are getting medicine, and in biological and psychological research, where subjects believe they might be affected by an introduced variable. Some placebos are absolutely useless, like decaffeinated coffee. It’s always best to buy the name brand placebos and avoid the generics, and the red ones are definitely better than the blue.
    Prospect theory, in its shortest form, charts a tendency to value losses as more significant than exactly equivalent gains. We’re more sensitive to a loss than to a gain: when our precious thing gets lost or stolen we usually have stronger negative feelings than we had positive feelings when we acquired the precious thing in the first place. It’s more painful to have a toy taken from you than it was pleasurable to receive it. But at the same time, and paradoxically, decisions are made in prospect with more concern for gains than losses, and we would rather hear good news and remain ignorant of the bad. We will more eagerly look at information promising better outcomes and avoid researching the potential of negative outcomes.
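
    The asymmetry has a standard quantitative form. The sketch below uses Tversky and Kahneman’s (1992) value function with their median parameter estimates, under which a loss weighs about two and a quarter times as much as an equal gain.

        def prospect_value(x, alpha=0.88, lam=2.25):
            """Prospect theory value function: concave for gains,
            convex and steeper (loss-averse) for losses."""
            return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

        print(prospect_value(100))   # about 57.5: the pleasure of gaining $100
        print(prospect_value(-100))  # about -129.4: the pain of losing it
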
    Scarcity heuristic will assign value to a thing in proportion to its rarity. The scarcer thing will be thought worth a greater effort in its acquisition. There’s a strong cultural component to this, however, and some cultural attitudes may override it almost completely, especially where possessions are regarded as a burden.
    Single or one-trial learning is the result of a particularly intense or traumatic experience, often resulting in persistent and recurring flashbulb memories. It will generalize from a single instance. It can be a major source for phobias and superstitions, or a dietary lesson learned from a bout of food poisoning, or the suddenly acquired wisdom to never mix quaaludes and tequila ever again. It can also lead to dissociative disorders.
    Subliminal cueing can have subtle effects on behavior. Pictures of eyes on the wall of a retail outlet may actually reduce shoplifting. This is related to the priming effect.
    Threat assessment recruits a number of cognitive and affective modules or networks, many of which can trigger fight or flight responses before they are even noticed consciously.
    Transitivity heuristic, transitivity of preferences, or transitive inference, will tend to maintain order across chains of preferences. To prefer bananas to apples and apples to oranges suggests a preference of bananas over oranges. But Warren McCulloch (1965) cites an experiment where “a hundred male rats all deprived of food and sex for a specified period will all prefer food to sex, sex to avoidance of shock, and avoidance of shock to food.”
    Vicarious trial and error is a function of the prefrontal cortex whereby we imagine our options and examine how we will likely react to those scenarios. Appraisal or evaluation nearly always involves affect or emotional reactions, and we more often opt for the option that promises to feel the best.

Personal
    Adornment is social signaling from the personal domain to the social. It’s a human universal communicating a range of conditions related to status and reproductive availability.
    Altered states are sought to explore the inner world, even though most seem content with more timid exploration, smoke, and fermented beverages. Most methods for their attainment are learned, but straightforward natural activities like running, breathing, and vocal expression may set behavior in motion in search of more methods. Ritualized forms develop culturally. Fear of altered states may be learned culturally, but even cultures that avoid them may allow a shaman or two, if not completely obsessed with fears of witchcraft.
    Barnum or Forer effect is a species of pareidolia wherein a subject finds relevant, personal meaning in a general and ambiguous dataset applicable to a wide range of people. What’s perceived is in fact tailored to the individual, but this tailoring is performed by the heuristic itself. The information presented will be cherrypicked and interpreted specifically for personal relevance. The classic example is the daily horoscope, or astrological interpretation in general.
    Conspiracy theory is an aberration combining active agency detection with hyperactive pattern recognition and a personal want to feel especially in the know. It becomes self-satisfied in cognitive closure.
    Dreaming is a universal function of the brain, regardless of whether dreams are experienced and remembered or not. Since their activity is concerned with neural nets recently engaged in working memory or placed on standby, the individual images in dreams are perceived as personal, timely, and otherwise relevant. This gives the pattern seeking and confabulation heuristic a way to string the pieces together in ways that seem to make sense and feel currently relevant.
    Effort heuristic tends to assign a higher value to something requiring more effort to obtain, which often translates incorrectly to a price assigned for other complex and unrelated reasons. Found and free things and opportunities may be deemed cheaper or less valuable. Pricier advice might often be taken more seriously for this reason. You don't always get what you pay for.
    Hygienic personal care is a universal. It’s marvelously satirized in Horace Miner’s 1956 “Body Ritual Among the Nacirema.” Cognitive hygiene is less commonly practiced.
    Introspection illusion is the incorrect assumption that people have direct insight into the origins of their own mental states. The origins, or originations, of both thoughts and emotions are overwhelmingly unconscious processes, and most of our explanations are little more than after-the-fact rationalizations of their appearance. This is not to say that we can’t have original thoughts, born out of conscious creative processes, or that we can’t originate feelings and emotions with conscious intent, or modify them with conscious intervention. It only says that this is a lot more rare than we think it is. Introspection illusion is also a cognitive bias, with which people tend to assume that they have more control over internal states than other people have. It can also be problematic in experimental psychology, which often depends on self-reported assessments for its datasets.
    Psychologist’s fallacy will presume an objectivity of perspective about a perceived behavioral event, or think our perceptions reflect a commonality between perceiver and perceived. It has an analog in anthropomorphism when looking at non-human species. However, we are not always well served in dismissing it, since both human to human and human to animal comparisons do involve a degree of shared nature, and relativity isn’t always the rule.
    The self regarded as both a subject and as an object is a human universal. Self-image, status, and reputation require a degree of self-objectification.

Social
    Age assessment providing needed qualification for social and reproductive relationships is a human universal. The most serious consequences of error here involve her father, and the weapons he may or may not keep loaded.
    Awareness of death, and the mourning of lost companions, is universal in humans and some animals as well, most notably other primates, elephants, and cetaceans.
    Behavioral archetyping is an inherent or inherited tendency to recognize and classify certain typical situations found in social settings. This may be inferred from universal recognition of these kinds of behavior across diverse human societies, and in many cases by behavioral evidence of their occurrence in other primate species. Evolution has shaped the processing of our perceptual input to detect certain types of behaviors and entities in the world. There’s nothing metaphysical or new age, or even all that Jungian about this: genes don’t encode or transmit semantic or eidetic content, they synthesize proteins. Archetypes are just areas of evolved cognitive facility. They pre-exist social, cultural, and linguistic domains. Some candidate examples are adulation, alliance, altruism, apology as contrition, apology as explanation, quid-pro-quo balance sheets, banishment, betrayal, bluff, boasting, censure, cheating, coalition formation, commiseration, competition, cooperation, crime, counsel, deception, dominance, exchange, fairness, feint, flattery, fraud, gossip, gratitude, grooming, hospitality, humor, incest taboo, influence, insult, intimidation, negotiation, nurture, obligation, persecution, praise, pranking, reciprocity, reconciliation, rescue, retaliation, sacrifice, seduction, sport, stinginess, straying, submission, supportiveness, surrender, suspicion, theft, trade, treachery, trust, and xenophobia. Subliminal sensations or perceptions of neurochemical cocktails and combinations of involvement of different parts of the brain may play a major role in their recognition. Being inherited, they are not encoded as concepts, ideas, or symbols, as many mistake archetypes to be. They are merely a preparedness or readiness to perceive, and then build or organize perceptions as schemas and scripts, a priming to sort experience in certain ways or according to specific criteria. This may underlie any innate sense of morality that we inherit, although much of innate morality can be overridden by strong emotions in children, and in adults with poorly developed PFCs. This is Donald Brown’s “classification of behavioral propensities.” The inevitable and unlearned emotional reactions we have in encountering these behaviors form the inherited substratum of our moral sense. Behavioral archetypes underlie many of the scripts that we encounter in understanding the Social domain.
    Cheat detection heuristic posits a neural readiness to perceive deception. Cheating, in at least some degree, appears to be universal in primates. Human children figure out how to lie between 2 and 5, and the frequency seems to peak in the mid-teens. This is likely a function of the developing prefrontal cortex learning to do cost-benefit analysis. Whether done to protect feelings, conceal inadequacies, make excuses, pass blame, promote ourselves, or acquire resources, there are costs to be paid, particularly in loss of trust, the gold standard in social currency, and in any harm done to our self-image as honest, worthwhile people. But overall, as Dan Ariely (2013) writes, “cheating is not limited by risk; it is limited by our ability to rationalize the cheating to ourselves.” Most of us eventually develop intuitive or subliminal limits on the extent of our lying and cheating. And most of us do better in experiments when primed first about honor and honesty. Further, humans count heavily on reliable communication to function as a society. We build most of our minds out of stuff we get from this, so the consequences of dishonest communication plague our species constantly, especially in politics, religion, and economics. These often exploit what we want to hear at the expense of what we need or ought to hear.
    Childhood education and instruction is a human universal, and it seems to bring with it an intuitive understanding of developmental stages and critical learning periods. This is not to say that new parents know what the hell they’re doing. It follows a natural pedagogy that makes much use of demonstration, mimicry, and role modeling.
    Competence and warmth were identified by Susan Fiske (2006) as two “universal dimensions of social cognition,” at least with regard to an intuitive heuristic assessment of others. People classed highly on one axis and low on the other, as a warm incompetence or a cold competence, elicit more complex reactions. Cold incompetence might elicit straightforward contempt or disgust, as warm competence would admiration or respect. Our expressions of these dimensions are reflected in respecting and liking, and assumptions made about agency and communality.
    Conflict mediation and resolution are human universals, but forms vary by culture. Such forms as consensual decision making (that might still allow a voice of dissent), binding arbitration, and restitution or restorative justice get less application than they merit, while adversarialism seems poorly understood to be an inferior approach. Cooperation and cooperative decision making are universals that may be easily subverted or overwritten by cultural forms, as well as by betrayals of trust.
    Division of labor, or specialization in tribal functions, is ancient enough to be innate, although this didn’t evolve to be what it became with urbanization and militarization.
    Emigration of one sex and immigration of the other in marriage is a human universal, to assure heterosis or hybrid vigor and prevent group inbreeding. Which sex moves on varies by tribe. This will deeply affect one sex’s sense of permanence, home, and family bonds.
    Fairness heuristic is an innate sense of justice and a balancing of the books. When violated, it can trigger an anger response. Even infants as young as three months will show preference for puppets and dolls who help others, and will avoid those perceived to be hinderers (Baillargeon, 1994). This can motivate reciprocal altruism with the right beginning; with the wrong one, an ongoing cycle of revenge. The fairness heuristic is best expressed in the Golden Rule, first attributed to Confucius (Analects 15:24).
    Gossip is a human universal, perhaps unconsciously motivated to track the complexities of social interaction, to track social norms and their violators, to maintain a sense of what’s socially appropriate, and to watch the continuous flux in social status, power, and influence. This doesn’t mean it isn’t stupid, or that we should take it seriously.
    Group dispersals, sometimes driven by seasonal factors, environmental conditions, and resource availability, stimulate social protocols for reunion. Tribal gatherings, or superpods of cetaceans, are examples of such reunions.
    If in doubt, copy it heuristic, from Joseph Henrich, identifies an imitative response to role models felt to be either successful or prestigious. A celebrity endorsement can get people to use a deodorant that the spokesperson has never tried, or even get brand new words added to the dictionary.
    Imitate-the-majority heuristic, or follow the majority, will tend to move an individual to where the numbers are, not only for security and safety, but to reduce cognitive load as well. This is why democracy doesn’t work.
    Imitate-the-successful heuristic, or follow-the-best, will try to do that. It may get bitten or smacked down for playing out of its league, or it may find a sponsor.
    In-group assumptions and biases, ethnocentrism, and xenophobia all belong to an inherited drive to seek out points of adhesion with others and a collective identity. It’s supported neurochemically with oxytocin and other rewards.
    Intuitive economics will maintain a general balance or quid pro quo accounting of favors given and owed, and will trigger a sense of indignity when cheated. The process might best be studied first in primates and move to both sides of our timeline from there.
    Moral sense is at least to some degree innate, and we can probably assume that this is at least as developed in us as it is in other great apes. But this leaves quite a range, with bonobos on the moral high ground. Eventually, this evolved broad-brush sketchiness becomes codified by rules, taboos, laws, and religious commandments. David Hume was committed to what seemed like natural law in explaining the moral sense, although he did give voice here to the is-ought problem: that what naturally occurs isn’t necessarily the way things ought to be. But neither can reason stand alone in explaining what ought to be. Perhaps we could assume that beginning with respect for whatever moral encoding nature has given us will lead us into less internal conflict as culture takes over. Perhaps we can also assume that if culture has told us we are angels walking around in meat, that we are in for some significant internal conflicts, especially wherever our naughty bits are involved. Any discussion of an evolved moral sense should also raise the sociobiological subject of group selection. Darwin, in The Descent of Man, wrote, “Although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe . . . an increase in the number of well-endowed men and an advancement in the standard of morality will certainly give an immense advantage to one tribe over another.” For a social group to adapt as a unit, its members must help each other out, even at some cost to individual advantage. We may have needed to evolve ways to tell how adaptive this trade-off might be, and this study would require acknowledgement of both trait and sexual selection.
    Non-verbal communication (NVC) encompasses a wide range of behaviors and our perceptions of behaviors in others. It’s claimed that the majority of information exchanged between individuals in a society is non-verbal. We pick up cues from gestures, postures, and micro-expressions that inform us without needing to rise into full awareness. We are sensitive to meanings embedded in the physical distances we maintain between us. Much attention is given to inferring what’s concealed, and trusting what’s revealed. Physical movements other than yawning can also be contagious. We read emotions pretty well in the tone of voice and inflection in others, even among speakers of the tonal languages. This may be an adaptation to the need to communicate at night. All of these play parts in our empathy and theory of mind, but get conceptually subsumed under intuitive sense or gut feeling. Mimicry and mirroring also play important roles in the transfer of information and social signaling.
    Norm internalization. “Why would natural selection have built us to be norm internalizers? Broadly speaking, internalizing motivations helps us to more effectively and efficiently navigate our social world, a world in which some of the most frequent and dangerous pitfalls involve violating norms. Such motivations may help us avoid short-term temptations, reduce cognitive or attentional loads, or more persuasively communicate our true social commitments to others” (Henrich).
    Out-group assumptions, together with associated xenophobias, suspicions, and strategies, are universal occurrences. These will sometimes exploit or recruit perceptions of common enemies as a cohesive force for an in-group, even where these perceptions have no basis in fact. The occasional needs to exchange mates or trade with other tribes, as well as the high costs of warfare, have prevented these tendencies from becoming even more absolute and deadly.
    Pair-bonding protocols and proscriptions are universal, although forms vary by culture. These include rules around sexual attraction, permanent bonding, erotic expression, homosexuality, cheating, separation, and rape.
    Persuasion, or attempting to persuade or influence, is a social universal with a repertoire of behavioral expressions including gestural synching, non-verbal communication, and personal adornment.
    Proxemic distancing refers to gradients in the distance we keep from each other, according to the type of social interaction, whether intimate, personal, social, or public. There is information to be had from this, although there are both individual differences and a significant degree of learned variations between cultures.
    Punishments for crimes against the collective are human universals, but types vary, some involving labor and service, loss of social access or privilege, sequestration, ostracism, or banishment.
    Social role archetyping is an inherent or inherited tendency to recognize and classify certain typical familial and social roles. This may be inferred from universal recognition of these roles across diverse human societies, and in many cases, by behavioral evidence of their occurrence in some other primate species. They pre-exist our social, cultural, and linguistic domains. Some of the candidate examples are adoptee, adversary, ally, alpha, bully, caregiver, challenger, child, coward, cuckold, elder, explorer, father, fool, gossip, hero, infant, lover, mediator, mother, rebel, sage, sibling, spouse, suitor, thief, sucker, sycophant, and trickster. Subliminal perceptions of neurochemical cocktails and combinations of involvement of different parts of the brain may play a major role in their recognition. Being inherited, they are not encoded as concepts, ideas, or symbols, as many mistake archetypes to be. They are merely a preparedness or readiness to perceive, and then build or organize perceptions as schemas, a priming to sort experience in certain ways or according to specific criteria. This may underlie any innate sense of social structure we inherit. Most of these categories run contrary to a literal sense of social equality or egalitarian society. Social role archetypes underlie many of the schemas we encounter in understanding the Social domain. Assumption of social roles as personal identities is a human universal.
    Social proof heuristic is reference to what others might do under the same circumstances in coming to a decision for action. The social role models for warmth, competence, and status are especially useful here.
    Theory of mind (TOM) allows us to infer the thoughts, emotions, motives, and probable actions of others. The affective contents of our TOM are often referred to by such terms as sympathy, empathy, compassion, mudita, fair-mindedness, and reciprocity. An ability to recognize that others have minds of their own, with their own beliefs and intentions, is necessary for most social interactions. Theory of mind seems to accompany self-awareness in the most cognitively complex creatures, and invariably if they are also socially complex. This seems to hold at least for the great apes, elephants, cetaceans, some parrots, some corvids, and mantas. We don’t know enough yet about TOM in the brighter cephalopods, or about the differences between social and asocial cephalopod species.
    Tit-for-tat heuristic, a concept developed in social game theory, optimizes success by cooperating in round one, and then reciprocating the opponent’s move in subsequent rounds. Trust gets the first benefit of doubt.
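
    The whole strategy fits in a few lines; here is a minimal sketch with hypothetical move labels.

        def tit_for_tat(opponent_history):
            """Cooperate on round one, then repeat whatever the
            opponent did on the previous round."""
            if not opponent_history:
                return "cooperate"       # trust gets the first benefit of doubt
            return opponent_history[-1]  # reciprocate, for good or ill

        print(tit_for_tat([]))                       # cooperate
        print(tit_for_tat(["cooperate", "defect"]))  # defect
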
    Xenophobia reflects an ancient history of competition and conflict between primate troops, and then between human bands and tribes. Our genes learned to look for cues of otherness, probably by species and race first, then gradually getting ever more subtle, and moving into cultural differences like costume and language. Thankfully, we had four major counterforces to this: we still had lebensraum to move away if need be, we needed to connect to exchange mates to avoid inbreeding, we had incentive to cooperate in order to trade, and it served us to learn how to negotiate alliances to combat still larger issues and enemies. Being native and intuitive, xenophobic reactions can occur in the first fraction of a second of a social encounter, before our better angels have a chance to intervene. This even happens to nice people who know better. These can be overcome eventually, but not without work.

Cultural
    Medicine, together with the desire to learn some functional ethnobotany and effective practice, is universal in social groups, but not in individuals, who will often willingly defer to a specialist.
    Organization of violence in hunting and battle is a human universal. The most primitive forms are well developed in any animals that hunt in packs.
    Pedagogic scripts for educating children are universal in both the cultural domain and the social. This also adapts to our developmental stages and critical learning periods, and makes much use of demonstration, mimicry, and role modeling. More explicit forms of communication, including language and explicitly structured experience, are used in the cultural domain.
    Religious instinct is a common theory, but this is an illusion. Seeking more interesting and expansively informative cognitive states is an instinct. Jumping to hasty conclusions about the nature of reality is an instinct. Taking social advantage of other people’s gullibility is an instinct. Seeking hidden agents behind events is an instinct. Doing weird ritualized shit to comfort yourself is an instinct. Smarter people can often work it out that there are big differences between these five, and that these things don’t need to be all addressed in one package of solutions. Religion, as broadly understood in human culture, is a bundled, ideological software package that exploits these more “primitive” instincts. But the usual solution is little more than a kluge. In order for this to be considered an instinct, religiosity would need to be encoded genetically. In evolutionary neuroscience, evolved modules are far more likely to be simple, one-dimensional heuristics, associative pathways, or behavioral traits. Fully integrated biological systems usually take a lot longer to evolve than humans have been around. This is why we need to break the above down into simpler dimensions, components, or threads, which would then be woven together by culture, or not. Nothing as complex, multi-dimensional, and integrated as what we call religiosity could qualify as an instinct, even though we are born with most of its component parts, and without anything remotely resembling a religious ideology. The perennial philosophy conjecture does us a disservice.
    Rites of passage from adolescence into adulthood, and initiatory rites into group membership are human universals. They invariably have some kind of costs or dues to pay.
    Ritualization, ceremony, and sacred practices are universals. These seem to confer a sense of control through participation and tend to reduce uncertainty, particularly when done in groups.
    Taboos, the strong cultural prohibitions against violating sacred values, or triggering a curse, seldom run much deeper than the culture that institutes them. Some taboos may simply be lessons from single-trial learning that got embedded in culture, as might have emerged from eating a poisonous fruit or good food gone bad. Some activities may simply have resulted in too much violence for a tribe to cope with. Much of the cultural and religious neuroticism surrounding human sexuality is hard to explain except by way of historical social problems that “deviant” behaviors might have led to. Simple xenophobia may have led to prohibitions on miscegenation, but that didn’t stop us from mixing our genes with Neanderthals. Incest is likely the clearest example of an innate taboo, even though it does get violated more frequently than we would like to admit. While the odds of deformed and genetically challenged offspring aren’t as high here as usually assumed, they do climb significantly over generations of inbreeding. Cultural taboos against intra-tribal murder and cannibalism are likely innate in origin.

Linguistic
    Logical operations, at least to a very limited extent, appear to be universal and innate. We all understand equivalence, parts and wholes, particular and general, Boolean operators (and, or, and not), conditionals (if, then; either, or; neither, nor; not), and the four two-term syllogistic premises (all, some, not; the universal and particular affirmative and negative).
    Motherese, or baby talk, as annoying and counterproductive to maturation as it may seem, is well-received by infants and appears to be a universal that facilitates more mature language learning.
    Non-verbal communication includes facial expression, micro-expressions, gestures, mimicry, symbolic gestures and bodily movements, dominance signaling, status signaling, proxemic distancing, and other subliminal cuing. “Nonverbal communication is an elaborate secret code that is written nowhere, known by none, and understood by all” (Edward Sapir). Many higher animals have repertoires of cries, calls, vocalizations, gestures, and postures, and these are so advanced in some species as to make human exceptionalism in animal language research a real embarrassment. Animals far “lower” than primates can use pre-verbal communication skills to warn, intimidate, identify, submit, proclaim, assess, and seduce. Dominance and subordination are frequently signaled by posture and gesture. We are not above this, although goalposts are moved in whatever direction necessary to maintain the exclusive human claim on the image of god. The great apes have a better facility at learning words than grammar, but they are able to combine words into new words, like Koko’s trouble surprise for crazy, or Michael’s insult smell for garlic, or Washoe’s water bird for duck. Nim Chimpsky had a basic grammar for active and passive: me-verb and verb-me. Alex the gray parrot is the only animal known to have asked a question: what color? Questions about native cetacean language won’t be answered until researchers stop looking for human-type language and start building their models on the cetacean sensorium and umwelt. They should perhaps start wondering if cetacean lexemes might more closely resemble sonograms, which could be strung together into stories.
    Phonemic array, range, or repertoire is a human universal, constrained by anatomy. Individual languages use only a portion of the available phonemes, and only a small fraction of the potential number of vocalized words. Potential monosyllables alone number in the tens of millions. Human babies are primed to respond to human vocalizations, if not yet to human speech. Infants may experiment with ranges beyond what they hear, but no early babbling exhausts the range of the International Phonetic Alphabet (IPA). Babbling is quickly dampened to the phonemes heard in the environment.
    Protolanguage posits certain sets of evolved cognitive features that are later exploited by language learned in culture. The lower strata would be elaborate sets of vocalizations and the nonverbal forms of communication genetically related to those found in primates and other related animals. Protosemantics would look at semantic and procedural memory for cognitive processes that support the development of vocabulary, especially schemas, and heuristics such as classification. Metaphors will connect an entity in a source cognitive domain with an entity in another, and provide an opportunity to tie this mapping to a lexeme in the linguistic domain. We have metaphors before we have the lexemes. Learned lexemes, then, might be regarded as associated hyperlinks that are ultimately given semantic realities comparable to sensory and conceptual memories and metaphors.
    Protosyntactics would look at cognitive processes supporting grammatical relationships between lexemes, such as scripts, or causal inference for subject-verb-object relationships, or our procedural and efferent memories to recall what verbs feel like. These pre-linguistic experiences and memories are accessed by language as old-style libraries were accessed by card catalogues. But it’s the experience and the memory, not the word or lexeme, that carries the qualia, dimension, perspective, affect, richness, texture, connotation, and implication. Protolanguage can be inferred from developmental norms and stages in both human children and in language experiments with apes. Roger Brown offered some of the first research in this field, observing some of the thematic relations or theta-roles of linguistic theory, identifying agent-action, action-object, agent-object, action-locative, entity-locative, possessor–possession, entity-attribute, and demonstrative-entity. The structures of young toddler sentences are limited to a small set of grammatical relationships like nomination, recurrence, disappearance, negation-denial, negation-rejection, negation-non-existence, attribution, possession, and agency. The big arguments here are over whether these functions are modules that arise (or arose) concurrently with language, or if language is merely a software exploit of these cognitive processes as native abilities.



3.3 - Emotions and Affective States

Affect Suggesting Approach:
Affection-Affinity, Agency, Anticipation, Appreciation, Commitment, Confidence-Courage, Connectedness, Contentment, Desire, Empathy-Supportiveness, Engagement, Equanimity, Exuberance, Friendship, Happiness, Interest, Love, Pathos, Patience, Playfulness-Humor, Relief, Security-Trust, Surprise

Affect Suggesting Avoidance:
Anger, Anguish, Anxiety, Condescension, Confusion, Cruelty, Defensiveness, Depression, Disappointment, Disconnection, Disgust, Distress, Displeasure, Distrust, Envy, Exhaustion, Failure, Fear, Frustration, Hurt, Insecurity, Irritation, Loneliness, Loss, Mania, Masochism, Neediness, Reticence, Sadness, Shame, Shock, Surrender

    By design, this assortment blends all categories of affect, emotions, feelings, and moods into a single vocabulary. A feeling becomes an emotion when it stimulates or initiates real, imaginary, or cognitive behavior. Many of these states may be thought of more often as characteristics or traits, but their entry here refers to the feeling of the exercise or experience. A few states appear in more than one cluster, usually with different associations to the word. Note, too, how many of these have strong cognitive components, because, as we’ve asserted throughout, affect and cognition can’t really be separated. A portion of the non-English words and phrases have been found in or borrowed from the Positive Lexicography Project and The Dictionary of Obscure Sorrows (see links at the end).
    The basic division here is by the general hedonic tone. This is the Buddhist vedana, the reaction to contact of attraction or aversion, of pleasantness or unpleasantness, wanting more or wanting less, sukha (sweetness, well-being, happiness, or satisfaction) or dukkha (dissatisfaction, discomfort, frustration, or suffering). This is one of the five khandhas, and generally tells us whether to approach or withdraw. The division isn’t perfect, as some of the states normally thought of as unpleasant are actively sought out by some people (like anger, indignation, or surrender), and others usually thought pleasant might be avoided (like commitment, surprise, or tolerance). And sadness was so full of ambivalence and contradiction that it had to be divided into pathos on the plus side and sadness on the minus.

Affect Suggesting Approach:

Affection-Affinity
comfort; cordiality; gemas (Indonesian) regarding as huggable or squeezable; gemütlichkeit (German) togetherness or coziness; gigil (Tagalog) regarding as pinchable or squeezable; gjensynsglede (Norwegian) a happiness of reunion; hygge (Danish) togetherness, coziness; innig (German) heartfelt or dear; metta (Pali) lovingkindness and benevolence; namaste (Hindi) recognition of spiritual resonance with others; pietas (Latin) familial affection and loyalty; retrouvaille (French) rediscovery, finding someone again; reunion; warmheartedness; warmth; xiao (Chinese) filial piety

Agency
arête (Greek) excellence, quality; aútexoúsios (Greek) exercising agency; competence; dignity; ikigai (Japanese) meaning or purpose in life; jeitu (Portuguese) having the knack; self-mastery; skillfulness; sophrosyne (Greek) excellence of character and mind; sovereignty

Anticipation
awaiting justice; desire; eagerness; enthrallment; herzklopfen (German) heart-knock, a pounding heart; hope; iktsuarpok (Inuit) anticipation of visitors; inspiration; lust; optimism; Þetta reddast (Icelandic) reassurance that things will work out; readiness; vorfreude (German) joyful anticipation; wishfulness; zeal

Appreciation
admiration; adoration; awe; being moved; being touched; byt (Russian) ordinariness, everyday existence; conscientiousness; degrassé (Neologism) response to the vastness of existence; elevation; force majeure (French) sense of superior force; geworfenheit (German) involuntariness of context; goya (Urdu) suppose it were, suspension of disbelief; gratitude; hozho (Diné) the beauty way; inuuqatigiittiarniq (Inuit) intercultural respectfulness; kanso (Japanese) elegant simplicity; mahalo (Hawaiian) thanks, gratitude, admiration, praise, respect; muraqaba (Arabic, Sufi) wakefulness, heedfulness; relish; respect; reverence; richness; savoring; shibumi (Japanese) subtle, effortless beauty; shizen (Japanese) naturalness; thankfulness; ukiyo (Japanese) living in transient moments of fleeting beauty; víðsýni (Icelandic) panoramic view, open-mindedness; vipassanā (Pali) equilibrated observation and insight; xibipíío (Piraha) subtlety of liminality or threshold perception; yugen (Japanese) impenetrable mystery

Commitment
appamada (Pali) conscientious diligence, concern, care; avowal; engagement; giri (Japanese) debt of honor, sense of obligation; noblesse oblige (French) noble obligation; on (Japanese) moral indebtedness; pundonor (Spanish) committed to honor, dignity, self-respect; ren (Chinese) benevolent or altruistic; satyagraha (Gandhi’s neologism) being or holding true; startijenn (Breton) a boost of energy; steadfastness; stubbornness; tikkun (Hebrew) commitment to repair of the world; valar dohaeris (High Valyrian) all men must serve; viriya (Pali) energy, effort, zeal

Confidence-Courage
andreia (Greek) courage, valor; candor; determination; empowerment; engagement; etterpåklokskap (Norwegian) afterwisdom, hindsight; grit; having heart; memento mori (Latin) a re-minding of death; oikeiôsis (Greek) sense of appropriation or ownership; overskud (Danish) with energy to spare; parrhesia (Greek) outspokenness; pluck; pride; self-confidence; self-esteem; self-respect; sisu (Finnish) extraordinary courage against adversity; valor

Connectedness
aloha (Hawaiian) breath of presence, hello and goodbye; communion; dadirri (Aust. Aboriginal) tuning in, deep listening, still awareness; dhikr (Arabic, Sufi) remembrance of unity, absorption; enthrallment; fanaa (Arabic, Sufi) annihilation, overcoming ego, getting over yourself; garava (Pali) reverence or deep respect; maadoittuminen (Finnish) grounding, earthing; nibbana (Pali) quenching of self and its incompleteness; occhiolism (Neologism) awareness of the smallness of your perspective; rapture; redemption; relatedness; sonder (Neologism) sense of strangers having vivid lives of their own; tenalach (Irish) deep connection and relationship with the land-air-water; ubuntu (Zulu) commonality of human bonds; waldeinsamkeit (German) connectedness to nature and forest, solitude

Contentment
adosa (Pali) non-aversion, absence of antipathy; alobha (Pali) absence of greed, non-attachment, generosity; ambientamento (Italian) settling in; anatta (Pali) non-self, the absence of a fixed self (can also be terrifying); belonging; calm; comfort; composure; ease; feeling at home; feeling fortunate; fulfillment; gratification; gratitude; harmony; humility; inner peace; magnanimity; mellowness; patience; placidity; relaxation; satisfaction; tranquility

Desire
appetite; ardor; attraction; avidity; epithymia (Greek) sexual passion; eros (Greek) sexuality or sexual love; hunger; kilig (Tagalog) exhilaration or butterflies in attraction; kuai le (Chinese) quick pleasure, hedonic happiness; lust; passion; pleasure; resistance to deferring gratification; thirst

Empathy-Supportiveness
ahimsa (Sanskrit) commitment to doing no harm; altruism; approval; caring; charity; commiseration; compassion; concern; decency; empathy; encouragement; fremdschämen (German) embarrassment for another; generosity of spirit; gunnen (Dutch) congratulatory support for reward; jatorra (Basque) genuine, truthful, and agreeable; kalyana-mittata (Pali) advantageous friendship; karuna (Pali) empathy or compassion; kindness; kreng-jai (Thai) deferential consideration; melmastia (Pashto) hospitality and sanctuary; mudita (Pali) supportive joy, in another’s success or well-being; myötähäpeä (Finnish) embarrassment for another; naches (Yiddish) pride in another’s success, especially intimates; omoiyari (Japanese) intuitive understanding of another’s desires; pena ajena (Spanish) embarrassment for another; pity; reciprocity; respect; solidarity; sympathy; taarradhin (Arabic) win-win agreement

Engagement
absorption; aiki (Japanese) harmonizing of energies; captivation; commitment; enchantment; engrossment; enthusiasm; fascination; fingerspitzengefühl (German) fingertip feeling, the touch; flow; in the zone; jhana (Pali) absorbed concentration, origin of the word Zen; kairos (Greek) sense of the opportune moment; preoccupation; ramé (Balinese) something at once chaotic and joyful; sabsung (Thai) being revitalized or enlivened; spontaneity; upaya (Pali) attracted engagement, skillfulness; versenkung (German) immersion in a task; ziran (Chinese) spontaneity, not doing, moving like water

Equanimity
aplomb; ataraxia (Greek) robust, lucid, and serene tranquility; balance; clarity; composure; equilibration; eunoia (Greek) beautiful thinking of a well mind; imperturbability; lotność umysłu (Polish) buoyant mind, vivacious and sharp; poise; resilience; sangfroid; serenity; stehaufmännchen (German) bouncing back, recovery; upekka (Pali) equanimity, detachment; zanshin (Japanese) enduring mind, imperturbability

Exuberance
abandon; being overjoyed; brio; briskness; buoyancy; curglaff (Scottish) bracing effect of cold water; ebullience; ecstasy; effervescence; ekstasis (Greek) standing beside oneself; enlivened; euphoria; exhilaration; fervor; friskiness; frisson (French) combines thrill, shiver, excitement, a little fear; ignition; jubilation; quickening; refreshment; switched on; tarab (Arabic) swept up in music; thrill; triumph; vitality; zest

Friendship
accord; affiliation; agreement; amicability; benevolence; brotherly love; camaraderie; comity; concord; fraternity; friendliness; gadugi (Cherokee) cooperative spirit; good will; philia (Greek) friendship of familiarity; philoxenia (Greek) welcome for strangers and guests; platonic love; rapport; sociality; solidarity; xenia (Greek) friendship for strangers and guests

Happiness
baraka (Arabic) being blessed, sanctified; bliss; cheer; cheerfulness; contentment; delight; elation; enjoyment; enthrallment; eudaimonia (Greek) happiness, good spirit, human flourishing; euphoria; fun; gaiety; gladness; glee; gratification; gratitude; high spirits; joie de vivre (French) joy of living; joy; pleasure; satisfaction; sukha (Pali and Sanskrit) pleasantness, blessedness, ease; well-being

Interest
arousal; being charmed; being impressed; being intrigued; captivation; curiosity; déjà vu (French) feeling the new as if familiar; engrossment; enthusiasm; excitement; fascination; inquisitiveness; intrigue; jamais vu (French) feeling the familiar as if for the first time; mystery; ostranenie (Russian) defamiliarization, common things seem unfamiliar; passion; proairesis (Greek) considered choice or preference; seeking; shoshin (Japanese) original intention, beginner’s mind; vigilance; wanderlust; wonder

Love
adoration; affection; agape (Greek) selfless, unconditional love; amor; amorousness; amour fou (French) mad or crazy love; appreciation; attraction; bhakti (Sanskrit) devotional love; caring; coup de foudre (French) lightning bolt, at first sight; devotion; empathy; enchantment; fondness; infatuation; maternal care; mudita (Pali) supportive joy, in another’s success or well-being; naz (Urdu) assurance or confidence in love; respect; sentimentality; support; sympathy; tenderness; veneration; warmheartedness

Pathos
bittersweetness; charmolypi (Greek) sweet or joyful sorrow; fernweh (German) call of faraway places, longing for the unknown; longing; mono no aware (Japanese) sense of the pathos and transience of things; nostalgia; onsra (Boro) to love for the last time; pining; plaintiveness; poignancy; sadness we often seek to feel; saudade (Portuguese) longing for the absent, the love that remains; sirva vigad (Hungarian) intermingling of joy and sorrow; wabi-sabi (Japanese) acceptance of transience and imperfection; weemoed (Dutch) courage in overcoming sorrow; wistfulness; yearning

Patience
assurance; calm; centeredness; composure; endurance; engelengeduld (Dutch) angelic patience, forbearance; ilunga (Tshiluba) tolerance limited to third strike; khanti (Pali) patience; laissez-faire (French) to let things be; peace; poise; provisional acceptance or resignation; resignation; restraint; self-possession; sitzfleisch (German) sitting flesh, enduring boredom; stoicism; tolerance

Playfulness-Humor
amusement; caprice; cheek; cockiness; delight; desbundar (Portuguese) exceeding one’s limits; erleichda (Neologism) lightening up, from Jitterbug Perfume; exuberance; friskiness; frolicsomeness; funniness as a precursor to humor; hilarity; humor; impishness; jocularity; joviality; laughter; mischievousness; paixnidi (Greek) game-playing love (ludus in Latin); sauciness; silliness; waggishness; wai-wai (Japanese) the sound of children playing; whimsy

Relief
alleviation; balm; consolation; datsuzoku (Japanese) shedding worldliness and habit; deliverance; desengaño (Spanish) awakening from enchantment or deceit; forgiveness; freedom from anxiety; freedom from fear; freedom from pain; fukkit (Neologism) liberation from a task or idea; hoʻoponopono (Hawaiian) reconciliation and restitution; non-attachment; reconciliation; renunciation; reprieve; respite; salvation; strikhedonia (Neologism) joy of saying to hell with it; sturmfrei (German) storm free, unassailable solitude; succor

Security-Trust
assurance; benefit of doubt; bondedness; certainty; confidence; conviction; dependence; faith; honesty; integrity, as continuity of character, non-duplicity; loyalty; mana whenua (Maori) power of exemplary moral authority; promise; querencia (Spanish) security and strength from place; reliability; reliance; responsibility; s’apprivoiser (French) to tame, learning trust and acceptance; saddha (Pali) faith, confidence, and confiding; soundness; surety; turangawaewae (Maori) holdfast, a place to stand

Surprise
amazement; astonishment; awe; balikwas (Tagalog) starting, jumping to one’s feet; disclosure; discovery; distraction; epiphany; eureka; excitement; exposure; impact; revelation; startledness; thunderbolt; treat; unveiling; wonder

Affect Suggesting Avoidance:

Anger
acrimony; animosity; annoyance; antipathy; enmity; fury; hatred; hostility; indignation at betrayal; indignation at unfairness; indignity; insult; offense; outrage; rage; rancor; spite; wrath

Anguish
agony; angst; devastation; existential nausea; grief; heartbreak; hurt; litost (Czech) self-reflective misery; misery; pain; queasiness; suffering; torment; woe; wretchedness

Anxiety
anticipatory anxiety; anxiety about having anxiety, preemptive behavior, default conformity; apprehension; butterflies; concern; disquiet; doubt; dread; edginess; fear of anxiety; foreboding; free floating anxiety; insecurity; jitters; misgiving; nervousness; paranoia; tenseness; tension; torschlusspanik (German) gate-closing panic, fear of diminishing opportunities; uncertainty; uneasiness; upset; worry

Condescension
arrogance; belittling; carping; complacency; contempt; cynicism; derision; disdain; disgust; disparagement; disrespect; effrontery; gloating; haughtiness; insolence; invidiousness; loathing; mockery; rudeness; scorn; smugness

Confusion
ambiguity; bafflement; befuddlement; bemusement; bewilderment; daze; disbelief; discombobulation; disorientation; dissonance; dumbfoundedness; fluster; hysteria; koyaanisqatsi (Hopi) life out of balance; muddledness; panic; perplexity; turmoil; uncertainty

Cruelty
barbarity; bloodthirst; brutality; callousness; cold bloodedness; coldness; harshness; heartlessness; inhumanity; malice; malignity; persecution; revenge; ruthlessness; sadism; schadenfreude; spite; vengeance; vengefulness; venom

Defensiveness
acting out; aversion; being overprotective; being refractory; belligerence; carrying a chip; combativeness; competitiveness; contrariness; defiance; denial; hostility; inat (Serbian) proud defiance, sometimes to survive; jealousy; obstinacy; passive aggression; refractoriness; reluctance; stubbornness; truculence; willkür (German) willfulness, arbitrariness

Depression (distinct from sadness, more mood than emotion)
bleakness; degradation; dejection; demoralization; desolation; despair; despondency; disconsolation; downheartedness; ennui; existential despair; gloom; hopelessness; indifference; lethargy; listlessness; melancholy; pessimism; sadness without an object; slough of despond; weltschmerz (German) world-weariness; woefulness

Disappointment (the result of appointment or expectation)
being shot down; defeat; desengaño (Spanish) disenchantment; discontent; discouragement; disenchantment, result of incantation; disillusionment, result of illusion; dismay; dissatisfaction; failure; frustration; impasse; letdown; lost confidence; lost hope; setback

Disconnection
alienation; anhedonia; apathy; boredom; detachment; disengagement; disharmony; disinterest; dissociation; dissonance; disorder; doldrums; ennui; estrangement; incoherence; indifference; negation; numbness; resignation; separation; stagnation; tedium

Disgust (can have both hygienic and moral senses)
abhorrence; aversion; condescension; contempt; derision; disapproval; disapproval of norm violations; disdain; dislike; disparagement; distaste; loathing; nausea; repugnance; resentment of free riders & parasites; revulsion; scorn; surfeit

Distress
affliction; agony; anguish; being startled; discomfort; disease; dismay; alarming homeostatic signals; misery; pain; shock; suffering; torment; torture; trauma; vexation; wretchedness

Displeasure
aversion; chagrin; disapproval; dislike; disgruntlement; dissatisfaction; ingratitude; irritation; novaturience (neologism) desiring powerful change; pique; provocation; resentment; umbrage; unhappiness; vexation

Distrust
apprehension; caution; cynicism; disbelief; disquiet; doubt; hesitancy; incredulity; indecision; lack of faith; misgiving; mistrust; perplexity; qualm; scruples; skepticism; suspicion; wariness; withdrawal

Envy
backbiting; begrudging; covetousness; greed; jaundiced eye; jealousy; resentfulness; rivalry; spite

Exhaustion
akrasia (Greek) weakness of will; being beset; being drained; being overwhelmed; burnout; collapse; consumption; debilitation; depletion; ebb; enervation; enfeeblement; engentado (Spanish) worn out, socially overloaded; fatigue; frailty; indolence; jadedness; languor; lethargy; listlessness; melancholia; sloth; sluggishness; torpor; weariness

Failure
awkwardness; clumsiness; debacle; defeat; deficiency; embarrassment; fallenness; frustration; gracelessness; impotence; inadequacy; incompetence; inefficacy; ineptitude; insufficiency; stupidity

Fear
alarm; apprehension; consternation; dread; fear of aggression; fright; horror; intimidation; panic; phobia; superstition; terror; worry

Frustration
agitation; aggravation; annoyance; disappointment; discontent; disgruntlement; displeasure; dissatisfaction; dissonance; dismay; exasperation; grievance; impotence; irritability; irritation; nuisance; obstruction

Hurt
abandonment; anguish; betrayal of trust; bitterness; being burned; being deceived; grievance; grudge; hard feelings; heartbreak; infidelity; injury; insult; offense; petulance; resentment; slight; trauma; woundedness

Insecurity
akrasia (Greek) weakness of will; anxiety; clinginess; dependency; desperation; doubt; fragility; hesitancy; imperilment; indecision; indigence; instability; jealousy; jing shu (Chinese) fear of losing out; passivity; possessiveness; self-doubt; threat of loss of investment; uncertainty; vacillation; vulnerability

Irritation
agitation; aggravation; annoyance; being distraught; crankiness; hypersensitivity; impatience; irksomeness; irritability; nails on blackboard; peevishness; restlessness; skritchets (Neologism) my coinage, means what it sounds like; stress; tiresomeness; upset

Loneliness
alienation; abandonment; barrenness; bleakness; dejection; desolation; forlornness; heartache; inferiority complex; isolation; kaiho (Finnish) hopeless longing in involuntary solitude; neglect; rejection; separation; separation anxiety; unworthiness

Loss
bereavement; calamity; defeat; despair; destitution; disadvantage; dismay; grief; heartache; misfortune; mourning; privation; ruin

Mania
amplification; catastrophizing; compulsion; delirium; dementia; derangement; eleutheromania (Greek) mania or frantic zeal for freedom; emotional hijacking; exaggeration; frenzy; furor; hyperbole; hysteria; infatuation; obsession; vividness effect

Masochism
blame mentality; disease mentality; helplessness; hypochondria; inanimacy; passive aggression; passivity; self-flagellation; servility; servitude; subservience; suffering; victim mentality

Neediness (emotional)
amae (Japanese) behaving to supplicate for love or indulgence; attention seeking; beggary; being pitiful or pitiable; bellyaching; complaint; craving; fawning; griping; hunger for affection; insecurity; plaintiveness; sycophancy; whimpering; whininess; wretchedness

Reticence
awkwardness; circumspection; diffidence; guardedness; hesitation; low self-esteem; passivity; reluctance; self-consciousness; shyness; stage fright; suspicion; tentativeness; timidity; wariness

Sadness
cheerlessness; dejection; despair; despondency; dysphoria; forlornness; gloominess; hiraeth (Welsh) sick for a home to which you can’t return, or that never was; heartache; heartbreak; homesickness; hopelessness; incompleteness; melancholy; mournfulness; pensiveness; sehnsucht (German) inconsolable longing, pining, yearning; self-pity; sorrow

Shame
abasement; abashment; chagrin; contrition; degradation; embarrassment; dishonor; disrepute; guilt; guiltiness; hiri (Pali) moral shame, disgust with evil; humiliation; ignominy; inadequacy; morkkis (Finnish) moral or psychological hangover; mortification; ottappa (Pali) fear of evil consequence, moral dread; regret; remorse; self-reproach; stigmatization

Shock
being appalled; being rattled; being unnerved; confusion; consternation; dissociation; dread (beginning of experience); force majeure (French) sense of superior force; horror (aftermath of experience); overwhelm; stupefaction; trauma

Surrender
abandonment; abdication; acquiescence; capitulation; complaisance; concession; defeat; giving up; helplessness; knuckling under; being a loser; quitting; relinquishment; renunciation; resignation; submission; succumbing




3.4 - Cognitive Biases

Anticognitives Sorted by Domain: Accommodating, Situational, Emotional, Personal, Social

“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate. ... And such is the way of all superstitions, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled.”
Francis Bacon, Novum Organum, 1620.

    The term Cognitive Bias was first proposed by Amos Tversky and Daniel Kahneman in 1972. It refers to our motivated deviations from dispassionate, rational, objective, logical, and often more time-consuming judgments. These are normally unconscious and driven by prior learning, both cognitive and emotional. The relative universality of these biases in humans, indicated by the widespread agreement on names for specific processes, suggests they may be functions or side effects of evolved heuristics, and at least some appear to have adaptive value in reducing cognitive load and shortening our decision times. Cognitive biases function in several (but not all) of the anticognitive domains developed here. These biases will always carry some sense of personal relevance, but when they operate in the emotional, intrapersonal, and social domains, they are often more highly charged with emotion, and the result is then referred to as “hot cognition.” Tversky and Kahneman (1974) suggest that not all cognitive biases are motivated by “wishful thinking or the distortion of judgments by payoffs and penalties.” Some simply stem from over-reliance on heuristics, and here they identify cooler sources in the representativeness and availability heuristics and in adjustments from anchoring. I can’t concur. These particular heuristic “biases” are treated here as simple errors in the native domain. The word bias is reserved here either for resistance to change due to the inertia of accumulated apperceptive mass, or else for our more motivated, heated, or defensive misperceptions.
    The biases are indexed below by their most common domains of operation. Within these domains, they may have several functions, particularly allowing users to: 1) take shortcuts and make snappier decisions, 2) inject personal relevance or meaning into a situation, 3) supplement an insufficiency of information, and 4) simplify an excess of information. The Sensorimotor and Native domains have limitations rather than biases, while the Cultural and Linguistic domains are more concerned with Logical Fallacies.
    In the Accommodating domain, the mind will interpret its environment to fit its pre-existing framework, and will try to resist any and all changes from puny to paradigmatic, in large part due to prior investments made in learning. This resistance is a function of apperceptive inertia or mass. It also concerns the functional competence of working memory. The Situational domain concerns problem solving and simple stress. This, too, involves working memory, as well as implicit or procedural memory. Given the association with stress, some anticognitives called biases should be and have been reassigned to Coping Strategies. The Emotional domain may fiddle with the values of situations and their outcomes. The Personal domain will attempt to shore up self-schemas and scripts and personal confidence. As such, some anticognitives called biases should be reassigned to Defense Mechanisms. The Social domain sees biases deployed to maintain conformity, belonging, and status, which often entails the distortion of perspectives on in-groups and out-groups, and our places within them.
    Like native heuristics, these biases are perceptual and cognitive shortcuts that spare us pondering and agonizing when our urges are urging us on, or when the beast of time is snapping at our heels. We aren’t really built for rational thought. And we never were. We had more pressing concerns as we evolved, and many, the ones that led to natural selection, left us little or no time for pondering. Feeling confident, collecting allies, convincing others, and winning arguments were often more important than being right. Neither does intelligence always come out on top. As Michael Shermer noticed, “Smarter people are better at rationalizing bad ideas.”
    About the best known of these processes is the confirmation bias, an inclination to cherry-pick information that confirms our preconceptions. The self-serving bias is another big one: our inclination to remember our successes and forget our failures. The scariest is the bandwagon bias, the force that holds Nazis and other hordes together. Neither does it always matter that a biased opinion be supported by a majority: siding with a minority can also confer a sense of specialness.
    To my limited knowledge, nobody has yet taken a comprehensive look at anticognitives that integrates cognitive biases with coping strategies, defense mechanisms, and logical fallacies. For the most part, these four sets appear to have evolved independently with little cross-communication. Vasco Correia has at least made a beginning of this with “Biases and Fallacies: the Role of Motivated Irrationality in Fallacious Reasoning” (2011). Several of our anticognitives have made appearances on two or more of these lists, and some, like cognitive dissonance, are underlying processes or stressors fundamental to them all. Lists of cognitive biases are easy enough to find, but they vary widely. Many lists conflate the biases with native heuristics like pareidolia, or the clustering illusion, but these really don’t belong in the same category. There exists no inherent momentum or motivation to distort in the heuristics: they are merely imperfect. Tversky and Kahneman (1974) made some inroads into associating a dozen or so of the less or least motivated cognitive biases with three of our evolved heuristics (representativeness, availability, and adjustments from anchoring). But this was only done with respect to estimates of probability, a primary function of cognition. Additional domain assignments for the biases are given here, and a conceptual distance between heuristics and biases will be maintained.
    Since biases normally find a more unconscious expression, modifying them requires a learning curve and feedback into the unconscious via re-memory, starting with training in the recognition of specific biases when they arise. Just as hypocrisy is invisible to hypocrites, there is nothing already inherent in impaired judgment that can detect impaired judgment. These partialities, presumptions, and prejudices must be detected from outside their boundaries, or from a larger frame of reference.

Accommodating
    Availability bias (or availability cascade) overweights information that’s already available, presuming that familiar information must be true, or else it wouldn’t be so familiar. This can also favor recent experience still fresh in memory, particularly when still emotionally salient. Paradoxically, repeated attempts to challenge or refute a familiar bit of information might strengthen its hold due to repetition. Processing the familiar carries a lighter cognitive load and comes more readily to mind. This is also called the illusion-of-truth effect. Oft-repeated platitudes, no matter how vapid or vacuous, are assumed to be true, at least until one develops an aversion to hearing them.
    Belief bias drives evaluation of the strength of an argument or its premises based on prior beliefs about its conclusion. This is also called the Semmelweis effect or reflex, including automatic rejection of new evidence contradicting established beliefs or norms.
    Bias blind-spot is the inability to see the operation of our own cognitive biases, even when we have the ability to see them in others. An example is the inability of hypocrites to see their own hypocrisy, a phenomenon also driven by cognitive dissonance and a need to partition off conflicting information.
    Cognitive distortion is a general term, referring to exaggerated and irrational thoughts given their structure by logical fallacies and force by maladaptive emotions.
    Confabulation or false memory syndrome is a creative function of memory used to infill lacunae or gaps in memory with imagined experience. It’s like Spackle and Bondo for the memory. As Lewis Carroll noted, “It’s a poor sort of memory that only works backward.”
    Confirmation bias is the best known and most common bias. We’re inclined to seek confirmation of things we’ve thought and felt, and to refute things we haven’t. Cognitive construction is an investment of work and energy. It’s lots of work ripping out a partition you’ve spent a long time building, especially where it might be a bearing wall, or where the utilities are interconnected with what must remain. We attend to and accept whatever confirms our preconceptions, and we must ignore or dismiss whatever contradicts them. Michael Faraday (1791-1867) summed this up: “The force of the temptation which urges us to seek for such evidence and appearances as are in favor of our desires, and to disregard those which oppose them, is wonderfully great. In this respect we are all, more or less, active promoters of error. In place of practicing wholesome self-abnegation, we ever make the wish the father to the thought: we receive as friendly that which agrees with, we resist with dislike that which opposes us; whereas the very reverse is required by every dictate of common sense.” We will even receive a little dopamine attaboy whenever we confirm our own rightness. The media has evolved to cater to this by providing specialized news and programming to target demographics, cherry-picking data, and referencing only a limited universe of discourse.
    Congruence bias is the concentration and over-reliance on testing a given or pet hypothesis and a corresponding failure to consider alternatives or let them prove themselves.
    Continued influence effect is the persistence of misinformation previously learned after it has been corrected. Errors don’t always come out by their associative roots and can linger with other connections. The mind gives it second and third chances. Extinction of learning is more of a process than a simple erasure. There is a preference for information acquired earlier, which is consistent with having more acquired associations.
    Cryptomnesia is a form of misattribution whereby a recollection reemerges in the guise of a new idea. It could be a legitimate excuse for plagiarism, but there’s Google now.
    Familiarity backfire effect is a close cousin to the availability bias. In trying to refute a common piece of misinformation, it only gets repeated and thereby gets reinforced.
    Framing bias relies on too narrow a range of available frames of reference or contexts. Perceptions may be manipulated by others simply by limiting the available frames, controlling the universe of discourse, or constraining a dialog to talking points. This includes phenomena like déformation professionnelle, seeing only according to one’s own professional lens.
    Hindsight bias is the retrospective assumption that completed events were more predictable than they were. It’s also called the “I-knew-it-all-along effect.” This will incorrectly remember earlier states of mind based on knowledge of how things really turned out. A variant, called the historian’s fallacy, assumes that decision makers of the past had access to information that may only now be available. Hindsight bias leads many to believe that there are no coincidences, that everything happens for a reason, because things will normally work out. Nietzsche counters that with: “A loss rarely remains a loss for an hour,” citing life’s opportunism as a better explanation.
    Information bias is a presumption that more information is always better. There may be a tendency to prefer quantity over quality and not vet signals for mere noise. Sometimes decisions can be best made with simplified, summarized, or gist information. The prolonged collection of information may also delay decisions beyond their optimal timing.
    Leading question, or suggestive interrogation, is an inauthentic inquiry with the motive to obtain a specific result, or direct the interrogated to a desired answer. Questions may narrow the universe of discourse, excluding potentially promising but unwanted answers.
    Loaded question is a form of inquiry that contains implicit assumptions that themselves ought to be questioned. This is the epistemic side of the logical fallacy of many questions or plurium interrogationum. A common example would be an inquiry into details of the historical life of Jesus, while assuming it to be established that this was an historical person.
    Magical thinking is the attribution of causal correlations or connections between phenomena or things which have no real causal relationship. This has roots in native heuristics that look for meaning in simultaneous or serial events, and it can be incorporated into memory with single-trial learning. Superstition is a common example, and the overuse of Jung’s notion of synchronicity is another. Up a level higher in abstraction, many real-world systems operate in patterns that can be compared by analogy. Magical thinking can seize upon this and take the analogy as the underlying reality, rather than an overlain abstraction, so that producing an effect in one part of one system is thought to magically reproduce the same effect in the corresponding part of another system. To simply use the analogy to inform or suggest new ideas is known as correlative thinking. This may or may not get involved in magical thought.
    Narrative bias is a preference for learning new material in meaningful linear sequences, as if experienced in real life. It has its roots in native heuristics, where the preference is for demonstration, or a simple, coherent sequence of experiences, even before linguistic content is added. The narrative bias is also observable in dreams, where random recollections are stitched together into sequences by inserting interpolative connections and confabulations. It’s also active in sleep paralysis, where stories must be told, as of a succubus or an alien abduction. This is the bias that gives legend, fable, and mythology so much emotional power compared to rational explanation, even when a myth might make no rational sense.
    Novelty bias, or novelty effect, is the tendency of novel experience to draw both attention and interest disproportionately to the familiar. Stress response is also greater with unfamiliar threat suggestion. This behaves analogously to a homeostatic mechanism, operating until the novel has become the familiar, emptied of mysteriousness, reduced to “been there, done that.” It’s an evolved prompting for the organism to learn more about (or map) the environment, with roots in native heuristics and learned methods of implementation. But this does not mean that new information will be favored over old.
    Observational selection bias comes to life when we suddenly start noticing something that we’ve recently learned about, as though the frequency of its appearance has increased from zero. We are primed to see this as a new or recent phenomenon until it becomes familiar, and in this it’s related to the novelty bias. It’s also called the frequency illusion and the Baader-Meinhof Phenomenon. This is different from the “selection bias” of non-random sampling.
    Overkill backfire effect is a defensive response to a complex attempt to correct a simple error. The simple myth or explanation will be perceived as preferable to a correction that entails a greater cognitive load. One may also suspect the critic of protesting too much. Generally, the fewer and simpler the counter-arguments, the better. Scientific explanations may be furiously rejected in favor of a simpleminded myth.
    Self-evident truth is the illusion that a proposition proves itself with no need for further evidence. The basis for the claim, however, is still found in human cognitive processes ranging from naive realism to unexamined assumptions, and not in a sufficiency of evidence provided for the proposition itself.
    Selection bias is the collection of non-representative samples of data from a target population. We will find what we are looking for.
    Sorting error is the placement of an object or experience into an incorrect category, or else the use of a flawed set of categories. Further, as Vincent Sarich offers, “we can easily forget that categories do not have to be discrete. If this were not so, then why should the notion of ‘fuzzy sets’ have been seen as so revolutionarily productive?” The human races are fuzzy sets in this sense, distribution curves around a central normative value, but with some deviations bleeding far into other sets. But this is not a reason to reject categorization as such. Instead, we merely qualify what the limitations of the categories are. We remember that reality isn’t bound by our categories. Categories can limit what we are able to perceive if we fail to develop alternative strategies for stepping out of our taxonomic boxes and biases.
    Status-quo bias, or conservatism bias, is resistance to new ideas or evidence that depart significantly from the established or familiar. It’s also the inability to learn from the mistakes of history or past generations because the traditions are seen as tested or proven by time. It’s also called the system justification bias. Peer review can include a subset of this bias. The is-ought problem is sometimes answered by this bias. The statement, “If it ain’t broke, don’t fix it” also belongs here. And the oxymoron of “liberal institutions.” The assumption is that change, being untested, will likely be inferior, or will likely make things worse. Systems evolve to be self-sustaining, and sometimes they can recruit human cognition to accomplish this.
    Superstition bias combines confirmation bias and magical thinking. This is learned, often through hearsay, although individual superstitions can be picked up via single-trial learning, as with lucky undergarments that contribute to the championship win. It’s distinct from innate phobias, as of snakes or spiders.
    Survivorship bias is the assignment of greater worth or value to the survivors of a selection process, or the winners of a competition, and the associated neglect or devaluation of the losers, in part due to lessened visibility. That the winner writes the history is one example. Survival of the fittest is another example, although Herbert Spencer’s term “fittest” is almost universally misunderstood in terms of conquest, instead of adaptation, or fitting in. Dismissal of the role of bad luck or simple accident is a frequent source of error here.
    Unstated assumptions are often held subliminally or unconsciously, and may even have been absorbed and incorporated into our cognitive processes before we had the ability to critique them.
    Verbatim effect is the inclination to remember the meaning or the gist of an experience, packet of information, or sentence better than we remember its precise form. Memories aren’t duplicate copies of inputs or experiences, but representations of them.

Situational (See Coping Strategies, too)
    Ambiguity effect is a preference for known quantities over probabilities and likelihoods, even when a better risk assessment strongly favors the fuzzier choice. It’s a form of risk aversion. It might help keep rational people out of the casinos and away from lotteries.
    Anchoring effect is an over-reliance on the first piece of information that gets presented, the lead, or the clickbait headline. It offers an arbitrary starting point that interferes with subsequent big picture comprehension and is often used in persuasive speech like advertising. This is a bias version of the native heuristic called the priming effect. It’s also called insufficient adjustment.
    Attentional bias is simply distraction, or being distracted, being pulled away from the present by recurring or prepossessing thoughts.
    Context effect is a temporary deficit in memory recall when outside of the memory’s original contexts. Also called cue-dependent forgetting, or retrieval failure, it shows the usefulness of present memory cues.
    Current moment bias is a favoring of the present that will interfere with deferring gratification, or lead to hyperbolic discounting of both future rewards and negative consequences. Immediate payoffs are preferred, regardless of projected future value. The present is a known quantity, a bird in the hand, while the future is hard work to imagine or predict.
    Decoy effect, or asymmetric dominance effect, will alter choice behavior when a predominance of choices are similar. A third option that’s presented as a decoy may increase preference for the option it most resembles.
    Distinction bias will view two choices as more distinctive in comparison than when examined alone. The direct comparison highlights the particulars or uniquenesses.
    Dynamic inconsistency is a failure to see preferences and other variables changing or evolving over time, particularly in projecting future outcomes. Temporal framing or time horizons often need extending to perceive change.
    Effort heuristic is the one-dimensional mental correlation of the value of something with the amount of effort expended in acquiring it. Things that play hard-to-get appear to be worth more than others that offer themselves gratis or pro bono.
    Einstellung effect refers to an installed, established, or habitual problem-solving approach being preferred, where better approaches exist, and may be known, but get ignored. This is just how things are done.
    Expectation bias is the tendency for expectations to affect perception and behavior. It’s also called the observer-expectancy effect, subject-expectancy effect, selective perception bias, or most commonly, self-fulfilling prophecy. A researcher or experimenter may expect a certain result and unwittingly tailor an experiment to produce that result, or notice only results that fail to falsify the hypothesis.
    Focusing effect, or focusing illusion, occurs when we place an excessive importance on one aspect of an event, or subset of a category, at the moment it occupies our attention. The object of focus is rarely as important in the greater scheme of things as it is at that moment. You can’t see that forest.
    Gambler’s fallacy is the expectation that independent events are “due” to reverse a streak, while its mirror image, the hot-hand bias, expects streaks of winning or losing to continue. Both defy probability by assuming that previous events that are statistical rather than causal will influence future events. It’s also a logical fallacy when used in argument.
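    A few lines of simulation make the independence plain. In this sketch (illustrative only, with an arbitrary seed), a fair coin is flipped a million times, and the frequency of heads immediately after a run of five heads stays near one half, whatever the streak seems to promise:

        import random

        # Illustrative sketch: estimate P(heads | previous five flips were heads).
        random.seed(42)
        flips = [random.random() < 0.5 for _ in range(1_000_000)]
        after_streak = [flips[i] for i in range(5, len(flips))
                        if all(flips[i - 5:i])]
        print(sum(after_streak) / len(after_streak))
        # Prints a value near 0.5, not the "due for tails" the fallacy expects,
        # and not the hot hand's "heads will keep coming" either.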
    Hyperbolic discounting is a preference for the immediate results or rewards relative to the later, or discounting the value of the later results and rewards in proportion to the length of delay. This is also called a time horizon bias where it favors short-term issues and decision criteria over the long-term.
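    The standard one-parameter model here, usually credited to Mazur, values a reward of amount A delayed by time D as V = A / (1 + kD), a curve that falls off far more steeply at short delays than the exponential discounting a consistent planner would use. A small sketch (the dollar amounts and the constant k are arbitrary) shows the signature preference reversal, another face of the dynamic inconsistency above:

        # Illustrative sketch: hyperbolic discounting and preference reversal.
        def hyperbolic(amount, delay, k=0.5):
            return amount / (1 + k * delay)

        # Choice: $50 at delay d, or $100 at delay d + 5.
        for d in (0, 10):
            soon, later = hyperbolic(50, d), hyperbolic(100, d + 5)
            pick = "take the $50" if soon > later else "wait for the $100"
            print(d, round(soon, 1), round(later, 1), pick)
        # At d=0 the near $50 wins (50.0 vs 28.6); push both rewards ten units
        # out and the same chooser prefers the $100 (8.3 vs 11.8).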
    Illusory correlation is the assumption of meaningful or causal relationships between two events that may be related only by coincidence. The two may be either simultaneous or sequential. Sometimes a third factor can be in play and acting as cause for both. This can span several domains. It has roots in native heuristics and has a pathological expression in paranoia. The illusion found some articulation in Jung’s notion of synchronicity, but subsequent readers carried the idea into absurdity. It’s also overextended into logical fallacies of causation.
    Inconsistency bias is a tendency to employ different metrics and evaluative criteria for different sides of the same question. One side of an issue might be addressed in best-case-scenarios while the other is seen in worst-case. This is frequently seen in hypocrisy, and most obviously in politicians.
    Insensitivity to sample size will pay too little attention to the scope of data sampling. Variation from norms is more likely in smaller samples and may seem unexpected, striking, or highly significant. This applies to statistics over time as well, where the daily newspaper might report “crime rate soars” and “crime rate plummets” in editions spaced only a week apart, while the crime rate remains fairly constant over the season.
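    This one is easy to demonstrate. In the sketch below (illustrative only), a fair coin stands in for any binary outcome; small samples wander outside a 40-60 percent band routinely, large samples almost never, with no change at all in the underlying rate:

        import random

        # Illustrative sketch: sample proportions spread out as samples shrink.
        random.seed(1)

        def extreme_share(n, trials=2000):
            # Fraction of samples of size n whose heads-rate leaves 40-60%.
            extreme = 0
            for _ in range(trials):
                heads = sum(random.random() < 0.5 for _ in range(n))
                if not 0.4 <= heads / n <= 0.6:
                    extreme += 1
            return extreme / trials

        for n in (10, 100, 1000):
            print(n, extreme_share(n))
        # Roughly a third of the n=10 samples look "extreme"; almost none
        # of the n=1000 samples do. The coin never changed.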
    Ludic fallacy, from Taleb’s Black Swan, is the use of games or game theory to model complex, real-life scenarios. It works sometimes, sometimes well, but it’s oversimplification and can easily mistake map for terrain. The games may also be modeled from studying only a single culture and assumed to be universal, as with assumptions about “economic man,” based on experiments run only on Western college students.
    Neglect of probability, or probability neglect, will ignore known, or easily grasped, statistical probabilities in decision making. Sometimes this will be to simplify a decision, reducing it to a simple all-or-none, or black-or-white binary choice. The dismissal of risk, however, is inversely proportionate to the known severity of potential consequences. We may believe we are too special for statistics, in which case this would be filed in the personal domain.
    Parkinson’s law of triviality, also called bikeshedding, refers to giving a disproportionate weight to trivial considerations, rather than apportioning attention and effort according to their importance or value. A political board or deciding body may devote as much time to the color of a chair as it does to an annual budget. This may point to a failure to delegate, decentralize, or devolve authority to the appropriate level. It’s busywork and fussing, instead of triage. Data is treated with too much equality.
    Pro-innovation bias is a haste to have innovation adopted by the culture at large without need for testing and modification. Usefulness of the innovation may be overvalued, perhaps simply because this is thought to be the new and improved thing, and its limitations are unsuspected or undervalued.
    Recency bias is a framing bias applied to the progression of events, where recent and current trends are taken to be the norm, often ignoring long-term fluctuations and the impact of imminent innovations. The latest information is the best, it’s just how things are now.
    Saliency bias, or perceptual salience, is a tendency to concentrate on the most available, recognizable, or noticeable feature of an experience in making a decision or judgment. It’s a focus on the clearest figure-ground distinction. It’s also referred to as the focusing effect, and the von Restorff or isolation effect, particularly when understood as facilitating memory.
    Zero-risk bias is a preference for the complete elimination of all risk over just minimizing the probability of risk, despite incurring disproportionate costs in security and investment with diminishing returns. This is frequently seen in government policies requiring generalized design across the board for worst-case scenarios, rather than case-by-case design for more realistic probabilities. We see the dark side of opposition to this when a product is given a green light because it doesn’t kill more than an acceptable number of consumers.
    Zero-sum heuristic means holding a prior assumption that a game follows a zero-sum model, that a two-party transaction must have a winner and a loser, or that a win-win scenario is not an option in a particular case. This can be a manifestation of an excluded middle or black-and-white fallacy.

Emotional
    Attitude polarization is a kind of refutation bias, opposite the confirmation bias, but much hotter. It’s also known as belief polarization and polarization effect. Beliefs become more hardened and extreme as discussion, argument, or investigation continues. An argument is unsurprisingly persuasive to its maker. It’s closely related to the backfire effect as a defense mechanism, used when personal beliefs are challenged.
    Duration neglect is the tendency to ignore the duration of an unpleasant or painful event in assessing its overall unpleasantness. Emotional hijacking also hijacks one’s perspective in time. It doesn’t matter to some if the hypodermic needle is only in the arm for a second: the horror will be absolute and forever. This is related to the native heuristic called the peak-end rule. In the heat of an argument, use of the words “always” and “never” multiplies.
    Exaggeration bias is a preference for inflated presentation of information over a simple statement of fact. Hyperbole is the stupidest thing in the history of the universe. Nietzsche wrote: “At bottom, it has been an aesthetic taste that has hindered man the most: it believed in the picturesque effect of truth. It demanded of the man of knowledge that he should produce a powerful effect on the imagination” (WTP #469). We feel we can’t get through to others (or even ourselves) without some powerful feelings, without some punch. We miss out on seeing how remarkable the ordinary is, or the simply authentic.
    Impact bias or durability bias is related to duration neglect. This makes it difficult to estimate the prospective duration of a feeling, emotion, or affective state, with a tendency to overestimate.
    Interpretive bias will add an extraneous valence or value when interpreting ambiguous or ambivalent stimuli or situations. The bias has been best studied in conjunction with anxiety, but nearly any emotional reaction or state can influence interpretation. There may be no relation at all between the affective state and what is being interpreted.
    Loaded words recall associated secondary meanings’ evaluative emotional charges along with them, calling up unintended associations that either distract from the intended meaning of a statement or effectively deliver a persuasive nudge. The emotional responses are usually subliminal and not subject to conscious management until they’re already felt. Dog-whistle words will trigger hidden associations only in an intended group.
    Loss aversion bias operates when the unpleasantness, disutility, or sense of loss of something is more emotionally motivating than the pleasure, utility, or satisfaction in first attaining or acquiring it. It might not feel like we value things more once we have owned them awhile, but this will show when that ownership is threatened. This can also apply to giving up cherished ideas. This is also called the endowment effect.
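    Prospect theory gives this asymmetry a shape: in Tversky and Kahneman’s value function, losses weigh roughly twice as heavily as equivalent gains. The sketch below borrows their published 1992 parameter estimates purely for illustration:

        # Illustrative sketch: the prospect-theory value function, in which
        # losses loom larger than gains (parameters from Tversky & Kahneman,
        # 1992, used here only for illustration).
        def subjective_value(x, alpha=0.88, lam=2.25):
            return x ** alpha if x >= 0 else -lam * (-x) ** alpha

        print(subjective_value(100))    # about 57.5: the felt value of gaining $100
        print(subjective_value(-100))   # about -129.4: losing $100 hurts 2.25x as much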
    Mood congruent memory bias is an improved recall of information that’s congruent with the current emotional state or mood. The affect associated with a memory is another, often-overlooked point of access.
    Negativity bias is a tendency to perceive negative experience or news as more important, intense, or profound. Pain and loss will have more powerful effects on us than pleasure and gain. Conservative mindsets with a status-quo bias will also have a stronger negativity bias. We are primed by evolution to be more wary of pain and suffering, leaving us more risk averse than neutral, but we will also take risks to avoid imagined negative outcomes. We err on the side of caution except when the promise of reward is greatly disproportional. We also have better recall of unpleasant memories than pleasant ones. But we are, at the same time, more ready to accept and integrate good news than bad.
    Suggestibility is an inclination or readiness to accept and act upon propositions, arguments, and information from others. It belongs as much in the social domain as in the emotional. It’s among those doors flung the widest open to logical fallacy or specious reasoning because people want to believe, and they want to believe other people. The attractiveness of the information, or its appeal to emotional needs and wants, such as self-esteem, will drive our gullibility or willingness to accept.
    Wishful thinking is a preference for seeing or projecting desired outcomes over more realistic ones. We will often set aside hard-won knowledge from painful experience to envision things in a rosier light. Even when we know deeper down that sadness, disappointment, or pain will be inevitable, we will postpone these as long as possible. We do at least derive some pleasure while entertaining the fantasy. Such tenuous realities depend upon our depending on them. It’s also a paradoxical companion to the negativity bias (above). This also functions as a coping strategy.

Personal (See Defense Mechanisms, too)
    Choice-supportive bias will reaffirm prior choices even in the presence of reason to doubt. It may entail an excessive devaluation of choices forgone. It’s also called post-purchase rationalization, and sometimes buyer’s Stockholm syndrome. It’s the opposite of buyer’s remorse. It reflects a desire to remain consistent, as though this were an important marker of character.
    Consistency bias will project present attitudes and behavior back onto an earlier self, or will remember the earlier versions of the self as more closely resembling the present, thus exaggerating the continuity of one’s character. However, when a person is trying to change behavior, as in addiction recovery, they may do the opposite, and imagine themselves having been more horrible people than they actually were.
    Dunning-Kruger effect, or illusion of confidence, is an inflated confidence or self-assessment at the earlier stages of a learning curve. This is an upward spike on a graph of experience (x) against confidence (y), prior to confidence falling again as humility is gained with further education, to rise again only gradually. It’s “knowing just enough to be dangerous.” This spike or peak is referred to as “Mount Stupid.” Darwin wrote, “Ignorance more frequently begets confidence than does knowledge.” There is also a tendency for experts to underestimate their own ability relative to the self-estimates of the less expert.
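    For readers who would rather see the curve than read it, the shape just described can be sketched in a few lines of Python. This is a schematic only: a made-up pair of functions chosen to match the verbal description above (an early spike, a humbling trough, then a slow climb), not a fit to any experimental data, and the variable names are purely illustrative.

        import numpy as np
        import matplotlib.pyplot as plt

        # Schematic only: invented shapes, not data.
        x = np.linspace(0, 10, 500)                      # experience, arbitrary units
        mount_stupid = np.exp(-((x - 1.0) ** 2) / 0.5)   # early burst of unearned confidence
        slow_climb = 0.8 / (1.0 + np.exp(-(x - 6.0)))    # gradual recovery with real competence
        confidence = mount_stupid + slow_climb

        plt.plot(x, confidence)
        plt.xlabel("experience")
        plt.ylabel("confidence")
        plt.title("Dunning-Kruger curve (schematic, not data)")
        plt.annotate("Mount Stupid", xy=(1.0, 1.0), xytext=(2.5, 1.0),
                     arrowprops=dict(arrowstyle="->"))
        plt.show()

    Note that in this sketch the later, earned confidence never quite regains the height of the early spike, which is the point of the humility gained along the way.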
    Effort heuristic, first introduced in the Accommodating domain, arises frequently here as well. We don’t want to let go of schemas and scripts that were gained at some personal cost. We can often use a Salvation Army of the mind to donate our old stuff to. Or maybe a big old dumpster.
    Egocentric bias will claim more credit than is due, especially in collective endeavors, or it will recall our pasts in creatively self-serving ways. People will think themselves “all that and a bag of chips.” They might also think themselves exceptions to the laws of probability. It becomes narcissism when generalized across the personality.
    Illusion of explanatory depth is a tendency to assume our understanding, or the source of our opinions, is deeper and more detailed than it is. We believe we know more than we actually do, but when called upon to set forth our knowledge, we may stammer. It’s related to the Dunning-Kruger effect. It’s exposed when someone is asked to explain something in depth, hence the name.
    Illusory superiority is the overestimation of our desirable traits and the underestimation of our deficits. It’s also called a superiority bias, leniency error, or the better-than-average effect, and it’s a cousin to the self-serving bias (below). This has versions that contribute to group delusions, including racial and national exceptionalism.
    Introspection illusion is the difficulty people have in understanding the processes by which their own mental states and behaviors emerge. We will tend to overestimate our own levels of understanding and self-awareness, and simultaneously underestimate them in others. Our incorrect explanations or accounts come readily to mind, often by way of distorting self-schemas.
    Overconfidence effect means that the confidence we have in our own judgment tends to be reliably or measurably higher than the objective measure of its accuracy. It serves us sometimes when it allows an extra quantum of courage. This is not the case in those with low self-esteem or confidence issues.
    Rationalization alters the explanation for a behavior or behavioral trait to make it seem more rational or more consistent with an approved self-schema. We will cement our misperceptions with causal theories, reasons, and ad hoc explanations. Making excuses is a common form, or finding ways to say no harm has been done. It’s a confusion of reason and reasons. It’s also a coping strategy and a defense mechanism. In argument, this is reasoning backwards, justifying a preferred conclusion with rationally tortured premises, the essence of intentional logical fallacy.
    Restraint bias will tend to overestimate our ability to control our impulsive behavior or to show restraint in the face of temptation. It might stem from an inflated idea of the strength of our character, or a denial of the strength of our natural inclinations, or a false belief that we are fundamentally rational beings. It may ignore the function of affect in making choices. It’s part of the complex of reasons why we don’t simply walk away from addiction by straightforward choice.
    Self-serving bias is our inclination to remember our successes and forget our failures, and also the willingness to believe we are better than we are, or better than others. Far more than 50% of any population will think itself above average on any desirable metric. We recall the past in self-serving ways, remembering when we shone, forgetting when we didn’t. We may reinterpret our failures as successes. We may claim full responsibility for successes, and blame all failures on circumstances or other people. We are driven to distort perceptions that fail to maintain or increase our self-esteem. Ambiguous information will be interpreted favorably. This includes the Lake Wobegon effect, whereby we are all above average: smarter, more attractive, more competent, and better drivers than we really are.
    Third-person effect will underestimate the effect of broadcast messages (like propaganda and advertising) on ourselves, and yet overestimate their effect on others. Those others are objectified, whereas I myself am more special than that. But as the saying goes: “You aren’t stuck in traffic: you Are traffic.” And as mass communication gets more proficient in its persuasive abilities, self-delusion here can lead to the buying of unneeded things and ideas.

Social
    Appeal to misleading or bogus authority is a willingness or tendency to believe authorities or celebrities when they are speaking outside of their areas of accomplishment or expertise. It’s the opposite of guilt by association and also a logical fallacy.
    Bandwagon effect will get swept up in the movement or cognitive position of a crowd. It’s an in-group bias, whether this is the general mob or any fair number of the extra special elite. The group’s thoughts rub off and supplant, or at least will take precedence over, private ideas, even demanding the abandonment of private ideas. In flock or herd behavior, or groupthink, the pressure to conform tends to override individual thought.
    Double standards will apply different lexicons, metrics, standards, or sets of standards to different groups or classes of people. These will run afoul of legal systems that require impartiality or equal treatment under law, although some legal systems will just wink.
    False consensus bias, or projection bias, overestimates how much others think like us, share our current emotions, or hold the same beliefs and values. It will overestimate our own normalcy and exaggerate our confidence.
    Fundamental attribution error will account for the errors of others in terms of personal deficiencies or character flaws, while accounting for our own in terms of environmental influences beyond our control. We made this error due to unfortunate nurture and uncooperative circumstance, while those other guys did it because they have weaknesses in their natures and lack conscience. This error is also a logical fallacy. It underestimates social and environmental influences on others while underestimating the problems in our own character. But this bias may also be seen more often than it occurs, leading to an over-crediting of nurture and an under-crediting of nature. This is common in socialistic and egalitarian thinking. It can also be seen in some conclusions drawn from the experiments by Milgram and Zimbardo, where the smaller percentage of participants who took a stand on conscience fails to get much attention in the analysis. The norm isn’t all that there is. For some, character really is destiny, regardless of environmental challenges.
    Group polarization, or circling the wagons, is the tendency of groups to drift toward more extreme positions and riskier decisions than their members would adopt alone. This is seen frequently in going to war and other adversarial relations, and a lot of people die from it. It also occurs in social media circles where individuals will get little information from beyond their own circle. These get the bulk of their information inputs from echo chambers and reverberations.
    Guilt by association is the dismissal of a person or an idea on the basis of associations that may be entirely unrelated to the question at hand, or the value of the person or idea at hand. It’s the opposite of appeal to misleading or bogus authority, and both are also used as logical fallacies in argument.
    Halo effect will tend to perceive a person’s positive or negative attributes as spilling over into other aspects of their character or behavior. But people are usually more complex than this.
    Illusion of asymmetric insight assumes that we know others better than they know us. This is not, however, always an incorrect assumption except in the act of presumption itself. It’s extended in groups, where it’s assumed that we know those other people better than they know us.
    Illusion of transparency is an overestimation of human abilities to know what’s in another person’s mind, and this works both ways in assuming others know our thoughts and that we know theirs.
    In-group bias, or in-group favoritism, has ancient roots in neurological processes, and will often go uncorrected by social and cultural programming. It’s driven by the need to belong, and its content is supplied by whatever group we attach ourselves to. It often requires cognitive exercise and practice, or powerful and emotional epiphanies, to connect, re-identify, or find common ground with out-groups. The cosmopolitan or world citizen identity began as a workaround to the in-group bias. It was first propagated by the ancient Greek Cynics and Stoics.
    Normativity bias, a term coined here, seeks the norm or center of the bell curve as the most valuable or useful information in a dataset, often ignoring what the exceptions have to tell us. Examples abound in psychology, where human norms are even used as the first measure of mental health, calling to mind the Krishnamurti quote, “It is no measure of health to be well-adjusted to a profoundly sick society.” Both positive and Eupsychian psychology seek the measure of health in the exceptional. However, the truly exceptional data points, especially beyond the second and third standard deviations, are almost by definition anecdotal, and so tend to be dismissed as irrelevant to science. Examples of this bias can be found in conclusions drawn about the Milgram and Stanford Prison experiments, and others like them, where a percentage of subjects, say one in ten, refuse to display disappointing moral and behavioral characteristics exhibited by the norm. As real data points, these can’t simply be dismissed. They offer relevant information countering hasty conclusions drawn about fundamental attribution, nurture-over-nature, inequalities of character, and agency. What is it about this ten percent, and can it be taught? Or is the real question: can the learned behavior that’s so disappointing be unlearned? Zimbardo suggests some methods for unlearning the normative behavior. We need to stop dismissing the non-normative. How, for instance, can we learn all we can about the computational abilities of the human brain as long as we disregard the autistic savant as being outside the scope of our investigation? The normativity bias is consistent with a norm internalization heuristic, and will only be overridden with learning.
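    A quick back-of-the-envelope calculation shows just how scarce those outer data points are, which is part of why they get dismissed. The snippet below, a minimal sketch using only Python’s standard library, computes the share of a normal distribution lying more than one, two, and three standard deviations from the mean.

        from math import erf, sqrt

        def beyond_k_sigma(k: float) -> float:
            """Two-tailed fraction of a normal distribution lying
            more than k standard deviations from the mean."""
            phi = 0.5 * (1.0 + erf(k / sqrt(2.0)))  # standard normal CDF at k
            return 2.0 * (1.0 - phi)

        for k in (1, 2, 3):
            print(f"beyond {k} sigma: {beyond_k_sigma(k):.2%}")
        # prints roughly 31.73%, 4.55%, and 0.27%

    Fewer than three observations in a thousand remain beyond the third standard deviation, which is why the truly exceptional so easily reads as anecdote, and is so easily shrugged off.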
    Omission bias will tend to judge harmful acts of commission as worse, less moral, or more worthy of punishment than equally harmful willful inactions or sins of omission. The “trolley problem” illustrates one of its implications. The law can prohibit a behavior more easily than it can compel a behavior. 
    Out-group homogeneity bias is the inclination of associated individuals to see members of their own group as being more variable, textured, nuanced, or interestingly diverse than members of out-groups. An individualized version, comparing oneself to other individuals, is the trait ascription bias.
    Parochial or provincial bias is a blindness to the nature or character of out-groups. The in-group bias will attend to out-groups incorrectly; the parochial will simply ignore them. Loyalty to the in-group may be taken for the only truth worth knowing. Patriotism and religious affiliation can encompass both parochial and in-group bias.
    Publication bias is the preference for publishing positive findings over null or negative. The outcome of a study can influence the decision to publish. It influences the overall weight of public support for a theory. This is analogous to a preference for publishing breaking news over retractions and apologies.
    Reactive devaluation is the automatic dismissal of ideas or proposals when they are thought to originate in the agenda of an adversary or antagonist. The problem can be the black-or-white rejection of common ground on which a resolution might be constructed. This bias is highly toxic to diplomacy. It’s also a seldom listed defense mechanism where ego is a driving force.
    Saying is believing effect is the tendency to believe what we say to others, whether we originally just told them what they wanted to hear, or modified our sense of the true for some other reason. We convince ourselves because we cannot think of ourselves as liars. Perhaps we pled innocent in court, while knowing we were lying, but after many retellings, we come to believe we are unjustly imprisoned.
    Self-serving bias also has an in-group version, another locus of the Lake Wobegon Effect (where all the children are above average). It’s also called exceptionalism. Accidents of place and culture of birth become the primary criteria for assessing superiority.
    Profiling bias, or social stereotyping, is a form of saliency bias applied to the cognitive sorting of people, particularly in relation to in-groups and out-groups. We expect members of the profile to have the defining characteristic without having specific information about the individual. This has proven useful on sufficiently numerous occasions to be an evolved heuristic, and erroneous on enough other occasions to warrant use with better care.



3.5 - Coping Strategies

Anticognitives Sorted by Domain: Situational, Emotional, Personal, Social

    Coping strategies, or coping mechanisms, are scripted perceptual, cognitive, emotional and behavioral responses to stress, both internal and environmental. They are attempts to master, manage, adapt to, tolerate, minimize, eliminate, or reevaluate our position in difficult situations. Stress may come from any taxing experience, including trauma, anxiety, panic, cognitive dissonance, threat, conflict, personal loss, exhausting demands, domestic violence, and physical, sexual, or psychological abuse. This also includes imagined or imaginary stressors. In most cases, aspects of our experience are inflated or deflated through conceptual or emotional lenses. Coping strategies exist to provide a refuge from or alternative to a situation we cannot now cope with. Many are adaptive, and improve both the situation and our comprehension of it. Some may distort our cognition a little, but harmlessly, without producing any significant damaging consequences. These harmless forms are not our concern here.
    Where we can help it, strategies should not be deployed at the expense of a more authentic view of the world, or of ourselves within it. Useful strategies will address our appraisal or evaluation of the problem, or the causes and dimensions of the problem itself, or they may reevaluate our own affective response to the situation in search of a calmer approach. Here we will focus on maladaptive coping strategies and the persistence of ineffective strategies. Maladaptive strategies may offer short-term “solutions” with long-term and disproportionately negative consequences. We can also draw a distinction between proactive and reactive coping. Proactively, the coping response is aimed at preventing a possible encounter with a future stressor. This may follow a trauma or an overly intense single-trial learning experience. Used habitually, a coping strategy can morph into a neurosis, or shrink a life down and leave it behind defensive walls.
    There are a number of distinctions to be made between coping strategies and defense mechanisms, although there is some overlap and the names of some anticognitive processes occur in both lists. Where they do appear twice, they should be described in the terms of their appropriate category and domain. Subconscious or non-conscious processes will tend to be classified as defense mechanisms and thus are generally excluded from coping strategy lists. So are reactions to threats to the ego or self-schema. The proximity of the ego here makes a dysfunction more difficult to attend to objectively. Strategies tend to be more fully experienced as they are deployed. Coping strategies also respond to a wider range of environmental and social stresses than threats of hurt feelings or damage to self-schema or ego.
    Wayne Weiten, in Psychology Applied to Modern Life, identifies three broader categories of coping strategies: 1) appraisal-focused, a cognitive approach that questions the assumptions and perspectives; 2) emotion-focused, an affective approach working with our emotions and values; and 3) problem-focused, an approach that reduces, eliminates, or moves away from the stressor. The following are examples of positive strategies that won’t be further detailed here; all of them belong to the metacognitive domain: 1) Appraisal-focused: Samma Sati, Right Mindfulness, reappraisal, reevaluation, reeducation, reframing, and taking up cognitive challenges; 2) Emotion-focused: Samma Samadhi, Right Concentration, the revaluation of values, renunciation of attachment, the cultivation of gratitude, forgiveness, humor, play, and anger management; and 3) Problem-focused: upskilling, reskilling, repurposing, relocation, problem-solving challenge, support seeking, taking control, and delegating tasks.

Anticognitives by Domain: Situational
    Attack is the fight-or-flight reaction, but without the flight. It will try to beat up or down on the problem to override the fear or anxiety triggered by the stressor. This strategy sometimes precedes the posting of bail bonds.
    Avoidance or distraction places attention elsewhere, removing the stressor from awareness, avoiding the extreme emotional agitation that’s the other meaning of distraction. Avoidance may only be a short-term option, or it may be the correct thing to do if attention itself creates or exacerbates the problem.
    Compartmentalization will separate conflicting parts of the situation and allocate them to separate parts or compartments of the self. If this is done so that each part can handle its allotment more effectively, then it may also be adaptive. This is also a seldom-mentioned defense mechanism in the personal domain.
    Displacement is the venting of a response to a stressor or trigger onto some alternative object or target, preferably one that won’t or can’t retaliate (or bleed), or where the consequences of overreaction will be less. Throwing or smashing things that aren’t the thing to be fixed, or cursing the local deities, or simply blowing off steam, are common examples. Where ego or social frustration is involved, it’s also a defense mechanism expressed in the emotional domain.
    Dissociation is a departure of the mind, attention, and consciousness from a stressful situation. There is a splintering or disconnect from the real world, and a discontinuity within the self. A dissociative disorder is a persistent dissociative state. A fugue state is dissociation that’s severe enough that time and memory may be lost. Where severe insults to the ego or social trauma are involved, it’s also termed a defense mechanism.
    Distancing can be either a mental or physical withdrawal from the stressor and its influence. Reframing and strategic retreat are positive forms, while denial and escapism are negative.
    Moving the goalposts, raising the bar, and lowering the bar, are attempts to manage a situation cognitively in mid-experience by altering the evaluative metrics or standards. Where deployed in mid-argument, it’s a logical fallacy.
    Obsession uses a thought, feeling, or activity that occupies the mind to the exclusion of other things worth attending, or other things in need of attention. But it may also involve a sensitization to the issues at hand or an inflation of their import. An obsession may focus on the stressor or the solutions to the problems it creates, or it can focus on a distraction while the stressor and the problems grow worse.
    Provocation is the deliberate escalation of a situation in order to force some resolution, to increase motivation by upping the urgency for resolution, or to get other parties to act first and thus justify a reactive or retaliatory response. Baiting, trolling, and carrying a chip on one’s shoulder are examples.
    Rationalization is the creation of plausible logical or rational explanations for a situation in order to support or bolster a problem-solving approach that’s already in progress. Its use implies that this is not the truest of explanations, but rather, one of convenience. It’s also a way to make meaning out of loss or failure without impugning one’s own competence. Making excuses or laying blame elsewhere are the commonest examples. Leaving the situation having learned a good lesson is the lost opportunity here. Rationalization is also used as a defense mechanism to preserve our own sense of worth, competence, or righteousness.
    Ritualizing is the addition of extraneous, stereotypical patterns of behavior into a situation to give it an artificial sense of structure or meaning. It may force the situation into a prescribed or more acceptable kind of order. But the ritual itself may have no meaningful connection to the situation, other than its ability to add a level of comfort, confidence, or stress relief. This is your cat licking itself after doing something embarrassing. Connections may still be drawn between the stressor and the ritual with heuristics such as pareidolia or pattern seeking.
    Take-your-ball-and-go will treat a problem as all or nothing. Where perfect success is threatened or already foreclosed, and any outcome is sure to be less than ideal, we turn the involuntary loss into a voluntary choice by leaving.
    Trivialization will make light of a situation to decrease the sense of its severity, downplay its importance to us, inflate our own sense of confidence or control, or hedge our expectations to diminish disappointment in case of a loss. It’s the opposite of exaggeration or hyperbole and a form of intentional cognitive distortion.

Emotional
    Acting out is an immature surrender to an impulse to misbehave, as a way of changing the subject. The behavior may be deliberately naughty or it may simply be thoughtless. It’s the unmediated expression and gratification of a minimally conscious wish, such that at least something gets satisfied, even if it isn’t a mature, successful resolution to the stress. It’s also an immature defense mechanism, where it’s the opposite of “use your words.”
    Catastrophizing, or awfulizing, is an egress from a stressful situation by way of an exaggerated or hyperbolic admission of surrender or defeat. It may also be a not-so-silent cry for help or disaster relief. It can also be made in advance of the peak of the crisis, or even in advance of its beginning. This is the worst, most apocalyptic thing that could ever happen to us, and at exactly the wrong time. Perhaps after this, if we get a glimpse of the reality, it won’t seem as bad as all that.
    Catharsis, or emotional purging, is the working through of emotions at an amplified or accelerated rate to get to the other side sooner, leading to a sense of renewal. There is a mistaken model in the idea, however, that may liken emotion to some kind of hydraulic fluid that’s conserved and must be emptied out. Emotions can be produced as though out of a bottomless well, and can self-perpetuate and self-amplify in ways that can be destructive. It will often be sufficient to avoid setting up the conditions that create the emotion to begin with. An emotional outburst may, however, offer a sense of having processed or expressed the emotion sufficiently.
    Emotional withdrawal may be a milder form of psychological shock, or a strategy to shut down affective responses to a stressor to better deploy rational solutions, removing unpleasant thoughts and painful feelings from awareness. This can be the kind of coldness that surgeons are known for. The risk here is in dismissing affective components of the picture that could contribute to a solution to the problem, as when a superior solution might call for heartfelt sincerity, compassion, or kind words.
    Fantasizing is an escape from the stressor into an imaginary or alternate sense of reality. This is often to a parallel world where something different could have been, a world of coulda-woulda-shoulda. It could also be a nostalgic withdrawal, or rosy retrospection, a remembering of the past as being better than it was. Both wishful thinking and magical thinking, the cognitive biases, might make appearances here. Somewhere in the fantasy there may lie an unseen solution to the problem, but it would be unwise to call this a heuristic instead of dumb luck.
    Grieving is the gradual coming to emotional terms with a traumatic loss, whether this is the loss of a home, a community, a treasured possession, a marriage, or the life of a loved one. It’s often thought to be undergone in a number of stages, usually named following the model by Elisabeth Kübler-Ross. The original five are: denial, anger, bargaining, depression, and acceptance. Others have added shock to the beginning, and testing between depression and acceptance, to make seven stages. It’s also often said that the order of these isn’t always fixed. The upshot is that both perspectives and perceptions change throughout this process, and none will bear a faithful representation of the true until we arrive at acceptance.
    Idealization and intellectualization both represent strategic shifts towards the cognitive interpretation of a stressful situation, specifically to get away from the emotional or affective side, with its biases, pain, and intensities. Both are forms of emotional detachment, but here the emotion is directly replaced by thinking. This renders the situation less complex and multi-dimensional, but removes components that may be important concerns in deciding optimal outcomes. Intellectualization is also a defense mechanism where reasoning is used to avoid facing threats of conflict from the unconscious.
    Misoneism, or neophobia, will take a defensive or protective stance against novelty, innovation, or change. Threats to the status quo, or to a tenuous hold on inner security, must be avoided or defended against, because these are just slippery slopes into scary unknowns.
    Passive aggression is the avoidance of direct confrontation, using excuses and delays. A passive aggressive may agree to commitments and not fulfill them, or may derive satisfaction from denying others on the basis of petty reasons and interpretations of the rules. A bureaucrat might elect to turn every discretionary “may” in the rulebook into a prohibitive “shall.” Problems associated with delays may multiply despite the appearance of working towards solutions. It may be that direct aggression is disallowed or inappropriate in a way that has too many consequences. This is also a defense mechanism.
    Phobias may be used as coping exit strategies, reasons to move away from the stressor altogether. They may be genuine, but amplified or intensified for their use here. As Mark Twain noted, “Courage is resistance to fear, mastery of fear, not absence of fear.” Ambrose Redmoon added that it’s “the judgment that something else is more important than fear.” If dealing with the stressors is truly called for, this is an anticognitive. If not, it might even be a smart move.
    Regression is a retreat from a stressor into a more childlike state. In stressful situations unrelated to ego or self-image, it’s simply expressing the urge to quit by behaviorally claiming unreadiness or incompetence, or a return to a time before the stressor appeared. It’s better known in its other function as a defense mechanism.
    Repression, used as a coping strategy, will put distracting thoughts, prior experience, or unpleasant memory out of the mind while addressing a stressor or solving a problem. It might free us from a learned phobia or other aversion. It’s better known as a defense mechanism, where it’s deployed differently.
    Psychological shock, trauma, stupefaction, or going numb, is normally an involuntary shutdown, incapacitation, or paralysis upon entering a distressing situation, perceiving a severe threat, or receiving bad news. This is a natural strategy that only becomes an anticognitive coping strategy when an exit from this state is available and not taken.
    Shooting the messenger denies the facts in evidence, or distracts the hearer, by displacing aggression onto the bearer of unwelcome news.

Personal
    Addiction and self-medication will surrender to a repetitive and predictable behavior to escape accountability and make the world go away. Addiction, etymologically, is to sign oneself over or be signed into slavery, signaling an abdication of agency. The accounts don’t go away, and neither does the world.
    Blaming locates the source of stress other than where it belongs. It might renounce responsibility for being in a situation, or claim only a passive role in the conditions. A victim or a disease mentality might be adopted here, or a complete attribution transfer to bad nurturing and environmental influences. These fingers pointed elsewhere may in fact be pointed correctly, but blaming may foreclose some creative solutions that involve upping one’s game and claiming additional agency and accountability. There might be advantages to take in owning a situation that wasn’t really ours before.
    Compartmentalization, as in the situational domain, will separate conflicting parts of the situation and allocate them to separate parts or compartments of the self. The strategy may avoid stress, but it can also split our identity, and this underlies such phenomena as hypocrisy and double standards. The word integrity refers to not being two different people at once.
    Denial is a refusal to perceive or confront an unwanted or unpleasant reality, the turning of a blind eye, the sticking of our heads in the sand (that ostriches don’t really do). It’s one of the several typical stages in the grieving process, where reality is put on hold as the mind gropes frantically about for alternate explanations of events. The thing being denied may be severely reduced in import or proportion, or its alternative inflated. Denial doesn’t look at realities. The horrible prospect of withdrawal from an alcohol addiction (short of the DTs) is in reality no more unpleasant than the next one or two scheduled hangovers. That can’t be looked at too closely. Denial is also a defense mechanism.
    Introjection is the temporary borrowing or adoption of other people’s traits or adaptive skills. This is a helpful strategy if the imported material is helpful, as advice from one’s consigliere. If it’s the adoption of inauthentic material or a bad faith position, it’s adaptive in proportion to its usefulness and inversely to its duration. This is also a defense mechanism, where it behaves a little differently, internalizing alien, threatening, or contrary stimuli and adopting them as one’s own.
    Just-world hypothesis is a forced explanation for why bad things happen to good people and good things to bad. It will entail a belief that the world is fundamentally just, and that merit and demerit are ultimately rewarded or punished, which in turn requires a belief that there is either a universal force that maintains moral balance or a system by which souls reincarnate to account for imbalances of justice in previous lives. As a rule or law of merit, it serves the self-schema of the fortunate, like the Brahmin caste, and it may diminish the affective burden of compassion towards the less fortunate. But character, nevertheless, is still destiny.
    Over-personalizing is a cognitive or emotional insertion of the ego or self-schema into a stressful event in a stronger causal role than may be the case. It’s taking things too personally. It may reflect a narcissistic or paranoid point of view, or it may be an inflated sense of self-importance that’s of temporary use in getting the current problem solved. The phenomenon is also active in survivor’s guilt.
    Self-harm includes cutting yourself, beating your head against the wall, and punching holes in the doors, and metaphorical versions of these. It includes committing crimes for reasons other than conscience. All are ways to amplify a sense of being alive, even if pain is required to do this, where pain may be more easily obtained than pleasure. This guarantees at least some affect. This may also be a way to divert our attention from a different variety or source of pain. Suicide is not really the goal here, nor is this always a cry for help.
    Suppression will force any distracting thoughts, feelings, memories, and behavioral options out of awareness in order to concentrate on solving a problem. It runs the risk of forcing useful information out of awareness. This is also a defense mechanism, where it’s a less conscious act with a wider range of expressions.
    Two-wrongs-make-a-right seeks to justify what should be an inappropriate solution to a problem. A wrong is canceled out by another that’s equal and opposite. It’s either a rationalization or an expedient. It will favor ends over means, although this in itself is not always a bad thing. This is regarded as a logical fallacy when used in argument with others. Here it’s an argument with oneself to protect from the stress of bad conscience.

Social
     Courtesy bias is a choice to offer a more submissive verbal or behavioral response to a stressor or threat in order to avoid offense or confrontation. It runs the risk of appearing to be sycophancy, servility, or obsequiousness. It also embraces politically correct speech in violation of a conflicting urge to be outspoken or truthful. It’s related to the relativist fallacy called If-By-Whiskey, in which the speaker’s position is a reflection of the party line or audience expectation. A political campaigner may have no more convictions than a wind sock. And, of course, polite society is completely greased by this slick and slippery stuff.
    Stockholm syndrome, or capture-bonding, begins as a survival strategy that sees captives allying themselves with their captors, but risks convincing the captives that they truly are allies. A bond is formed by repetition of thoughts and feelings of common cause. Survival takes precedence over any belief, although this part of the process may require time. This occurs in situations of domestic abuse, or in the adoption by a slave race of their conqueror’s religion, or stated somewhat more controversially, in right-wing wives and black Christians.
    Thought-terminating cliché, or thought-stopper, is the use of a commonly-used phrase, expression, or platitude to terminate an uncomfortable paradox, dissonant cognition, or conflict, to change the subject, or declare an intent to think no further on the matter. Examples both religious and secular abound in social discourse: Don’t be judgmental. You think too much. Because I said so. Tomato tomahtoe. It is what it is. God had other plans. Everything happens for a reason. Let it be. What can one person do. We can only change ourselves. When you’re my age, you’ll understand.




3.6 - Defense Mechanisms

Anticognitives Sorted by Domain: Emotional, Personal, Social

    Most people, it seems, would like to think of themselves as important, real, special, good, worthy, valued, unique, and immortal in spirit at least. To the extent that this is important, we will tend to reprocess and reinterpret our interactions with the world to support these desires. Since we usually live in several different contexts, we have an array of self-schemas, scripts, and personalities for fulfilling these desires in each context. Collectively, these are called the ego. Defending this ego is a lot of work, employing a number of mechanisms, since we really aren’t all that important or invulnerable, and demoralization may have even more devastating consequences than rigorous honesty. Defense mechanisms are the armory, the stratagems that we use to protect our various selves from perceived threats, even if this means lying to ourselves or distorting reality. They are triggered by anxiety, itself arising in the unconscious from threatening stimuli. Triggers are pulled, traps tripped, and switches flipped before the stimulus reaches the upper levels of the brain. Guilt, embarrassment, and shame often accompany anxiety, and we can also be threatened by a hint of our own naughty impulses rising to the surface. In 1936, Anna Freud suggested that threats to our egos need not be imminent, but merely anticipated, a foreshadowing which she termed signal anxiety.
    The concept of defense mechanism was developed by Sigmund Freud, and a lot of the descriptions available are still cast in terms of conflicts between the id, ego, and superego. Here, we’ll just refer to them as conflicts between the ego or self-schema and the rest of life, with the former defending itself from the latter. Because they deal with the security of the ego, our reactions are often either unconscious or too close to see clearly. Outside of direct attacks on our person or sense of self, threats are challenges directly to the ego only insofar as we have identified with what is being threatened, which can consist of beliefs, ideological convictions, material belongings, or human relationships. These can thus span a large part of the cognitive life of a true believer. The easiest bit of ego defensiveness to remedy, though it’s still a bit of work, concerns the identifications we form with things we’ve been told but haven’t experienced ourselves, or with explanations for things we’ve experienced that we haven’t formed ourselves. Clinging or attachment to these creates a great deal of our anxiety and our suffering, and there’s a great deal more of this than any comfort or security they might provide. These ideas and beliefs are not us, they don’t come from us, so why should we regard them as integral to who we are?
    Daniel Goleman, in Vital Lies, Simple Truths, views some defenses as variants on Harry Stack Sullivan’s selective inattention (or I don’t see what I don’t like) and others on automatism, a legal term (or I don’t notice what I do). George Eman Vaillant, in Adaptation to Life (1977), introduced a four-level classification of defense mechanisms, ranging across a spectrum from dysfunctional to healthy. On level one are the pathological defenses (e.g. autistic fantasy, delusional projection, distortion, and psychotic denial). On level two are the immature defenses (e.g. acting out, fantasy, projection, and passive aggression). On level three we have the neurotic defenses (e.g. dissociation, rationalization, repression, intellectualization, displacement, and reaction formation). On level four are the so-called mature defenses (e.g. acceptance, altruism, compassion, emotional self-management, forgiveness, gratitude, humility, mindfulness, moderation, humor, kindness, sublimation, patience, respect, and tolerance).
    But Vaillant’s categories have some weaknesses. Sometimes situationally mature defenses are called neurotic. It’s fine to use both Buddhism and meditation defensively, such as in dissociating yourself if you’ve been associating with the wrong thing. In many cases, what may be thought of as repression is simply declining to produce or support a particular thought or emotion. It’s frequently OK simply to flee from trouble, to ignore a bad influence, to get distance from a thought that would otherwise over-involve or obsess us. We don’t need to be mindful in ways that harm us. The level four defenses above may sound more like a toolkit than an arsenal, or a recitation of some Buddhist values. As with some coping strategies, the more mature approaches are either not anticognitives, or they are largely harmless ones, and are outside the scope of this work, except as they may appear in the metacognitive domain as some of the scrubby bubbles of cognitive hygiene.

Anticognitives by Domain: Emotional
    Acting out is an immature surrender to an impulse to misbehave, as a way of changing the subject. The behavior may be willfully naughty or simply thoughtless, but going full speed ahead may be the only way the impulse will be expressed, especially if it’s prohibited. At least something gets satisfied, and consequences be damned. Acting out here is the opposite of “use your words.” This distraction avoids becoming conscious of the contrast between the behavior and its unacceptability. As stress relief, this is also a poor coping strategy.
    Conversion, or somatization, is the internalization of ego- and self-schema-related discomforts where these manifest as internal pain, tension, illness, anxiety, or disease. Either cognitive dissonance between out-of-synch parts of the self, or social triggers that threaten the ego can cause problems. Both psychosomatic illness and hypochondriasis can result, but knotted muscles and skin rashes are more straightforward and common. Hypochondriasis can have additional causes, as where it’s simply used as a means to escape or regress, or to engage in some self-reproach.
    Denial is a refusal to accept the reality of a threat to ego or a self-schema. It’s a delusional attempt to reduce something that’s real to non-existence, particularly when the effect of such threats will be experienced as pain. Nietzsche offers, “But in countless cases we first make a thing painful by investing it with a valuation” (WTP #260). In other words, if we haven’t attached ourselves to something, or identified with it, we are only witnessing the pain felt by an inanimate, non-sentient thought or object. It may require some work to correct an erroneous thought, and adjust some of the thinking it’s connected to, but that will likely be a lot less work than continuing to defend the error and the mistakes that connect back to it. Denial is also one of the distinct stages of the grieving process, but in this case it’s part of a coping strategy and the sense of loss is more justified.
    Displacement, when considered a defense mechanism, is the venting of reactions to personal challenges onto some alternative person, object, or target, preferably one where the consequences of reaction will be less. It’s a bit more straightforward, or less complicated by social repercussions, when used as a coping mechanism.
    Intellectualization is a movement from hot cognition towards the cool to escape the added cognitive load of affect. It’s the shift from flesh and blood into realms of abstraction. A somewhat healthier version, called apatheia, without pathos or suffering, was held as high virtue by the Stoics, but the modern word apathy has more troublesome connotations consistent with this defense mechanism when done to excess. Feelings are neutralized, subjects are objectified, we move into our heads, and the world now runs more mechanically.
    Isolation of affect is the metaphorical equivalent of air-gapping a data network, an unconscious attempt to split or separate a cognition or memory from an emotion, perhaps to deprive an unacceptable emotional impulse or memory of further access to behavioral functions. This may be a response to trauma, or terrible news, where it may resemble shock or dissociation, but it still retains an ability to describe the traumatic event.
    Regression is a return to an earlier, especially more childlike emotional state or behavior pattern to avoid confronting problems, anxieties, or fears. It’s pouting that shows on the lips. It’s throwing a tantrum, and throwing things across the room. It’s the childish giggling people do over forbidden themes of adult sexuality, the oxymoron of adult comedy. Regression is distinct from play, a healthy form of behavior still permitted to adults.
    Tone policing is a reaction to the emotional tone of a message rather than its more meaningful content. It may too readily mistake the mood of a critic as carrying an underlying agenda or implication that requires dismissing the message. “What’s THAT supposed to mean?” may signify an example. It’s often used to ignore the message altogether.

Personal
    Backfire effect, in this domain, is an overreaction to a challenge to a self-schema, script, idea, or ideology with which one has personally identified. When the security held in this view of self is being threatened by exposure to disconfirming facts, evidence, or propositions, anxious believers will simply dig in their heels, or double down on their identities and any associated errors. This is an emotional version of the sunk cost fallacy and an escalation of commitment. Especially among the deluded, the ignorant, and the willfully stupid, once these have managed to make delusion, ignorance, and stupidity into points of pride, there is no turning back or around. They can only self-destruct. Backfire effect is a strong attitude polarization and also appears as a cognitive bias. It seems odd that it isn’t included in surveys of defense mechanisms, but we are correcting that omission here, twice (see the social domain).
    Blocking is a temporary or transient inhibition of either cognition or affect, creating a sense of resistance in the process. It may begin as an unconscious reaction, but the effort to hold back an ego-threatening experience becomes noticeable. It need not deny the existence of what is blocked, distinguishing this from denial. It’s more closely related to suppression.
    Compartmentalization will separate conflicting self-schemas and scripts, and allocate them to separate parts or compartments of a self-schema. Given conflicting temptations, beliefs, emotions, values, or habits, this reduces the cognitive dissonance in identity, but this is the opposite of integrity or wholeness. Like dissociation, one part tends to remain unaware of what the others are doing. The process is fundamental to hypocrisy and is the reason hypocrites remain generally unaware of their own hypocrisy.
    Compensation attempts to counterbalance a weakness by overdevelopment in another area, in order to maintain one's self-esteem. It may address a frustration with over-gratification elsewhere, or a lack of skill in one area with highly developed competence in another. If the part of self being neglected holds little promise anyway, this is not such a terrible thing, but it can be problematic where it leads to lopsided personal development, or where the set of traits being ignored is of more general importance in life.
    Dissociation is a departure of the mind, attention, and consciousness from severe insults to the ego, as in social trauma, or an extreme threat to physical safety or psychological well-being. There is a splintering or disconnect from the real world, and a discontinuity in the self. A person can lose their sense of time passing, or inhabit another distinct self-schema. Childhood abuse is a well-known driver of dissociation. A dissociative disorder is a persistent dissociative state, and multiple personalities are a well-known, if uncommon, mode of expression. A fugue state is dissociation that’s severe enough that time and memory may be lost.
    Distortion is an extreme revisioning of experience, reshaping perceptions of external reality to accommodate an ego or its self-schemas. Every person will distort a little to suit idiosyncratic needs. This is part of having a perspective. But strong or persistent occurrences can indicate narcissism, entitlement, delusions of superiority, or exceptionalism, with psychotic levels expressed as megalomania and malignant narcissism.
    Externalization is a general form of projection, where the originally internal elements of an ego, self-schema, or personality issue are hallucinated onto the external world, imbuing other people, and even inanimate objects, with one’s own, personal intentions, motives, emotions, and characteristics. Perhaps it gives us something in common with the greater world, so we don’t feel so left out. The native pareidolia heuristic can be used to examine the phenomenon, as with the Rorschach or ink-blot test, which uses abstract pictures of your parents fighting to uncover your feelings about that. Jung wrote that “Everything that irritates us about others can lead us to an understanding of ourselves.” Perhaps this is generally true, but some of the new age folk have twisted this into silly platitudes that say “things that bother us are only reflections of what we don’t want to see in ourselves” or “everything you hate in others is something you hate in yourself,” assertions which are profoundly moronic.
    Passive aggression, where deployed as a defense mechanism, is more than simply the avoidance of direct confrontation with excuses and delays. It’s a form of self-assertion within an inferior position, where outright assertion might lead to a beatdown and loss of self-esteem. It gets away with what it can, still using excuses and delays, but with the intention of feeling a little bit smug and powerful. It isn’t normally as subtle as it feels, but it feeds on reactions, and if it can’t gather praise or admiration from others, it will settle for frustration and seething that falls just shy of serious consequences. These people frequently find employment in such departments as Motor Vehicles, Planning and Building, or Internal Revenue.
    Rationalization, as a defense mechanism, is a way to make meaning out of loss or failure without impugning our own competence, or else to justify otherwise unacceptable beliefs and impulses. It’s used to preserve our sense of worth, competence, or righteousness in the face of full or partial failure, or internal conflict, and will be attracted to the least threatening explanation. Making excuses is the commonest example, which can include blaming others or the environment. Rationalization may also be used positively if it creates logical reasons to continue on a rewarding course. The ability to leave a situation having learned valuable lessons is usually the biggest casualty here. This is also a coping strategy, and a maladaptive one when important lessons never get learned.
    Reaction formation is the outward expression of a belief, urge, trait, or behavioral inclination that’s the opposite of one that’s held inwardly. It’s a fight against ourselves, trying to overcompensate for something unacceptable within. The unacceptability may be our own self-loathing or disgust, or the fear of exposure or public opinion. It often looks identical to hypocrisy. This is the closeted, gay legislator proposing anti-gay legislation. Or it’s the boy who punches the girl he has a crush on. The actual purpose of the behavior may remain confused, whether it’s to deceive ourselves about even harboring the forbidden thought, or to cover it up in front of others. Opposites may be fully compartmentalized and out of sight of each other.
    Repression is also called stuffing or bottling up your feelings or impulses. Those who are committed to the hydraulic fluid metaphor for emotions see it as inevitable that this builds pressure that must be released via other channels, but the brain doesn’t really work like this model. More to the point, memories retain their emotional associations, and until they are recalled in a manner that adds a sense of closure to them, they tend to return as resentments, or re-sentiments, whenever related or associated experiences are recalled or come into awareness. They don’t go away without having some process of closure, especially when they have a lot of closely associated memories. As Freud claimed, “the penalty for repression is repetition.”
    Scapegoating is blaming others, or other factors, to avoid accountability for wrongdoings or simpler mistakes, and trying to salvage ego or self-esteem and a self-schema or script in the process. This mechanism also appears in the social domain when the ego is more strongly dependent on reputation than self-image. The delusion is that the problem goes away with the goat. The ultimate scapegoat, of course, has been the goat-foot god. The Devil made me do it. Robust mental health is not religion’s strong suit.
    Self-serving bias is the distortion of incoming information to maintain and support self-esteem. This is more commonly classed as a cognitive bias, but since it defends the ego from real-world insults, it belongs here as well. It’s a cognitive bias when it exaggerates our own accomplishments, and a defense mechanism when it shuts down or obfuscates criticism and other unfriendly or disconfirming personal feedback.
    Splitting is the result of a failure to fully integrate both praiseworthy and unacceptable parts of the self into a more coherent ego or self-schema. The unacceptable may be projected onto others or simply denied. We may lay all the bad that we do on our evil twin Skippy, or even our Lord Satan. There is normally no admission of any middle ground on which these opposites can come together.
    Suppression is another way to reject unwanted or unacceptable feelings or impulses. As with repression, this is a way to stuff them down deep inside or bottle them up. This is done somewhat more consciously than repression, and there is some art to doing it effectively. The idea is to make the unwanted thing disappear, to not be heard from again. The most effective way to suppress something is to assign it a lesser value or a lesser degree of relevance to the rest of our lives. This isn’t done only cognitively: the devaluation needs to be done with some emotional content. And this process can backfire if the content is a strong negative emotion like hatred or disgust. It won’t go away by giving it the metaphorical equivalent of an energetic charge. We just fuhgeddaboudit. Like repression, this is also subject to the hydraulic misinterpretation, thinking that stuffing things inevitably leads to later outbursts. There is also a model wherein the damned thing is stuffed into anaerobic environments, resulting in sulfurous gas compounds. Both of these are possible outcomes, metaphorically at least, if the suppression is done incorrectly and feelings aren’t properly vented or aired out.
    Undoing, the mechanism, attempts to take back an unacceptable expression or behavior by performing just the opposite as an act of contrition, atonement, redemption, or making amends. We try to either counteract or make up for the damage done. We cover our angry words with kind words. We confess our sins and do our penance. This can be one of the most mature defense mechanisms, even though it’s usually called neurotic. One may assume it becomes neurotic when it’s merely an attempt to conceal and preserve the unacceptable by going through the opposite motions, or when we just leave the confessional feeling cleansed and ready to sin some more.

Social
    Autistic or schizoid fantasy is a retreat or withdrawal into a private world, perhaps to pretend to resolve the conflict without that annoying feedback, or without that person who would think less of you. Without communication, or the demands of a socially constructed reality, nothing is true, and everything is permitted. It does serve the function of getting us outside the box to let our imaginations run free, and we owe much escape fiction and mythology to this. But for the most part, unless the escapee has fled a bad or even more delusional crowd, it’s ultimately maladaptive. Unless the fantasy gets published.
    Backfire effect, in the social domain, is an overreaction to a challenge to an in-group with which one identifies. Good members dig in and double down on defense. Belief or faith in the group and its doctrines can be made into points of pride, even and especially when the enemy is simply disconfirming fact or evidence. It’s pretty unlikely that anyone has converted a Jehovah’s Witless who came to the door. The backfire effect is a strong attitude polarization and also appears as a cognitive bias. It’s most often found in overreactions to polemical speech, showing a natural affinity for false dilemma, but even calm and rational argument can set it off, especially where a believer is secretly insecure.
    Identification refers to the adoption or internalization of socially acceptable characteristics, belief systems, lingo, or personality traits in order to fit in and be accepted, or at least enough of these to avoid rejection. This is particularly common when entering a new social environment. We may mimic or model the successful to the limit of our abilities, but the risk of embarrassing failure may hold this mechanism in check. Identification may also be with a superior foe or aggressor as a means of self-defense. The problems are in being untrue or inauthentic to ourselves, and becoming someone we’re not.
    Introjection is an unconscious borrowing or adoption of characteristics, beliefs, ideas, or personality traits from another person or members of a group, with an appearance of imitation that may need to be concealed. This differs from identification, although it also forms valence-type bonds with others and creates common ground, even where one side may be inauthentic. It’s at its unhealthiest when we internalize shame, guilt, undue criticism, victimhood, or self-loathing, or when we adopt abusive or self-destructive behavior patterns and pass these down through the generations. It’s also a coping mechanism, where it may have different and positive characteristics, such as a deliberate and strategic practicality.
    Poisoning the well is an attempt to deprive others of a benefit that we have been denied. If I can’t have this, nobody can. This restores all players to the same level of failure, to pull them all down to one’s level and so reduce the comparative distance. This is also used as an ad hominem logical fallacy that attempts to discredit more than what merits discrediting.
    Projection is the transferring of a person’s undesirable motives, impulses, feelings, or thought patterns onto another person. These may be internally unacceptable, but seeing them elsewhere in others may help to explain our sense of their presence. Their undesirability, or even malicious intent, often explains the sense of their being persecutory. This demonization is a simple, more primitive form of paranoia. We might explain a subliminal sense of prejudicial dislike for someone by perceiving that it’s that person who dislikes us. Or a cheating husband might grow certain of his wife’s infidelity. Some people incorrectly generalize this and claim that if something bothers us in others, we are only seeing something in ourselves that we don’t want to look at. But projection is not universal, and my dislike of Jeffrey Dahmer did not mean I secretly wanted to rape, kill, and eat young boys.
    Reactance is a perverse form of self-assertion that will respond to a suggestion, directive, or command with an urge to perform the opposite. Being given orders might be perceived as an attempt to limit personal freedom or constrain options. It’s also known as rebelliousness or being refractory. But you’re kind of screwed here if the other person knows reverse psychology.
    Reactive devaluation is the automatic dismissal of ideas or proposals when they are thought to originate in the agenda of an adversary or antagonist. The problem can be a black-or-white rejection of common ground on which a resolution might be built. It’s more commonly listed as a cognitive bias, but there is an ego problem here, and a commitment to nothing short of victory in a zero-sum game.
    Social comparison might focus on others endowed with lower status, fewer life skills, or a stronger reputation for misbehavior or error, all in order to evaluate ourselves more highly, or at least to feel better about ourselves. Or, and in the opposite direction, we may seek to identify with those who are obviously our betters by seeking some piece of common ground, such as liking the same soft drink. The latter is played upon a lot in advertising. This has roots in native heuristics concerned with assessing status in the troop or tribe.
    Spotlight effect is the tendency to assume we’re being noticed or watched more than we really are. We all live at the very center and heart of our own worlds, but not at the center of others’ worlds. Assuming that we’re being watched may add the stress of guarding our behavior more carefully, but it removes some of the stress from unguarded, careless mistakes and faux pas. In any event, it will distort our view of the world to help maintain self-esteem.
    Therefore leave, or ergo decedo, or the traitorous critic fallacy, will react to criticism by assigning the critic to an out-group, entitling us to an out-and-out dismissal without hearing the criticism or argument. “Love it or leave it” is a common example. These people are traitors and should be sent back home immediately. Used in an argument, it’s a logical fallacy. It’s an ego defense that grants no quarter to dissent.



3.7 - Logical Fallacies

Anticognitives Sorted by Domain: Native, Accommodating, Situational, Emotional, Personal, Social, Cultural, Linguistic; Formal Fallacies

    The SEP states rather succinctly, “We may take a fallacy to be an argument that seems to be better than it really is.” Fallacious and specious arguments will show a degree of plausibility to the naive, and to seem convincing is too often the same as to convince. These are often found in persuasive rhetoric, debate, journalism, propaganda, and advertising. They’re especially effective where they exploit affect and cognitive biases. The fallacies are organized below according to the cognitive domain within which the fallacious arguments appeal to us. I’m listing both fallacies of thinking and fallacies of argument: areas where our own thinking might go astray, and areas where attempts to persuade us may be either poorly conceived or intentionally deceptive. The word fallacy comes from the Latin fallere, to deceive, but it now applies to any incorrect argument where premises fail to support conclusions. Fallacies are often created with the assumption that the conclusion is true. Premises are then developed to suggest this truth or to rationalize that assumption.
    The first enumeration of logical fallacies was by Aristotle, who noted 13 of them and divided them into linguistic types (fallacies of accent, amphiboly, equivocation, composition, division, and figure of speech) and logical types (accident, affirming the consequent, secundum quid et simpliciter, ignoratio elenchi, begging the question, cum hoc ergo propter hoc, and plurium interrogationum). Aristotle, however, would have said all of these Latin names in Greek. All are described below. Other thinkers have divided fallacies into three categories: questionable premise, suppressed evidence, and invalid inference. But the hardest dividing line is between formal and informal fallacies. The formal are tools of deductive argument, whereby the conclusion is firmly established if the premises are true and the logical form is valid: it must be impossible for the premises to be true and the conclusion false. A formal fallacy renders the argument invalid through flaws in its structure. A sound argument is logically valid and all of its premises are true. Formal logic, along with math, is one of our coldest forms of cognition. Specious arguments are more often forced to use hotter cognition to persuade. The formal fallacies are all grouped here within the linguistic cognitive domain, although not all of them are presented here.
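    To see the difference in miniature (the notation here is mine, added for illustration), compare a valid deductive form with its formally fallacious cousin, affirming the consequent, one of Aristotle’s thirteen:

        \[
        \frac{P \to Q \qquad P}{\therefore\; Q}\ \ \text{(valid: modus ponens)}
        \qquad\qquad
        \frac{P \to Q \qquad Q}{\therefore\; P}\ \ \text{(formal fallacy: affirming the consequent)}
        \]

    If it rains, the street gets wet, and the street is indeed wet; but this does not establish rain, since a garden hose would satisfy the premises just as well. The flaw is structural, not factual.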
    Informal logic is our main concern here, and most of the fallacies below are of the informal variety. Informal arguments are expressed in ordinary language and speech, everyday language used in conversation, explanation, and exposition. Informal logic is inductive in nature: general conclusions are inferred from an accumulation of particular instances, and an argument’s conclusions are supported, or made more probably true, by the structure of the argument and the soundness of its premises. Sloppy and unsound premises may fail to support the conclusion, but they cannot be said to invalidate it. It’s through the medium, vehicle, or instrument of ordinary language that we are most often deceived or misled.
    Statistical theory, probabilistic inference, Bayesian inference, game theory, and decision theory together comprise yet another important domain of logic. These are closely related to formal logic for their reliance on mathematics, but akin to informal logic in their inductive nature. This set of cognitive practices is beyond the scope of this work, even though using these practices badly, with subjective interference, bad guessing, poor probability estimates, crap luck, and incomplete data sampling will also lead to errors in inference and decision.

Anticognitives by Domain: Native
    Argumentum ad naturam, appeal to nature, or naturalistic fallacy, argues that if something is natural, then it’s also good or morally acceptable. If it’s unnatural, then it owes us a stronger proof of its worth. This is related to the is-ought problem, also called Hume’s law, or Hume’s guillotine, that questions whether the natural state of things, like evolved behavioral traits, necessarily constitutes a verification of the good. At minimum, a descriptive or normative statement can’t be construed as a prescriptive one. Epicurus had asserted that the pleasurableness or desirability of experience testified to goodness, which today would be taken as an explanation of the persistence of pleasure in our biological forms, but he also took a larger view of pleasure that embraced its consequences, and so favored eudaemonia over the shorter-term pursuits of sensuality. Despite the fallacy of argument, there are consequences to ignoring our nature while trying to replace it with something else, something new or less tested by the ages. The converse of this appeal, that what is unnatural is bad, that unnatural acts are wrong, is equally fallacious.
    Moralistic fallacy is the inverse of the appeal to nature or is-ought fallacy, claiming that what something ought to be is what it truly is, and what it ought not to be is what is alien to its nature. This thinking got going in earnest with Platonic ideals. There ought not to be homosexuality in nature, therefore it is unnatural wherever it naturally occurs. But it’s comical to watch this used against homosexuals and their unnatural acts, while so many examples of such exuberance in nature are being pointed out. As a matter of fact, Remy de Gourmont pointed out that “Chastity is the most unnatural of the sexual perversions.”

Accommodating
    Anecdotal fallacy uses personal and reported accounts of experiences and events as premises or evidence. There may be multiple events and multiple witnesses giving testimony. A collection of these is often called anecdata, a word usually spoken with a scornful or pejorative tone. Anecdotes can still be data, and they can also provide us with useful information or intelligence about the world, even where they overestimate the frequency or regularity of what they describe. They just can’t be used to prove anything logically, and they certainly lack the rigor that science requires. Several governments twisted this fallacy around when they forbade scientific research into certain classes of drugs, thus allowing themselves to claim that any reported benefits are merely anecdotal and therefore unscientific.
    Appeal to analogy, or false analogy, assumes that the detailed structures of extended analogies reliably inform each other of their corresponding parts. Analogy remains one of the most useful of our evolved heuristics, but it only suggests relationships to look for. It proves or constrains nothing. Correlative thought can make use of extended analogy without logical inference. One of the more popular arguments from analogy begins by noting similarities between artificial intelligence and the intelligence developed in the human brain, then concludes that if we program AI to follow the structural and functional patterns of the human brain, then, when we reach a similar critical mass of complexity, that AI will wake up and be conscious just like us. This is by no means certain, and if consciousness is an emergent property with roots and dimensions in physics, chemistry, and biology, and not simply electronics, the conclusion might be quite doubtful. As another example, the frequent use of the phrase “war on drugs” may confuse or conflate this with military war in such a way as to normalize or legitimize the use of lethal force on unarmed civilians.
    Argumentum ad ignorantiam, appeal to ignorance, asserts that a proposition is true until it can be proven false, or less commonly, that it’s false until it can be proven true. In either case, absence of evidence is not evidence of absence. That a defendant may lack an alibi does not make him guilty. Science doesn’t violate this, because it merely asserts that something can’t be accepted until it’s been tested. In the cultural domain, it’s an onus probandi, or burden of proof fallacy. The argumentum ex silentio, argument from silence, draws conclusions based on the absence of something.
    Argumentum ad lapidem, or appeal to the stone, is the response of the stone wall to a proposition. A: that’s a really stupid idea. B: make your case. A: it’s just stupid, that’s all. This is the negative version of the proof by assertion, or “just because.”
    Argumentum ad nauseam, or argumentum ad infinitum, appeal to “OK, just make it stop,” is a proof by attrition, wearing the other down, or argument to the point of disgust. Repetition is what hammers it home. This may employ the availability bias, where it’s called an appeal to repetition, or proof by assertion, keeping the idea close at hand so that eventually it’s just assumed. Gaslighting is one of its forms. The false is normalized. It is because it just is.
    Enumeration of favorable circumstances, proof by selected instances, biased sample, card-stacking, or the cherry-picking fallacy will admit only selected statistical data to support the premises of an argument. We find the patterns that best fit our expectations and presumptions, and then we ignore, or even conceal, data we don’t wish to see. It highlights the importance of objective logs or record-keeping and the use of randomization in sample selection. A version called the Texas sharpshooter fallacy is the adjustment of a hypothesis after the data is collected, such that the bullseye is moved or placed after the fact, over the highest concentration of bullet holes. This is permissible as an investigative technique, as inductive reasoning looking for natural laws and probabilities, but not as a logical argument.
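    For a minimal sketch of why that randomization matters (the numbers here are invented for illustration), compare an average estimated from a cherry-picked sample with one estimated from a random sample, in Python:

        import random

        random.seed(3)  # fixed seed, so the illustration is reproducible
        population = [random.gauss(50, 15) for _ in range(10_000)]

        cherry_picked = sorted(population)[-100:]       # keep only the data we like
        random_sample = random.sample(population, 100)  # let chance do the choosing

        print(sum(cherry_picked) / 100)   # a confident, wildly inflated "average"
        print(sum(random_sample) / 100)   # close to the true mean of about 50

    The cherry-picked estimate isn’t just wrong, it’s wrong with conviction, because nothing in the selected data is left to contradict it.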
    Extrapolation fallacy, unreasonable or unwarranted extrapolation, assumes that past and present trends will continue with constituent and contributing factors holding constant. Extrapolation, then, is useful only up to the point where we can safely assume that the trend will remain unaffected by its own consequences or other unforeseen factors. Such extrapolation tends to be one-dimensional and ignore feedback loops and other systemic interactions. While Thomas Malthus was guilty of this, and failed to anticipate dampening forces to runaway population growth, this fallacy is seen more often today in those who argue against the same dangers of population growth and its environmental consequences. We are awash with projections of what life will be like in the year 2100, including UN population projections that are based solely on human reproductive choices. But almost none of these looks at multiple dimensions, the fact that we are now in overshoot, the consequences of long-term damage done to support systems, and the potential for cascade failures in complex systems.
    False equivalence will make a specious comparison between two persons or situations with superficially similar descriptions, while their actual characteristics may differ wildly in quality or quantity. Minor flaws or transgressions may be equated with major ones. One candidate’s arrest for shoplifting paper clips may be compared to another’s embezzlement of millions of dollars. Since both have criminal records, they are equally criminals now. Intelligent design and evolutionary theory will both provide accounts of speciation. Therefore, both deserve to be taught. Killing a person is murder, and aborting a fetus kills a potential person, therefore abortion is the same as murder. This is also called false balance, but it’s marketed as fair and balanced, giving equal weight to both sides of an argument.
    Guilt by association will attempt to discredit an idea based on its association to an unpleasant experience. It asserts that because two things share a property or trait, they must have more than that property or trait in common. It will also appear in the social domain with some different properties, and it’s a cognitive bias as well.
    Hasty generalization, converse accident, false induction, or insufficient sample, will extrapolate general conclusions from an inadequate sample size, including single-trial learning. Stereotyping can sometimes be cited as an example, but not always. The smaller the sample size, the greater the margin of error. It’s related to the anecdotal fallacy, but it has a more specific reference to statistical significance.
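    The relationship between sample size and error can be made roughly quantitative. Assuming a simple random sample and a 95 percent confidence level, the textbook margin of error for an estimated proportion p is approximately:

        \[
        \text{margin of error} \approx 1.96 \sqrt{\frac{p\,(1 - p)}{n}}
        \]

    With p = 0.5, a sample of 100 gives about plus-or-minus 10 percentage points, while a sample of 10 gives about plus-or-minus 31. Conclusions generalized from a handful of instances are swimming in noise.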
    Imperfect enumeration is the overlooking of an alternative, or relying on an incomplete dataset. The universe of discourse is deliberately, inconveniently, or unnecessarily limited. A version called the either-or fallacy is grossly imperfect in eliminating the dataset of everything between its extremes.
    Insufficient evidence, or the perfect solution fallacy, is the claim that no amount of evidence can be collected that will adequately prove a truth. The unexplained might as well be forever unexplainable. Offered in mid-argument, it’s a moving of the goalposts towards this asymptote. Therefore, the goal can’t ever be achieved or the conclusion ever supported. In effect, it’s a claim that an inductive argument cannot succeed because it’s not a deductive one. There are questions of justice centered on this, such as having to prove a charge beyond a reasonable doubt, highlighted by the number of convicts imprisoned or executed for crimes that they didn’t commit.
    Poor comparison is a simple form of appeal to analogy, or maybe an appeal to simple simile or metaphor. It can be based on superficial characteristics unrelated to the germane issue, or incomplete, or inconsistent. It’s sometimes comparing apples to oranges. It can be used to increase or decrease the perceived value or desirability of one of the pair. Tofu contains more protein than carrots, therefore it makes a better snack.
    Questionable premise might present erroneous claims, official myth, common assumptions, platitudes, and cultural biases as though they were established and true propositions. But the poor premise doesn’t settle the conclusion: it neither invalidates nor supports it.
    Reductio ad absurdum, or reduction to absurdity, is a kind of extrapolation fallacy. It’s a means of questioning a proposition by asserting that, if this is correct, then even the most extreme of its consequences must also be correct. An argument is extended until it collapses. From Monty Python’s Life of Brian, “All right, but apart from the sanitation, medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?” There may be overwhelming exceptions to something asserted as generally true. Offering an argument subject to this criticism may be called proving too much, such that if the argument were accepted, the conclusion would be too much of something. As a disproof, it takes an erroneous premise and draws increasingly unlikely conclusions from its extrapolation. This is related to the slippery slope fallacy. But this really only suggests that the limits of a conclusion’s asserted truth may need to be clarified within the argument.
    Reification, hypostatization, or fallacy of misplaced concreteness, will take up an abstract idea, or an emergent subjective property, and then regard it as a concrete or material thing. It’s related to the referential fallacy in the linguistic domain, when the idea is a word or grammatical function. Mystical experience is often reified into metaphysical reality. In my oceanic experience I felt that my consciousness went on forever. Therefore, consciousness goes on forever. I encountered the one true god, and my heart was filled with love. Therefore, god is love and he loves all things.
    Suppressed evidence, unstated premises, or evading the facts, is more than simple error of omission, or the cognitive bias of cherry-picked data. It’s the intentional concealment of evidence inimical to the desired conclusion. This is frequently seen in the justice system, where it can range in nature from judge-approved inadmissibility to malicious prosecution. Data may also be concealed with deceptive wording or statistics.

Situational
    Appeal to probability, or possibility, will take a premise or statement as true merely because it’s probably or possibly true. Statistics is its own, separate branch of logic and cannot make such claims of truth or falsehood. However, an appeal to probability will likely have a useful place in decision making, particularly when the option deemed most probable excites us to approach or withdraw.
    Argumentum ad baculum, appeal to force or the cudgel, is the “because I said so” argument, like it or else. “Might makes right” is a common expression here, as is “the victor writes the history.” “Survival of the fittest” is sometimes used as well, but this betrays a misunderstanding of that phrase. Force can refer to any form of coercion, blackmail, or extortion, including psychological. Duress doesn’t need to be physical.
    Base rate fallacy, or base rate neglect, will concentrate on localized, event-specific information and disregard the general probabilities or base rates when the two are presented together. It’s one of the framing fallacies that won’t see the forest for the trees. We make exceptions to statistics whenever we find it convenient or flattering.
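    A worked example, with invented but typical numbers, shows how far astray this leads. Suppose a condition affects 1 percent of a population, and a screening test catches 95 percent of true cases while also flagging 5 percent of the healthy. Bayes’ theorem gives the probability of the condition, given a positive result:

        \[
        P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
                    = \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.05 \times 0.99} \approx 0.16
        \]

    A positive result means only about a one-in-six chance of actually having the condition, yet the vivid, event-specific 95 percent is the number that sticks.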
    Cum hoc ergo propter hoc, with this, therefore because of this, will draw a specious causal relationship between concurrent phenomena. This is a false cause argument, or non causa pro causa, non-cause for cause. It’s one of the most common fallacies committed by scientific researchers and science journalists, as well as by journalists in general, especially in attention-grabbing headlines. Correlation can suggest causation, and this may be a necessary condition, but it isn’t a sufficient one. Other explanations will need to be eliminated. Creative people may enjoy LSD more than others, but that doesn’t mean that LSD opens creative channels. It may simply mean that creative people are less risk-averse and more inclined to explore alternative states. The correlated phenomena may also come about by a shared third factor, or they may be purely coincidental, since most things don’t really happen for a reason. (As an aside, Jung’s idea of synchronicity will draw potentially specious correlations between concurrent phenomena, but with the causality removed).
    Fallacy of the single cause, or causal oversimplification, strips explanatory depth from a proposition or argument and concentrates on describing a single cause that is not alone sufficient to produce the effect in question. Since cures may be regarded as causes in healing, singling out specific remedies as causes of good health may be seen in this category. Quack healers will each tout their specific form of snake oil as the one and only miracle cure, and that should be taken as a metaphor for advertised solutions in general.
    Gambler’s fallacy, or the fallacy of the maturity of chances, might expect that something that’s been happening more frequently, or less so, is due for a change in trend, according to some law or principle of balance, while at the same time, it will expect runs of good or bad luck to continue, until they will somehow cease to continue (as with hot and cold hands). It’s a neglect of statistical independence: when heads have come up ten times in a row, the odds on the next toss are still 50-50. There is no accumulation of good or bad luck to compensate for. This is conceptually related to the regression or regressive fallacy.
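    Anyone who doubts this can watch the independence directly. Here is a minimal simulation sketch in Python (the streak length and the number of flips are arbitrary choices, added for illustration):

        import random

        random.seed(1)  # fixed seed, for a reproducible illustration
        flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

        streak = 5  # examine every toss that follows five heads in a row
        after_streak = [flips[i] for i in range(streak, len(flips))
                        if all(flips[i - streak:i])]

        # The coin has no memory: this prints a value very close to 0.5
        print(sum(after_streak) / len(after_streak))

    The tosses that follow a run of heads still come up heads about half the time, run or no run.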
    One-dimensional thought is an argument concentrating solely or primarily on a single dimension of a multidimensional problem. It’s related to the fallacy of the single cause or causal oversimplification, but it includes cures as well as causes. Complex systems want holistic systems analysis, not reductive or narrow perspectives. The most frightening example is the typical human approach to the carrying capacity problem, which will likely look at only a single issue, such as overpopulation, overconsumption, non-vegan diets, topsoil loss, Ponzi economics, standard of living, wealth distribution, food distribution, water shortages, women’s education, biodiversity, non-renewable resource depletion, etc., while leaving the others ignored. Even lay greens and environmentalists largely neglect the big picture and longer time horizons, to concentrate on a single cause or effect. Whack-a-mole, an arcade game in which players strike toy moles with mallets, provides a good analogy to non-comprehensive, piecemeal solutions that result only in temporary gains. That struggle never ends.
    Overprecision will present evidence as being more precise than it actually is. 89.7432 percent of logicians call this a fallacy. One or more of the digits presented aren’t significant, or are smaller than the margin of sampling error. Implausible levels of precision may signal bogus data or a lack of credibility in the presenter.
    Perfectionist fallacy, line-drawing fallacy, moving the goalposts, or raising the bar, occurs in mid-argument, when the response to an initially adequate provision of evidence or information is now met with a demand for more evidence. Still more may be demanded ad infinitum. This is a more dynamic version of the static insufficient evidence fallacy.
    Post hoc ergo propter hoc, after this, therefore because of this, will draw a specious causal relationship between sequential phenomena. This is slightly different from cum hoc, and tends to be a somewhat more seductive argument that suggests the former is the cause of the latter. But third factors may easily be at work here as well. A common argument suggests that heroin addiction often follows experimentation with pot. Therefore, pot is a gateway drug and a cause of heroin addiction. But the newcomer to pot will also be newly exposed to the black market dealers, who also have an assortment of new goodies to try. The novice has just discovered that his government has been lying to him, and he now has less respect for following the law. Real causes here must include the laws themselves, their role in promoting a black market, and any lies that were told to get them passed. The post hoc thinking may also be a consequence of superstitious thought subsequent to single-trial learning. My lucky underpants win football games. I did my special dance, and three weeks later, it rained.
    Regression, or regressive fallacy, is a conclusion drawn from a small-picture slice or peek within the statistical phenomenon called regression to the mean. Most statistical effects eventually return to mean values as the specific occurrences accumulate over time, and this is by the very definition of the word mean. But wilder fluctuations will occur more locally. Daily newspapers may commit this fallacy when they report “crime rates soaring” one week and then “crime rates plummeting” the next, while in larger frames the rate holds steady.
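    A toy simulation, with an invented and perfectly steady underlying rate, shows how much drama pure noise can generate from week to week:

        import random

        random.seed(7)  # fixed seed, for a reproducible illustration
        # A hypothetical steady process: weekly counts are just noise around 100
        weeks = [random.gauss(100, 10) for _ in range(52)]

        print(min(weeks), max(weeks))    # the local "plummeting" and "soaring"
        print(sum(weeks) / len(weeks))   # the larger frame: very close to 100

    The extremes make the headlines; the mean makes the reality.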
    Relative privation, or it could be worse, or it could be better, will attempt to make a conclusion or outcome seem better or worse by comparing it to a worst or best-case scenario. Eat your goddamn Brussels sprouts, there are children starving in Africa. But even if I myself was starving, I still wouldn’t eat your goddamn Brussels sprouts.
    Short-run fallacy is the assertion of a myopic conclusion without regard to long-term developments or consequences. It will look primarily at immediate and short-term benefits. An example might claim that we can feed everyone on earth using only our available arable land, which ignores population growth, topsoil loss, fresh water shortages, aquifer depletion, drought cycles, peak fertilizer, and so on. It may be paired with the extrapolation fallacy, or unreasonable extrapolation.

Emotional
    Argumentum ad consequentiam, or appeal to consequences, will defend or refute a position according to conjecture about where its acceptance might lead, whether desirable or not, using the future as if it were present. It’s a hobgoblin argument if such an end is nowhere in sight. We can’t look at this because we will likely be disappointed. It asserts that the decision must be made in terms of favorability instead of logical merit. But outside of logic, taking steps to avert bad consequences isn’t always a bad idea.
    Argumentum ad passiones, or appeal to emotions, will use or manipulate a person’s affective response to support a conclusion or win an argument. If it belongs anywhere, it belongs in rhetoric or persuasive speech. Impassioned plea has no effect on the logical strength of a conclusion, even if the assertion convincingly urges something like morally correct behavior. Just about any emotion may be used. Unpleasant feelings commonly named (appeal to ____) in this fallacy are disgust, envy, fear, flattery, guilt, hatred, loss, pity, ridicule, sadness, and spite. It’s asserted that these states can be either cultivated or avoided by accepting the argument and taking action. Or we can argue the other way, and accept the conclusion, and take subsequent action that brings us acceptance, affection, comfort, hope, pride, relief, status, winning, and the gentle touch of women with heaving bosoms.
    Misleading vividness is the use of an emotionally charged, exceptional, or highly salient anecdote or piece of evidence to bias a conclusion or drive a point home. It’s an over-dramatization that gives a point undue emotional influence.
    Persuasive definition, or definist fallacy, is the use of stipulative definitions of terms in an argument that employ emotionally charged buzzwords, trigger words, images, or descriptions to slant the tone of an argument. We might define a liberal as someone who lacks respect for our national traditions, and then proceed to argue why it’s conservatives who must uphold tradition. This is an embedded form of argumentum ad passiones.
    Slippery slope is an appeal to a fear or anxiety that “if you give them an inch, they will take a mile.” Any movement made in the general direction of an undesirable conclusion, however incremental, must be disallowed, or else it will lead to that conclusion. This is an unwarranted extrapolation. It’s similar to a black-and-white fallacy, since there is no claim of a satisfactory middle ground. Allowing A to happen will eventually lead to Z. Domino theory is an example. Outside of logic, there are exceptions: an alcoholic might be wise to avoid that first shot of whiskey. You might hear one claim that he was run over by the caboose.

Personal
    Appeal to final outcome, purpose, or intentionality, will assert that things develop towards a preconceived end, as if by design. This is unrelated to the argumentum ad consequentiam, or appeal to consequences. The concept goes back to Plato’s world of forms or ideas. This is placed in the personal domain because the phenomenon is a projection of the intentionality or agency of living beings onto an inanimate universe. It reverses cause and effect, so that the outcome produces itself. It underpins ideas of intelligent design and the “strong anthropic principle” that suggests that the world, and even the laws of physics, were made as if for us. This has appeared in science in useful metaphorical form as morphogenetic fields, and a bit less plausibly as Rupert Sheldrake’s “morphic resonance.” English grammar can facilitate slips into teleological thinking: it’s too easy to say things like “the brain evolved to do x.” This is a common trap in discussions of subjects like evolutionary theory and requires careful phrasing to avoid it. Douglas Adams had a useful image here: “This is rather as if you imagine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in - an interesting hole I find myself in - fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, it must have been made to have me in it!’”
    Ergo decedo, therefore leave, or then go off, is to resolve or conclude an argument by truncating it. You take your ball and go home. It’s also called the traitorous critic fallacy, and is sometimes vocalized as “love it or leave it.” It will use a critic’s unsympathetic relationship with the thing being criticized as an excuse for rejection, as though no critique is permitted. The critic is then reassigned to an out-group that has nothing to say worth hearing.
    Psychogenetic fallacy is a Bulverism, a term coined by C. S. Lewis, who explains: “you must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly.” This first assumes that an argument is wrong, making it circular, and that the wrongness has its source in the mind of the argument’s proponent, perhaps in a psychological defect, or a deficiency in maturity. Since the thought originates or is passed along by such a poorly endowed or biased mind, the thought itself must therefore be in error.
    Special pleading, or ad hoc reasoning (distinct from ad hoc hypothesis or the just-so story), attempts to cite something as an exception to an accepted understanding of what’s due in similar cases, just because, perhaps with a wild claim, just not with hard proof. Conspiracy theories are a common example. A special case of special pleading is an appeal to incredulity: it’s either so unexpected or unbelievable that it must be divine intervention. He moves in such mysterious ways. Or maybe it must be wrong, even if the numbers say it’s right. This time is different: because it just is. You can just feel it.
    Sunk cost fallacy is a deployment of the evolved escalation of commitment heuristic, implying that making a change imperils a prior investment and incurs the unacceptable costs of acquiring something new and then paying to dispose of something old.

Social
    Argumentum ad crumenam, or appeal to the purse, is the assumption that those better endowed with means and wherewithal are also more likely to be correct. A common example of the fallacy equates wealth with merit, and then attempts to justify a plutocracy as a meritocracy. Its more rarely-used opposite is the argumentum ad lazarum, or appeal to poverty, claiming that the simpler folk, the yeoman farmers, the working man, or the salt of the earth are better endowed with merit and truth. It remains true that successful people will tend to be better educated, and might even have somewhat better genetic predispositions to an intelligence that’s independent of nurture, but this is a wild generalization and the very opposite of an argument for unequal rights and opportunities. Wealth has also been shown to be inversely correlated with compassion.
    Argumentum ad hominem, or argument to the person, attempts to refute a position by attacking the character or motives of its holder or proponent. This is seen a lot in court, with advocates seeking to impugn a witness’ testimony because of some unrelated human frailty. Since when are people who’ve had two glasses of wine prone to vivid hallucinations? Its opposite is seen in the use of character witnesses. But dismissal of a general character is not disproof of a specific characteristic, or vice versa. A variant called “poisoning the well” attempts to discredit every bit of information a person might offer. This can even work retroactively, as when numerous convicts must be released when a prosecutor is found to have tampered with a single piece of evidence. As a fallacy, this attempts to discredit more than may merit discrediting.
    Argumentum ad populum, or appeal to the people, the masses, or the gallery, is a fallacious appeal to the bandwagon bias or majoritarianism, or a claim that popular agreement, public opinion, the majority vote, or common knowledge constitutes enough validation. This is despite the ease with which mobs can be moved around at will with loaded buzzwords and persuasive rhetoric. A narrower expression of this, argumentum ad numerum, or appeal to numbers, concentrates emphasis on the number of supporters or detractors that the premise or conclusion has. Political polling at election time is an example of this.
    Argumentum ad verecundiam, an appeal to authority or respect, points to an expert’s opinion to verify a conclusion. It may also point to a document held in high regard, although references to laws are treated differently. Any true expert will merit a hearing, particularly if they are speaking within one of their fields of expertise. Sometimes we can assume that a scientist like Einstein was also a philosopher of some note, but perhaps not an expert on soft drinks. Insistence on the certification of expertise is called credentialism, which may be thought a corollary of this fallacy. The word of an authority, even at the heart of the expertise, is merely grist for the mill, and not a philosophical proof. An appeal to authority may also falsely link an idea or statement to a famous name or school of thought, borrowing that celebrity dishonestly or disrespectfully, as we see most frequently in attaching vapid and vacuous new age platitudes to Buddha or Laozi.
    Guilt by association will attempt to discredit an idea based on its association to an undesirable out-group. This is one opposite of an appeal to authority. It asserts that because two things share a property or trait, they must have more than that property or trait in common. It will also appear in the accommodating domain, and it’s a cognitive bias as well.
    Is-ought fallacy, with specific regard to this social domain, is a version of the argumentum ad naturam, an appeal to nature, or naturalistic fallacy and a function of the normativity bias (see cognitive biases). It assumes that normal human social behavior constitutes the definition of healthy behavior, and that abnormality is deviant. One version claims that the majority determines what is right, even in science.
    Tu quoque, or the “you too” fallacy, will defend a specious bit of reasoning by claiming that the same error was made by the argument’s opponent. If you did that, so can I, and my doing it is therefore legitimized.
    Two wrongs make a right will assert that a wrong may be answered by one equal and opposite, to cancel the first one out. It’s the behavioral equivalent of the double negative in grammar. This might be used as a justification for lex talionis, the law of retaliation, revenge, or other crimes of passion.

Cultural
    Ad hoc hypothesis, the just-so story, or far-fetched hypothesis, provides an hypothesis in a narrative explanation for a cultural practice or biological trait. This might provide either a stopgap accounting or a stay of execution against the falsification of a theory. Until they are shown to exist by experiment, dark matter and dark energy may do nothing more than quantify the discrepancies between our models, expectations, and measurements. Another example, from anthropology, has claimed that the Anasazi Indians must have been a highly spiritual people because they built so many round kivas, and kivas were used in religious ceremonies. Or: ancient humans buried their dead for religious reasons, not to avoid the smell of death, or to keep predators from developing a taste for human flesh. These assertions are common in science, and the hard sciences are not exempt.
    Argumentum ad antiquitatem, appeal to antiquity or tradition, argues that the persistence or age of a tradition is a testimony to its correctness. This is how it’s always been done, so it’s stood the test of time. It may entail denying the relevance of changing contexts and a maturing culture. Propositions do not accumulate truth in this manner. Although surviving trials over long stretches of time may well be suggestive of viability, survival can depend just as much on adaptive fitness and improvement. Taken across a smaller span of time, a more synchronic version is the appeal to common practice: everybody does this. But this is still a fallacy.
    Argumentum ad novitatem, appeal to novelty, is the opposite of an appeal to antiquity. It’s a claim of superiority based on modernity. It might assert that something newer is better simply because culture is evolving and our technology, science, or other understanding is therefore improving with time. As future-telling, it can also be an appeal to evidence, knowledge, authority, or feedback that doesn’t exist yet. This is not one of those inferior, primitive ideas: this is modern, or even post-modern. But postmodernism was a huge step backwards, instead of the sideways that things should have gone.
    Argumentum ad temperantiam, or appeal to moderation, argues that a midpoint or compromise position between two opposing arguments must be the true or optimal resolution. It’s also called the gray fallacy, the golden mean fallacy, or false compromise.
    Broken window fallacy, or the glazier’s fallacy, asserts that gross economic activity is a better measure of prosperity than the measure of net activity that subtracts damage repair and opportunity costs. GNP and GDP metrics are still in common use as a measure of prosperity, and even national happiness, even though these count quantities like the health care costs of environmental damage and pollution, and the inflated prices from long-term resource depletion. Perpetual war and toxic sludge are regarded as economic goods. But these unintended negative consequences will affect the economy in ways that aren’t seen or accounted. The broken window image was a parable introduced by Frederic Bastiat in 1850.
    Failure to establish necessity or sufficiency. The assertion that one statement is a necessary and sufficient condition of another means that the latter is true if and only if the former is true. Necessity is established by showing that one is a sine qua non (not without this) of the other. Sufficiency can be established by showing that all prerequisites for truth have been met, and not merely the one necessary condition.
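    In notation (added here for illustration), where N is a candidate condition for a statement S:

        \[
        \text{N is necessary for S:} \quad S \Rightarrow N \quad (\text{equivalently}\ \neg N \Rightarrow \neg S)
        \]
        \[
        \text{N is sufficient for S:} \quad N \Rightarrow S
        \qquad
        \text{N is necessary and sufficient for S:} \quad N \Leftrightarrow S
        \]

    Breathing is necessary for finishing a marathon but hardly sufficient; being a square is sufficient for being a rectangle but not necessary.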
    Fallacy of false attribution invokes or appeals to an irrelevant, unqualified, imaginary, or fabricated source in support of an argument. Perhaps the most commonly used example: It’s in the Bible, which is the word of god. How on earth could that be questioned?
    Fallacy of opposition will assume that all ideas proposed by an opposing school or camp must be either suspected of carrying that out-group’s cooties, or must otherwise be rejected without any consideration. Those people over there just can’t think straight about anything.
    False claim of settled science is an assertion that all of the facts are in and science has fully settled a question. In fact, science doesn’t think like this: the claim is outside the science box. Science is a method of inquiry, not a system of belief. Journalists are more frequently guilty than researchers and scientists of presenting early conjecture as scientific discovery, or scientific theory as fact, and this contributes much to the public’s confusion of science with religion. Claims are particularly insidious when scientific models that seem to account for most measurements and data are assumed to have the experimental confirmation that the scientific method requires.
    Fine-print qualifications are hidden or unstated hypotheses or premises that are specifically not intended to be examined or questioned, but which can undermine an entire argument, sometimes by begging a question. Arguments about the historical details in the lives of Abraham, Moses, and Jesus fail to disclose the fact that no historical evidence whatsoever exists for these men as historical persons.
    Genetic fallacy, or fallacy of origins, or fallacy of virtue, will draw conclusions based upon earlier and possibly obsolete meanings of its words or premises. Current meanings and contexts are devalued or dismissed. This overlooks the evolution of meanings (philology) and their adaptations to newer contexts. The evolution of word meanings does have a lot to tell us about culture and its evolution, but etymology is still more art than science, and it’s certainly not a branch of logic. Hogeye Bill (see the Links) gives this example: “You’re not going to wear a wedding ring, are you? Don’t you know that the wedding ring originally symbolized ankle chains worn by women to prevent them from running away from their husbands? I would not have thought you would be a party to such a sexist practice.”
    Inflation of conflict reasons that if two expert sources or authorities cannot agree on a point or issue, then no conclusion can be reached at all. The field of expertise itself can now be questioned or dismissed, as though real expertise would mean that all questions are settled decisively.
    Intentionality fallacy is the claim that the meaning of an expression must be consistent with the intention of the person or author proposing it. Things can also mean what we want them to mean. This is not, or should not be, the same as claiming that there is no intended meaning inherent in a text, an assumption taken up by some frivolous post-modern philosophers. The statement “all flesh is grass” in Isaiah 40:6 wasn’t intended to refer to either primary production or nutrition, but it has a clear (and far more profound) meaning in this newer sense.
    Lesser of two evils argues that a choice or option will be acceptable merely because its selection entails fewer negative consequences than the alternative. There may be a false dilemma or black-and-white fallacy embedded in the enumeration of options, or an imperfect enumeration in the overlooking of an alternative. An example is Churchill’s unsourced aphorism that “Democracy is the worst form of government, except for all the others that have been tried.” This will discourage many of us from continuing the search (and the majority agrees).
    Onus probandi, or the burden of proof fallacy, attempts to shift the burden of proof in an argument onto the one contesting its claim. In logic, the burden always falls on the one making the assertion. In law, this burden falls on the plaintiff or prosecution and its highest standard is “beyond reasonable doubt.” This is still shy of “beyond any shadow of a doubt.” Hitchens’s Razor restates the principle as: “What can be asserted without evidence can be dismissed without evidence.” In the accommodating domain, this is the argumentum ad ignorantiam, or appeal to ignorance. This can also be a sleazy excuse to say “there are more things in heaven and earth …”
    Quoting out of context, contextomy, or quote mining, is a complement to false attribution, wherein the attributed author might be correct, but the context, meaning, or intent of the quotation has been altered. The Bard notes that “the devil can cite Scripture for his purpose.”
    Relativist fallacy, or subjectivist fallacy, is the claim that something may be true for one person but not for another. Clearly this is only specious in cases where it might be applicable, as with hard objective facts or evidence. And what’s best for people at different developmental stages may clearly differ. It’s extended to claims that the opinion of one society or culture is as good as that of the next, that each is true in its own social or cultural world, and perhaps more than this, all are equally true. This may assert that any minority opinion is equally valid, or that all assertions or criticisms, no matter how idiosyncratic, must be entertained, or that a simple claim of a controversy, even by just a handful of dissenters, legitimately calls an entire theory into question. The 3% of scientists who dispute that Earth is older than 6,000 years should be given the same respect as the opposition.

Linguistic
    Argumentum ad logicam, appeal to logic, the bad reasons fallacy, appeal to fallacy, or the fallacy fallacy, claims that a conclusion drawn from weak or doubtful premises, or from an incorrectly structured argument, is therefore wrong. Sounder arguments than those which have failed may yet support the conclusion. The failure of an argument invalidates only the argument, not its conclusion.
    Continuum fallacy, or line-drawing fallacy, or fallacy of the heap, will claim that two items are not distinct because they exist on a spectrum and differ only by degree along a continuum. One thing fades into another. This is an effect of conceptual vagueness or misconception and can sometimes be expressed in Zeno-style paradoxes. This is increasingly used to deny the existence of race (in humans) because the genes involved are relatively few in number, because there has been so much mixing since we began migrations, and because other points of intra-racial genetic diversity are even stronger. You also see this in the boneheaded DSM-5 move to lump all of the autism-related conditions into a single “disorder,” thereby turning the word inarticulate into a verb. Autism is a cluster of conditions, and technically not even a spectrum, since there are several dimensions to the texture of the cluster.
    Ecological fallacy, or ecological inference fallacy, infers the characteristics of an individual from the aggregated data of a group to which the individual belongs. But the assignment to a group is an inductive process and a deduction like this is unwarranted. A study of men with long hair shows that they usually love their magic mushrooms. You have long hair, therefore, would you like to purchase some shrooms? Then you will have your right to remain silent.
    Etymological fallacy is a genetic fallacy that regards an etymon as the true meaning of a word, potentially introducing equivocation into an argument. This does nothing to devalue the use of etymology in elucidating ideas and their evolution, which is a subject for philology. Etymology, however, remains more art than science. We can suggest that the word economy still means thrift and the conscientious allocation of resources, and then we can ask what went so horribly wrong.
    Fallacy of accent is the misuse or ignorance of shifts in emphasis in words or phrases in the middle of an argument. The stress on particular words and phrases that we can add to speech is unavailable in printed matter, except where it’s put in quotes, caps, italics, or bold type. This is one of Aristotle’s linguistic fallacies.
    Fallacy of accident, also called dicto simpliciter ad dictum secundum quid (from the statement unqualified to the statement qualified), might best be termed sweeping generalization. Exceptions to a rule are dismissed, and all specifics will be assumed to meet the general requirements. This ignores the limitations of rules of thumb or soft generalizations, so it’s also called ignoring qualifications. Generalizations are applied mechanically, with little regard for specifics. Stereotyping is the commonest example, where it also partakes of the accommodating domain. While called a logical fallacy by Aristotle, it’s really linguistic or semantic: the general category is too broad for the particulars, and exceptions can exist within the category. You can’t categorize birds as beings capable of flight. The correction is found in the phrase “the exception proves the rule,” a statement which makes little sense without knowing that the word prove originally meant to test, to show the boundaries or limits of.
    Fallacy of amphiboly is syntactic ambiguity, a confusion arising not from the ambivalent meaning of a term (semantic ambiguity) but from a sentence structure that can be interpreted in more than one way. The results are often humorous. Simple examples found in the headlines: British left waffles on Falklands, and Students Expect to Learn from Dead Space Ants. This is one of Aristotle’s linguistic fallacies.
    Fallacy of composition is the assumption that what is true of the part is also true of the whole. Since the committee is composed of intelligent individuals, it follows that the committee will make intelligent decisions. This is one of Aristotle’s linguistic fallacies.
    Fallacy of definition refers to use of improperly defined terms within an argument. Terms may still be stipulatively defined for current purposes. The definitions should define what’s necessary and sufficient for an object to fit the term. They should be without circularity or synonyms, except where terms are mutually defined in contrast with each other. They may at times define an object in terms of what the object is not, as cold is the absence of heat, or blind is a lack of sight, but generally, definitions should describe both content and its boundaries.
    Fallacy of division is the opposite of the fallacy of composition: it assumes that what is true of the whole will also be true of the part. This fallacy ignores synergy and emergence. I am wide awake now (now that I’ve had my coffee), therefore my cells and atoms are conscious.
    Fallacy of equivocation, or semantic ambiguity, will use a word in a premise or argument that has two different meanings in this particular context. When it’s necessary to use a polysemous word (one with several different meanings) it’s logically necessary to provide a stipulative definition, in which the term is given a specific or specified meaning for the local purpose of this argument or discussion. A narrower version of the fallacy, called the fallacy of four terms, will use the term in two different ways within the same argument. This is one of Aristotle’s linguistic fallacies.
    False dilemma, false dichotomy, either-or fallacy, excluded middle fallacy, bifurcation, or the black-and-white fallacy will remove all of the in-between or gray areas from the universe of discourse and assert or imply that there are only two conditions or options. The number of names this has is an indication of how widespread it is. It may be the most common fallacy, or else it’s a close runner-up to false cause or specious correlation. Used in argument, it’s often used to press for the lesser of two evils. Where political two-party systems dominate, the universe of discourse is often constrained to only two mutually-exclusive sets of talking points. And in criminal court, the verdicts may be reduced to guilty or not guilty, with mitigating circumstances accounted for only where allowed in sentencing.
    Figure of speech, as one of Aristotle’s linguistic fallacies, originally concerned confusion due to semantic morphology, as of case, gender, or tense, but the meaning has expanded now to include confusion due to inconsistencies in and exceptions to morphological rules. According to the rules, inflammable should be the opposite of flammable, but they have the same meaning. And de-ceased doesn’t mean to come back to life. Janus words may also appear here. Other accounts conflate this with the fallacy of equivocation or semantic ambiguity.
    Ignoratio elenchi is literally an ignoring of a refutation, but it’s more often conceived as irrelevant conclusion, or missing the point: drawing conclusions from an argument that aren’t the ones to which the argument leads. This is one of Aristotle’s logical fallacies.
    Loaded questions will limit the range of responses to those supporting the conclusion. This is similar to “many questions” but is simpler in form.
    Ludic fallacy, introduced in 2007 by Nassim Nicholas Taleb, is “the misuse of games to model real-life situations.” It’s a version of the appeal to analogy that might be expanded to include all mathematically derived models that are accepted as representing reality without experimental verification.
    Meaningless questions can never be more than rhetorical. Sir, may I ask a rhetorical question? To what degree does this question represent nonsense? What happens when an irresistible force meets an immovable object? How many angels can dance on the head of a pin?
    Misguided focus will concentrate on one aspect of a larger picture, or on a particular example, either of which may have alternative meanings outside of the current context. The Yippie Abbie Hoffman provided an example: “The headline of the Daily News today reads BRUNETTE STABBED TO DEATH. Underneath, in lowercase letters: ‘6000 killed in Iranian earthquake.’ … I wonder what color hair they had.” Yet another example comes from the school shooting and gun control debate in the USA of the twenty-teens. It should be clear the cause isn’t guns, and that prohibition doesn’t work, except to organize crime. That doesn’t mean background checks aren’t a great idea. But the real causes of the shootings can be traced to (so-called) political leadership feeding on frustration, insecurity, hatred, and the friction between in- and out-groups. This wrong-dimensional thinking, along with its misdirection and red herrings, is of course a well-used tool in political maneuvering.
    No true Scotsman is the insistence that all members of a general class must share more than the absolute minimum requirements of belonging to that class. You are not a true citizen of country X if you question the government of X. The definition of a class may be redefined to exclude specific members of that class. All X are Y. But it seems not all X are Y. Well then, at least all true X are Y.
    Non-sequitur, it does not follow, or not-in-sequence, is the drawing of a conclusion that fails to proceed in a clear chain from its premises. It takes a discontinuous leap somewhere along the chain of argument. The eyeball is so amazingly complex that it suggests the work of a designer. Therefore, the eyeball had a designer.
    Petitio principii, begging the question, also called circulus in probando, or circular reasoning, will assume the truth of a conclusion, or a truth about the conclusion, within the premise itself. Such an argument only admits evidence that agrees with the assumption that the conclusion is true. This is another of Aristotle’s logical fallacies. All arguments regarding the historical timeline and facts in the life of Jesus beg the question of whether or not there was such a person in history, for which there is still no evidence other than hearsay from several decades after the time he was supposed to have lived.
    Plurium interrogationum, many questions, the fallacy of presupposition, or complex question, is an assertion that contains implicit assumptions or variables requiring one part to be addressed before another. Until then, there can be no one answer or conclusion. It may implicitly assume something questionable to be true. The classic example is the question “Have you stopped beating your wife?” Or, “Where did you hide the money you stole?” It’s one of Aristotle’s logical fallacies. Some have argued that this refers more to questions than to arguments, but it still refers to a line of questioning providing answers that are meant to lead to a conclusion.
    Proof surrogate will employ a rhetorical claim to assert that a proposition has already been established, but where no real proof is specified. We have every reason to believe this. As any idiot knows, … .
    Prosecutor’s fallacy, or argument from rarity, will use specious statistical reasoning by employing a misunderstanding of conditional probability. It’s named after a prosecutorial claim that the probability of a random match is equal to a probability of guilt. If only 1% of people will match the suspect’s description, then the suspect is 100 times as likely to be guilty as anyone else. You can’t deduce a conclusion strictly from inductive evidence. It’s suggested by the joke about a guy bringing his own bomb onto a plane for safety, because the odds of there being two bombs then becomes astronomical.
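    A back-of-the-envelope calculation (all numbers hypothetical) shows where the reasoning slips: a rare trait narrows the pool of suspects, but rarity alone never equals guilt.

        # In a pool of 100,000 people, a 1% match rate still leaves
        # about 1,000 matching people. If exactly one is the culprit,
        # a match by itself implies roughly 0.1% guilt, not 99%.
        population = 100_000
        match_rate = 0.01

        matching_people = population * match_rate   # 1000.0
        print(1 / matching_people)                  # 0.001
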
    Questionable classification entails a faulty comparison and common entry into a questionable category or level of categorization. “Knowledge says the tomato is a fruit. Wisdom says not to put it in a fruit salad.” Classifications themselves are proposals or propositions, and are not necessarily facts if they depend on any arbitrariness or subjectivity used in developing the taxonomy. Normally, the taxa that survive make sense, but that’s not a rule. Studies may suggest that GDP stands alongside leisure hours as a fundamental component of happiness. Monroe Beardsley introduced the cross-ranking fallacy, using more than one basis of division in classifying, such as making a set out of pickled eggs, sweet pickles, sour pickles, and olives.
    Red-herring fallacy, irrelevant reason, arguing beside the point, or fallacy of distraction, is similar to ignoratio elenchi, but the irrelevant conclusion it will draw from its premises appears more as a deliberate attempt to change the subject, and is likely even more of a non-sequitur. The mind’s train of thought will be temporarily derailed in trying to discover a connection that doesn’t exist, or the hearer will not want to admit to an inability to follow the logic. Politicians at press conferences always carry pockets full of these little red fish.
    Referential fallacy is a form of reification that assumes that all words, verbal expressions, grammatical constructs, and mathematical models refer to actual things. This may not always be the same thing as mistaking the map for the terrain, since the terrain here may not even exist. Lightning is not an it that flashes but a dynamic process that lacks the boundaries that things have. If consciousness is made into a thing instead of a process, then the stuff that it’s made of must somehow be conserved after death.
    Straw man fallacy, or false attribution, will misrepresent, exaggerate, distort, caricature, or redefine a position to make it seem ludicrous or easier to attack. The actual argument, issue, point, or position will be ignored in favor of the misunderstood version. Conclusions to the Libet experiment may assert that free will or conscious agency has been disproven because it’s not in evidence in brain scans that span a few hundred milliseconds. These arguments have redefined free will to fit the results of the experiment and thus disallow any recursive loops between the cognitive and affective centers of the brain over longer spans of time. This particular example is also a Texas sharpshooter fallacy.
    Tautology, as used in logic, is closely related to begging the question, and uses or misuses terms in ways where they cannot be other than true. A term may be used to define itself. Conclusions restate the premises. Arguments can’t be negated owing to the definitions used. Pantheists will argue: we think of God as existence itself. Existence certainly exists. Therefore, God exists. Elsewhere, the Bible is the word of God, because it says in the Bible that it is the word of God. Another example is seen in the contention that marijuana should not be legalized because it’s harmful, and that its chief harm is the serious possibility of being arrested if one uses it, which is certainly harmful.

Formal Fallacies 

The most common forms of formal argument follow just a few basic patterns; a short counterexample sketch follows each group of fallacies below:

Conditional arguments take the “if (antecedent) … then (consequent)” form:
If A, then B. A. Therefore B (called modus ponens).
If A, then B. Not B. Therefore, not A (called modus tollens).
Fallacies:
Affirming the consequent flips this around. If A, then B. B. Therefore A. This was identified by Aristotle as a logical fallacy.
Denying the antecedent: If A, then B. Not A. Therefore not B.
Commutation of conditionals: If A, then B. Therefore, If B then A.
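These forms can be checked mechanically: an argument form is valid only if no assignment of truth values makes every premise true while the conclusion comes out false. A minimal brute-force sketch in Python (the helper names are ours, for illustration only):

    from itertools import product

    def valid(premises, conclusion):
        # Valid iff no truth assignment makes every premise true
        # while the conclusion is false.
        for a, b in product([True, False], repeat=2):
            if all(p(a, b) for p in premises) and not conclusion(a, b):
                return False              # found a counterexample
        return True

    implies = lambda p, q: (not p) or q   # the material conditional

    # Modus ponens -- If A, then B. A. Therefore B. -- holds:
    print(valid([lambda a, b: implies(a, b), lambda a, b: a],
                lambda a, b: b))          # True

    # Affirming the consequent -- If A, then B. B. Therefore A. -- fails:
    print(valid([lambda a, b: implies(a, b), lambda a, b: b],
                lambda a, b: a))          # False: A false, B true

Running the same check on denying the antecedent or the commutation of conditionals turns up counterexamples just as quickly.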

Categorical syllogisms combine four kinds of two-term categorical statements into three propositions: two premises and a conclusion. They are often illustrated in Venn diagrams.
All A are B, or the universal affirmative (symbolized as A);
No A are B, or the universal negative (symbolized as E);
Some A are B, the particular affirmative (symbolized as I);
Some A are not B, the particular negative (symbolized as O).
The very use of the verb to be here does have its detractors, notably Alfred Korzybski and the E-Prime fellowship.
Example: All A is B. C is A. Therefore, C is B. Or: No A is B. All C are A. Therefore, No C is B.
Fallacies:
Illicit negative, or affirmative conclusion from a negative premise: No A are B. No B are C. Therefore All A are C.
Illicit affirmative, or negative conclusion from an affirmative premise: All A are B. All B are C. Therefore, Some C are Not A.
Undistributed middle, or non distributio medii: All A is B. All C is B. Therefore, All A is C.
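The undistributed middle is easiest to expose with a concrete counterexample. A small sketch with Python sets, where the subset test stands in for “All X are Y” (the categories are ours, chosen only for illustration):

    # All A is B. All C is B. Therefore, All A is C. -- refuted:
    dogs    = {"Fido", "Rex"}
    cats    = {"Felix"}
    mammals = dogs | cats | {"a whale"}

    # Both premises are true:
    assert dogs <= mammals and cats <= mammals
    # ...but the conclusion "all dogs are cats" does not follow:
    print(dogs <= cats)   # False -- the middle term (mammals) is undistributed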

The disjunctive syllogism, or modus tollendo ponens, has a disjunctive or either-or statement as one of its premises.
Example: Either A or B. Not A. Therefore B.
Fallacies:
Affirming a disjunct: Either A or B. A. Therefore Not B.
Denying a conjunct: Not Both A and B. Not A. Therefore, B.
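Affirming a disjunct trades on reading or exclusively, when the logical or is inclusive. A two-case truth-table search (a sketch, not a formal proof system) finds the counterexample at once:

    from itertools import product

    # "Either A or B. A. Therefore not B." fails when both are true:
    for a, b in product([True, False], repeat=2):
        if (a or b) and a and b:   # premises hold, yet "not B" is false
            print("counterexample: A =", a, "B =", b)   # True, True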

These are just some of the simpler, more common, or more obvious examples. There are many more. Refer to the Logical Fallacies in the Links.



Bibliography and Links

Anderson, Michael L. “Neural Reuse: A Fundamental Organizational Principle of the Brain.” Behavioral and Brain Sciences 33 (2010) 245-313. PDF.

Ariely, Dan. Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper Collins, 2009.

Ariely, Dan. The Honest Truth About Dishonesty: How We Lie to Everyone--Especially Ourselves. Harper Perennial, 2013.

Asma, Stephen T. The Evolution of Imagination. University of Chicago Press, 2017. Gist: “Imagination is Ancient.” Aeon, 2017.

Baillargeon, Renée. “How Do Infants Learn About the Physical World?” Current Directions in Psychological Science, (1994) 133-140.

Balkin, J. M. Cultural Software: A Theory of Ideology. Yale University Press, 2003.

Barrett, Lisa Feldman. “The Theory of Constructed Emotion: An Active Inference Account of Interoception and Categorization.” Social Cognitive and Affective Neuroscience (2017) pp. 1–23. PDF.

Baumeister, Roy F. Escaping The Self: Alcoholism, Spirituality, Masochism, Other Flights From Burden Of Selfhood. Basic Books, 1991.

Beardsley, Monroe C. Thinking Straight; Principles of Reasoning for Readers and Writers. 4th ed. Prentice Hall. 1975.

Bergstrom, Carl T. and Jevin West. “Calling Bullshit in the Age of Big Data.” Hyperlinked course syllabus.

Brown, Roger. A First Language: The Early Stages. Harvard University Press, 1973.

Cequea, Alex. Why Facts Don't Convince People (and what you can do about it). A short YouTube video that covers some ground for its length.

Christiansen, Morten H. and Nick Chater. “Language as Shaped by the Brain.” Behavioral and Brain Sciences 31 (2008) 489-509. PDF.

Cialdini, Robert. Influence: The Psychology of Persuasion. Harper Business, 2006.

Correia, Vasco. “Biases and Fallacies: The Role of Motivated Irrationality in Fallacious Reasoning.” Cogency 3-1 (2011) 107-126.

Csibra, Gergely and Gyorgy Gergely. “Natural Pedagogy as Evolutionary Adaptation.” Philosophical Transactions of the Royal Society 366 (2011) 1149-1157.

Csikszentmihalyi, Mihaly. The Evolving Self: Psychology for the Third Millennium. Harper, 1993.

Damasio, Antonio. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Mariner Books, 2000.

Elder, Linda. “Critical Thinking and Emotional Intelligence.” Inquiry: Critical Thinking Across the Disciplines, 16-2 (1996) 7 pp.

Falkenstrom, Fredrik. “A Buddhist Contribution to the Psychoanalytic Psychology of Self.” International Journal of Psychoanalysis 84 (2003).

Fausey, Caitlin, et al. “Constructing Agency: The Role of Language.” Frontiers in Psychology (2010).

Fiedler, Klaus and Momme von Sydow. “Heuristics and Biases: Beyond Tversky and Kahneman’s (1974) Judgment Under Uncertainty.” Eysenck and Groome, Ch. 12 (2002) 146-161.

Fiske, Susan, et al. “Universal Dimensions of Social Cognition: Warmth and Competence.” Cognitive Science 11-2 (2006) 7 pp.

Flavell, John H. Cognitive Development. 4th ed. Pearson, 2001.

Frimer, Jeremy, et al. “Liberals and Conservatives are Similarly Motivated to Avoid Exposure to One Another’s Opinions.” Journal of Experimental Social Psychology 72 (2017) 1-44.

Gallagher, Shaun. “A Pattern Theory of Self.” Frontiers in Human Neuroscience 7 (2013) 13 pp. PDF.

Gardner, Howard. Frames of Mind: The Theory of Multiple Intelligences. Basic Books, 1983, 1993.

Genç, Erhan, et al. “Diffusion Markers of Dendritic Density and Arborization in Gray Matter Predict Differences in Intelligence.” Nature Communications 9, Article 1905 (2018).

Gigerenzer, Gerd, and Reinhard Selten (eds). Bounded Rationality: The Adaptive Toolbox. MIT Press, 2002. Chapter 3 pdf.

Gilovich, Thomas. How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. Free Press, 2008.

Gilovich, Thomas, Dale Griffin and Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002.

Goleman, Daniel. Emotional Intelligence: Why it can Matter more than IQ. Bantam, 1995.

Goleman, Daniel. Social Intelligence: The New Science of Human Relationships. Bantam, 2006.

Goleman, Daniel. Vital Lies, Simple Truths: The Psychology of Self-Deception. Simon & Schuster, 1996.

Gray, Heather, Kurt Gray and Daniel M. Wegner. “Dimensions of Mind Perception.” Science 315 (2007) 619.

Greenberg, David, et al. “Mentalized Affectivity: A New Model and Assessment of Emotion Regulation.” PLoS ONE 12(10): e0185264. 18 pp.

Haidt, Jonathan. “The Moral Emotions.” In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of Affective Sciences. Oxford Univ. Pr., 2003. pp. 852-870.

Hamrick, Phillip, et al. “Child First Language and Adult Second Language are Both Tied to General-Purpose Learning Systems.” PNAS (2018).

Henrich, Joseph. “Culture and Social Behavior.” Current Opinion in Behavioral Sciences (2015) 84-89.

Henrich, Joseph. The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton University Press, 2015.

Heyes, Cecilia. “New Thinking: the Evolution of Human Cognition.” Philosophical Transactions of the Royal Society 367 (2012) 2091-2096.

Hood, Bruce M. SuperSense: Why We Believe in the Unbelievable. Harper One, 2009.

Jasanoff, Alan. “The Cerebral Mystique.” Aeon, May 8, 2018.

Kahane, Howard. Logic and Contemporary Rhetoric: The Use of Reason in Everyday Life. Wadsworth, 1988.

Kolbert, Elizabeth. “Why Facts Don’t Change Our Minds.” New Yorker, Feb 2017.

Korzybski, Alfred. Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Fifth Edition. Institute of General Semantics, 1933, 1994. PDF, Complete.

Kovach, Bill and Tom Rosenstiel. Blur: How to Know What's True in the Age of Information Overload. Bloomsbury USA, reprint, 2011.

Kuhn, Deanna. “Metacognitive Development.” Current Directions in Psychological Science, 9-5 (2000) 178-181.

Kuhn, Thomas S. The Structure of Scientific Revolutions. International Encyclopedia of Unified Science, 1970.

Lakoff, George and Mark Johnson. Philosophy in the Flesh: the Embodied Mind & its Challenge to Western Thought. Basic Books, 1999.

Levy, Neil. Neuroethics: Challenges for the 21st Century. Cambridge University Press, 2007.

Lindeman, Marjaana. “Does Poor Understanding of Physical World Predict Religious and Paranormal Beliefs?” Applied Cognitive Psychology, 30-5 (2016) 736-742.

Lotem, Arnon, et al. “The Evolution of Cognitive Mechanisms in Response to Cultural Innovations.” PNAS 114-30 (2017) 7915-7922.

Ly, Calvin, et al. “Psychedelics Promote Structural and Functional Neural Plasticity.” Cell Reports 23-11 (2018) 3170-3182.

Marzano, Robert J. Dimensions of Thinking: A Framework for Curriculum and Instruction. PDF.

McCready, Amy. The Me, Me, Me Epidemic: A Step-by-Step Guide to Raising Capable, Grateful Kids in an Over-Entitled World. TarcherPerigee, 2016.

McManus, John. Detecting Bull: How To Identify Biased, Fake and Junk Journalism In the Digital Age. CreateSpace Independent Publishing Platform, 2017.

McManus, John. “Don’t Be Fooled: Use the SMELL Test To Separate Fact from Fiction Online.” Online cribsheet.

Moore, Brooke Noel and Richard Parker. Critical Thinking. McGraw-Hill Humanities, 9th ed., 2008. PDF.

Moseley, David, et al. Frameworks for Thinking: A Handbook for Teaching and Learning. Cambridge University Press, 2006.

Nagel, Thomas. “What is it Like to be a Bat?” The Philosophical Review, 83-4 (1974), 435-450.

Neisser, Ulric. “Five Kinds of Self-knowledge.” Philosophical Psychology (Jan, 1988) pp. 35-59. PDF.

Nguyen, C Thi. “Escape the Echo Chamber.” Aeon, 09 Apr, 2018.

Nishida, Hiroko. “A Cognitive Approach to Intercultural Communication Based on Schema Theory.” International Journal of Intercultural Relations, 23-5 (1999) 753-777.

Novella, Steven. Your Deceptive Mind: A Scientific Guide to Critical Thinking Skills. Course guidebook, Yale School of Medicine. The Great Courses, 2012. 238 pp.

Oakley, David A. and Peter W. Halligan. “Chasing the Rainbow: The Non-conscious Nature of Being.” Frontiers in Psychology, 2017. 27 pp.

Panksepp, Jaak, et al. “The Philosophical Implications of Affective Neuroscience.” Cognitive Science Society (2010) 43 pp.

Petersen, Michael Bang. “Public Opinion and Evolved Heuristics: The Role of Category-Based Inference.” Journal of Cognition and Culture 9 (2009) 367–389.

Pinker, Steven. The Blank Slate: The Modern Denial of Human Nature. Viking, 2002.

Postman, Neil and Charles Weingartner. Teaching As a Subversive Activity. Delta, 1971. PDF.

Pratkanis, Anthony. The Science of Social Influence. Psychology Press, 2011. Summarized by Philip Zimbardo here: PDF

Presseisen, Barbara Z. Thinking Skills: Meanings, Models, and Materials. National Institute of Education, 1984. PDF.

Previc, Fred H. The Dopaminergic Mind in Human Evolution and History. Cambridge University Press, 2009.

Ricciardi, Emiliano, et al. “How the Brain Heals Emotional Wounds: the Functional Neuroanatomy of Forgiveness.” Frontiers in Human Neuroscience (Dec, 2013) 15 pp.

Richards, Blake A. and Paul Frankland. “The Persistence and Transience of Memory.” Neuron 94-6 (2017) 1071-1084. Summary by Eva Voinigescu

Romer, Daniel, et al. “Beyond Stereotypes of Adolescent Risk Taking: Placing the Adolescent Brain in Developmental Context.” Developmental Cognitive Neuroscience 27 (2017) 19-34.

Sagan, Carl. The Demon-Haunted World: Science as a Candle in the Dark. Random House, 2011. Baloney Detection Kit.

Saltz, Gail. “Liberal Brains Are Different from Conservative Brains.” Big Think video, transcript, undated.
 
Sapolsky, Robert M. Behave: The Biology of Humans at Our Best and Worst. Penguin, 2017.

Schoenemann, P. Thomas. “Evolution of Brain and Language.” Language Learning 59-1 (2009) 162-186.

Searle, John R. The Construction of Social Reality. Free Press, 1997.

Shermer, Michael. Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time. Holt Paperbacks, 2002.

Slack, Gordy. “I Feel Your Pain.” Salon (11-5-07). An online essay on mirror neurons.

Stanford Encyclopedia of Philosophy or SEP. Online
Selected Entries: Belief, Cognitive Science, Cultural Evolution, Culture and Cognitive Science, Embodied Cognition, Emergent Properties, Emotion, Evolutionary Psychology, Fallacies, Informal Logic, Innateness and Contemporary Theories of Cognition, Modularity of Mind, Philosophy for Children, Qualia, Self-Deception, Sociobiology.

Tavris, Carol and Elliot Aronson. Mistakes Were Made (But Not by Me). Mariner Books, 2015.

Taylor, Edward W. “Transformative Learning Theory: A Neurobiological Perspective of the Role of Emotions and Unconscious Ways of Knowing.” International Journal of Lifelong Education 20-3 (2001) 218-236.

Tetlock, Philip E., et al. “The Psychology of the Unthinkable: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals.” Journal of Personality and Social Psychology 78-5 (2000) 853-870.

Thagard, Paul. “Critical Thinking and Informal Logic: Neuropsychological Perspectives.” Informal Logic 31 (2011) 152–70. PDF.

Tversky, Amos and Daniel Kahneman. “Judgment Under Uncertainty: Heuristics and Biases.” Science 185-4157 (1974) 1124-1131.

Tyborowska, Anna, et al. “Early-life and Pubertal Stress Differentially Modulate Grey Matter Development in Human Adolescents.” Nature, published online, June, 2018.

Wang, Jing, et al. “Predicting the Brain Activation Pattern Associated with the Propositional Content of a Sentence: Modeling Neural Representations of Events and States.” Human Brain Mapping 38 (2017) 4865-4881. Repr. Wiley Periodicals, 2017.

Weisman, Kara, et al. “Rethinking People’s Conceptions of Mental Life.” PNAS Early Edition (2017) pp. 1-6.

Whiten, Andrew, et al. “The Extension of Biology Through Culture.” PNAS 114-30 (2017).

Whorf, Benjamin. Language, Thought, and Reality. MIT Press, 1956. PDF.

Willingham, Daniel T. “Critical Thinking: Why Is It So Hard to Teach?” American Educator (2007) 8-19. PDF.

Wilson, David Sloan and Edward O. Wilson. “Rethinking the Theoretical Foundation of Sociobiology.” The Quarterly Review of Biology, 82-4 (2007) 327-348. PDF.

Wilson, Robert A. and Frank C. Keil, eds. The MIT Encyclopedia of the Cognitive Sciences. MIT Press, 1999. 1097 pp. PDF. A bit dated now, but coverage through the 20th Century.

Zimbardo, Philip. The Lucifer Effect: Understanding How Good People Turn Evil. Random House, 2007.

Statue on cover: Cain, by Henri Vidal, Tuileries Garden, Paris, 1896.


Links for the Toolkits

1. Media Savvy and the Smell Test
Carl Sagan's Baloney Detection Kit
The CRAAP Test, from CSU's Merriam Library
The SMELL Test, from John McManus

2. Evolved Heuristics
Wikipedia, Heuristic
Wikipedia, Heuristics in Judgment and Decision-Making
Wikipedia Category:Heuristics
Wikipedia, Modularity of Mind

3. Emotions and Affective States
The Positive Lexicography Project
The Dictionary of Obscure Sorrows
Wikipedia, Emotion
Affective Neuroscience

4. Cognitive Biases
Wikipedia List of Cognitive Biases
Rational Wiki, List of Cognitive Biases
Cognitive Bias Codex (big list, but mixes heuristics and biases)
58 Cognitive Biases

5. Coping Strategies
Wikipedia Coping (Psychology)
Changing Minds, Coping Mechanisms
Schema Therapy, Common Maladaptive Coping Responses

6. Defense Mechanisms
Wikipedia Defense Mechanisms
Defenses from DSM IV
Internet of the Mind, List of Defense Mechanisms
Psychologist World, 31 Psychological Defense Mechanisms Explained

7. Logical Fallacies
Wikipedia List of Fallacies
The Fallacy Files: Taxonomy of Logical Fallacies
A Taxonomy of Fallacious Arguments
Hogeye Bill's Dictionary of Logical Fallacies


Experiments referenced
The Milgram Experiment
The Stanford Prison Experiment
The Libet Experiment
The Asch Experiments

A Few Childhood Education Resources
A Mighty Girl
American Institute for Learning and Human Development
Hoagies’ Gifted Education Page
The Malala Fund
See "Children and Education" and "Online & Self-education"