In this part I'd like to propose an abstract model of human cognition. The model consists of four elements:
1. (Rule based) deductive cognition (reasoning), the kind that tells us that ‘1+1=2’ or that ‘objects in the air fall to the ground.’
2. Pattern recognition, which enables us to map sensations onto conceptions (known patterns and categories) and thereby onto known rules (per 1): e.g. recognize an apple and another apple each as a unit, so that a unit and a unit give us two units of apples; recognize each coin in a heap of change as its monetary value, so as to deduce the total sum of money; recognize the apple as the object in the air and the grass as the ground, so as to predict its imminent motion (the fall).
3. Inductive cognition (learning), by which new rules and patterns are learned out of phenomena perceived to be unprecedented.
And 4. a mechanism by which the mind switches spontaneously from deductive cognition to inductive cognition.
The model is abstract in the sense that I make no pretense of having a notion as to how it may be ‘implemented’ biologically in the brain. As far as I can tell, while the rest of the items are nothing new, item #4 and its relation to #3 are, and they are what I hereby wish to promote.
One note of terminology. I use the terms ‘to sense’ and ‘to perceive’ (& ‘sensation’ and ‘perception’) differentially, referring to the direct reception of signals (light, sound, touch) with the former and to their interpretation with the latter. For example, you are familiar with this optical illusion, below. The two squares are the same colour, in the sense of being rendered with the same combination of red, green and blue amplitudes by the pixels on your monitor. Their colour is sensed the same, but they are perceived —due to a conception in your mind of the rules of light and shade dynamics— as different.
Summary
In this second part I start by making a comparison between machine learning and human learning. I discuss the process of marginal learning/conceptual drift, by which established conceptions of the world change over one's lifetime without involving inductive learning. Next I present the challenge of the new: how can anything be recognized as novel when our perception is robust against noise and everything is perceived in terms of the old? After some examples of learning the new I introduce the notion of emergence and attempt to answer the question, discussing both the relearning of a concept that was found to be erroneous and the learning of a completely new concept. Next I discuss the emotions accompanying novelty perception. I end by discussing language from two directions, commenting on the Sapir-Whorf Hypothesis and alluding to common words that refer to the stages of novelty perception.
Machine learning analogy
Principally, the ‘rationality’ of economic theories, discussed in the first part of this essay, is all deductive reasoning. The theories assume that humans have wants, that they are aware of their wants in such a way that they can project the expected utility of commodities and services offered in the market, and then, by applying deduction based on a simple rule, allocate their resources to maximize utility. That humans actually behave otherwise, even in the market, and that rationality is arguably more than an arithmetic of utility, was —rather conveniently— ignored by economists; this is not so much naïveté on their part as, at worst, a pretense, a sacrifice made in order to construct a mathematical model on which computations could be based. It is no coincidence, therefore, that later on computers, machines endowed with this faculty of rule based derivation which for centuries humans held to be uniquely theirs, could lend so much assistance to economics, as in econometrics for example. Human agents were conceived of not unlike physical particles, obeying simple rules, only that the properties of their world were not mass and velocity but wealth and utility. In other words, humans were conceived of as a kind of equation solver, which computers —this being their expertise, such that, to the degree that such a comparison can be made, they are nowadays responsible for the greater part of the computations that occur in the world— could easily simulate.
There has been a long tradition of describing the psyche in terms of the material world. In antiquity, when the universe was made out of four elements, the working of the human mind and body was determined by the four bodily humours, from which we get such words to describe one's mood as sanguine, melancholic, bilious, choleric. With the industrial revolution we got ‘to blow off steam’ and later Freud's engine-like model with mental apparatuses that press on each other in mechanistic dynamics. At the same time, long before it became an engineering miracle that provided its own benefit, the neural network was a model for the functioning of the brain, arising at a time when railway and electrical distribution networks were the wonders of the age. As it happened, these artificial neural networks would succeed where traditional computing architectures had failed. While the latter were capable of solving in split seconds problems that are hard for humans, such as systems of equations, they could not tackle easier cognitive tasks that even invertebrates could handle, namely, pattern recognition. It is such neural networks that I'll use as a model for the sake of illustrating my proposed novelty perception mechanism.
Machine learning methods can produce pattern recognition models through either supervised or unsupervised learning. In the former the human conducting the machine provides it with data —the samples— and with its annotation, denoting what patterns are to be found there —the labels— effectively instructing the model, ‘here are one million images of cats, one million images of dogs, learn to distinguish them.’ In unsupervised learning the model receives two million images and is instructed to find patterns by itself. In either case, it should be said, the technology began with recognition of patterns over samples (which would be equivalent to human ‘over time’) rather than within the space of the images. That is, the model could say that an image is of a cat but not where the cat in the image is.1 Image segmentation, whereby a model would indicate several objects within a single image as well as their pixel location, came later.
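To make the supervised/unsupervised distinction concrete, here is a minimal sketch in Python. The toy two-dimensional points standing in for cat and dog images, the nearest-class-mean classifier and the bare k-means loop are all illustrative assumptions of mine, not a description of any particular system mentioned here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two clusters of 2-D points standing in for 'images of cats' and 'images of dogs'.
    cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
    dogs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
    samples = np.vstack([cats, dogs])

    # Supervised: the samples arrive together with their labels.
    labels = np.array([0] * 100 + [1] * 100)
    class_means = np.array([samples[labels == k].mean(axis=0) for k in (0, 1)])

    def classify(x):
        """Prediction: recognize a new sample by its nearest class mean."""
        return int(np.argmin(np.linalg.norm(class_means - x, axis=1)))

    # Unsupervised: the same samples, no labels; the two groups must be found
    # by the model itself (a bare-bones k-means loop).
    centroids = samples[rng.choice(len(samples), size=2, replace=False)]
    for _ in range(10):
        nearest = np.argmin(
            np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2), axis=1)
        centroids = np.array([samples[nearest == k].mean(axis=0) for k in (0, 1)])

    print(classify(np.array([2.8, 3.1])))  # most likely recognized as group 1, the 'dogs'
    print(centroids)                       # two groups discovered without any labels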
Human learning and machine learning are so different that it is hard to draw analogies and stay accurate without getting too verbose, so I'll get a little sloppy. With regard to human learning, we can make the distinction between formal learning —as happens when we are informed by teachers, parents, peers— and informal learning — as happens when we experience the world. If analogically we take a human to be a unit like the machine learning model, both the formal and the informal kinds of human learning are ‘unsupervised’ in the sense that they derive all of their information from perception, i.e. the samples, as there are no external inputs analogous to ‘labels’; our teachers are part of our perception. Nonetheless, human knowledge is hierarchical: we know that Berlin is part of Germany is part of Europe; that a potato is a root vegetable is perishable food is food, as well as a yellow, and therefore visible, object, as well as a plant and so a living being. Therefore when we learn something about living beings as a class, or even about one living being, we learn at the same time, with some uncertainty, about potatoes. Same thing with perishable food and yellow objects. And so, once we have learned language (through unsupervised2 learning proper) it can be used to instruct us about patterns —rules and objects— outside our immediate perception or any experience we have had, which in a way is analogous to supervised machine learning. For example, without ever having been there ourselves, we might learn in the classroom about Antarctica, such that if we were to go there we would make correct predictions: the travelling would be long; once we arrived it would be very cold; the air would be clear; the sun would not rise and set but circle above, at or below the horizon, &c. Or, indeed, learn about phenomena that will never be part of our experience, such as details of fictional works. The teaching as an activity is an experience, but the teaching as imparted information is transcendent to, resides beyond, the phenomenon taught; the information ‘Berlin is the capital of Germany’ relates Berlin to other known phenomena —capital cities and Germany— but the uttered words themselves are not part of the phenomenon that Berlin is.3
Human perception is robust against noise and transformations. By both I refer to a sample's4 deviation from former samples of the pattern. By ‘noise’ I mean roughly information-theory noise, random fluctuations of the signal. Had we been sensitive to it, we could scarcely perceive a steady world, composed of independent but related parts, around us: first because our senses themselves are noisy,5 but also because reality's phenomena constantly vary and mutate. If we failed to recognize an object because it was slightly too big or some detail of it had changed, we would scarcely recognize anything at all.
What I mean by ‘transformation’ is not quite a deviation, but the novel recombination of patterns. Since our knowledge is hierarchical, the patterns we perceive are recognized as patterns of patterns. To illustrate what I mean by robustness to transformation, I bring the example of Sad John. After a decade-long acquaintance with Sad John we meet him and, for the first time, we perceive him smile. We have never seen that exact face before, but we fail neither to recognize our friend John nor to recognize that he is smiling, thanks to our knowledge of faces, namely, their ability to put on a smile. That is the first time we see these two patterns —Sad John's face and a smile— combined in a single visual pattern, but it does not baffle us.
To put it shortly, our perception, i.e. our capacity to recognize, is robust against minute sensory deviations and against substantial deviations that correspond to our knowledge of how objects (and patterns in general) of the recognized thing's class may transform.
Marginal learning/ conceptual drift
Machine learning models go through two phases: training and prediction. During the training phase the model is modified by an algorithm fed with training data; in the prediction phase it classifies new samples while itself remaining static.6
Culture has created a similar social organization around human development, whereby children are educated and adults work, but neither do adults stop learning nor do children only begin forming expectations of the world once they finish schooling. In humans, prediction and learning, perception and conception, are concurrent. One process of learning does not involve the novelty perception described below, does not engender new categories within the mind, and relies only on deductive reasoning: ‘conceptual drift.’7 Phenomena might gradually change. Since our perception is robust against noise, the altered form is still recognized, is not perceived as ‘something else.’ However, while ‘real noise’ is a random deviation of a signal, a fluctuation around a mean, here the marginal difference is not a fluctuation but part of a trend, such that over time the mean itself shifts to a new value. Since learning is concurrent with perception, the conceptual mean (what we recognize as typical) trails the drifting real mean (what really is typical). For example, people close to a child will neither stop recognizing him nor perceive, from one day to the next over twenty years, that he has grown and developed. When he's 25, they would recognize photos of him as a five-year-old as ‘his younger self,’ while those who had not known him then might recognize him at all only with difficulty — and in either case it would be recognized that he is, if only in a temporal sense, a different person (unlike a mountain, which looks and remains the same over such a time span). Or, a friend who used to be shy might imperceptibly become more outgoing, a change that is recognized retrospectively rather than in real time. And so, too, fashion changes,8 bikes wear down, gardens grow. These I contrast with ‘real,’ i.e. trendless, noise; we encounter variously sized bananas without our idea of bananas' size changing, or a person in her various moods without our idea of her personality changing.
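The contrast between trendless noise and conceptual drift can be caricatured numerically. In the sketch below the ‘conceptual mean’ is modelled, by assumption, as a simple exponential moving average of perceived samples; the learning rate, the banana sizes and the child's heights are made-up numbers, for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    steps = 1000
    learning_rate = 0.02  # how quickly the conception follows new samples (assumed)

    def track(samples):
        """Concurrent perception and learning: nudge the 'conceptual mean' after each sample."""
        concept = samples[0]
        for x in samples[1:]:
            concept += learning_rate * (x - concept)
        return concept

    # Trendless noise: bananas of varying size around a fixed typical size (cm).
    bananas = 18.0 + rng.normal(0.0, 1.0, steps)
    # Drift: a child's height (cm), whose real mean itself shifts over the years.
    child = np.linspace(110.0, 180.0, steps) + rng.normal(0.0, 1.0, steps)

    print(round(track(bananas), 1))  # stays near 18: the idea of 'banana size' is unchanged
    print(round(track(child), 1))    # ends near the new value, trailing the drift slightly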
‘Transformations’ can lead to a similar shift of conceptions, one we might call ‘conceptual morphing’ if we have to name it. Here the changes are not imperceptible the way the marginal, ‘noisy,’ deviations are, but they are such —adhering to conceptions of a greater order— as not to prevent recognition. For example, having only known yellow and red bellpeppers, when we arrive at the supermarket and see at the stand elongated peppers or orange bellpeppers, we would recognize that they are ‘different kinds of peppers’ but still ‘kinds of peppers’ —and with time, if we continue to encounter them, they would shift our idea of ‘the typical pepper’— as opposed to ‘not peppers.’ This is because this change of shape or colour corresponds to a conception of a greater order, namely, of vegetables and their variety of shade and shape. A non-vegetal colour or shape, such as cyan or a perfectly straight-edged cube of a pepper, would at least give us pause.
The challenge of the new
This kind of ‘marginal’ learning cannot be the only kind that humans are capable of. For one thing, we come into the world knowing nothing —not just about the world, but about our body and how to control it— so there must be a way in which new categories are conceived for us to know anything at all. For another, the phenomena of the world do not merely change, but new ones come into existence, as our generation knows too well. How do we recognize phenomena as ‘new,’ i.e. unprecedented? Our cognition is so intuitive to us that it might not seem a challenge, but I claim that it is. New phenomena do not emerge in our lives apart from the world, labeled ‘this is new’; indeed, that they are new at all is not a matter of their inherent properties, physical or otherwise, but of human understanding. And since our perception is based on old conceptions of the world, on the familiar, so is the new perceived in terms of it, its deviations explained away as anything from misperception and confusion to hoax, sleight of hand, trickery. I recall a family dinner where one member showed us on his phone a video of a robot dancing, and another member wondered if it was not merely an animation.9 Natural discoveries are as likely as artificial inventions to be thus subsumed, as when Western European expeditions to America thought they had arrived in India, or at least Asia.
Much of the novel along our lives and along history was similar to preceding known phenomena. How alien would a television be to a medieval person? Is it very different from a window, a theater stage or, say, a ‘magic mirror’? To us, being familiar with windows as well as with electric appliances, digital media &c and, in particular, televisions, the distinction is not at all a challenge. If we found ourselves in the unlikely scenario of being unsure whether something was a screen or a window, we'd take informed steps to reach our conclusion: does the sound seem to come from the outside or from speakers? Have we seen that video or ones like it before? Are we in a room that interfaces with the outside? Does the imagery have cuts/jumps? Are there visible pixels? And so on. But the investigation conducted by the medieval man would depend on his own perception and conception of the world and would take very different steps.
And this conception-based investigation and conclusion applies not only when interpreting the world, but our own selves as well. The 1981 Nobel Prize in Medicine winner Roger Sperry pioneered research on split-brain patients, i.e. patients who, usually due to epilepsy, had their corpus callosum, the main connection between the two brain hemispheres, severed. In a video from the cognitive neuroscientist Michael Gazzaniga's lab we can observe such an experiment in motion. The split-brain subject is shown two words —‘music’ and ‘bell’— one to each hemisphere.10 Subsequently he is shown four pictures —a trumpeter, a belfry, an organ, a couple of marching band drummers— and asked to point at what he had seen, i.e. at the appropriate pictorial representation of the word. He points with his right hand at the belfry. When asked why he picked that one, he answers ‘music’, the word that was flashed to his ‘responsible for speech perception and production’ left hemisphere. When asked to elaborate he says ‘it was “music,”’ and ‘“bell”’ and proceeds to explain that the last time he heard music, a few minutes earlier, it was ‘coming from the bells out here. Banging away.’ That is, the hemisphere that got ‘bell’ chose the appropriate picture, and the hemisphere that got ‘music’ explains his immediate past behaviour —the choice made by the other hemisphere— as if it were its own choice, ignoring the arguably more musical other choices.11 He gives, misguidedly, the most plausible explanation —as far as he understands the world and himself— that he can conjure up. Needless to say, intact-brain persons do the same. What a mystery we would have been to ourselves had we known how little we knew.
Learning the new through experience
The variety of the perceptible, that which could be perceived, is limited. First, because the universe is finite, limiting the scope of realizable phenomena. Second, because, on the one hand, our senses are limited —we cannot see infrared or ultraviolet, or hear sounds outside a certain range— and, on the other hand, our brains are limited, so that only a portion of real phenomena could be perceived by us.
Of the perceivable, I claim, everything is recognized in terms of the already known. We cannot see a ‘color we had never seen before.’ In experiments conducted on the rare few people who had electrodes inserted into their brains, they reported, after their brain was stimulated, feeling something, seeing colors or even an object,12 recalling something — always the familiar. Oliver Sacks wrote about the case of an old lady who had complained of hallucinations, whom he found to suffer from poor visual acuity — like someone seeing monsters in the dark, her perception morphed her sensory input into recognizable forms, affected by her expectations.
And yet, we do form new conceptions over our lifetime. For the sake of discussion, I bring a couple of examples. Identical twins would often be indistinguishable to those who first meet them, yet not to their acquaintances; the latter have learned to recognize their telling features and to attend to them. By the time I graduated high school the twins in my class had become as distinct in my mind as any two persons, yet when I first joined I couldn't tell them apart. On the morning of a good friend's wedding I spoke at length with his twin brother, whom I had met only a few times, innocent until I was explicitly informed of the mistaken identity; for the remainder of the day I told them apart by the color of their ties and their chumminess towards the bride. A kiosk I have frequented for a couple of years is owned by twins; as only one of them mans the shop at a time, I wouldn't even have known that this person was really two had they not been friendly and chatty, especially by German standards of shopkeepers, allowing the fact to come out.
I want to differentiate this example of twins from acquaintance-making with any person in general. The latter, as far as this discussion is concerned, is not ‘novelty’ in the same sense. A new acquaintance, when you are already acquainted with people, is merely the addition of a member to a class, not unlike, in a strict sense, stumbling upon an individual stone or tree you've never encountered before; it has its own particular properties, yes, but it is one of a known kind. It's possible that the relationship that would emerge from this acquaintance, and with it the perception of the person, would become a novelty, but that's another matter. In the case of the twins, however, it is not merely the addition of a member. Through the recognition of the twinness, perception has been altered, even if only as far as it is concerned with each of the two siblings. Such a development of perception can also happen to a class, as in the recognition of the distinct existence of honey-bees and bumble-bees (these separated in my mind only at quite a late age) or dolphins and sharks — on the face of it similar to each other but distinct and distinguishable once your attention knows where to look.13
This is increasingly not the case any more, but until about a year ago, say, even the best realistic machine-generated ‘photographs’ could be distinguished by slight giveaways: the ears, the hairline, the background, a collar colour that differed from one side to the other and other sartorial oddities. While in some such images the aberrations from ‘real photographs’ were blatant, in others they were minute and it was the trained eye that had learned to spot them. Were they shown to people half a decade earlier, I doubt any would have thought twice before concluding these were photographs, while nowadays our knowledge of the phenomenon gives us second thoughts. The same goes for non-photographic images and whether they were machine generated or made by human artists.14
Formal learning
One way to learn of the novel is through formal teaching, i.e. by being told about it. Much of the news about the world nowadays indeed comes by way of the words of others. This would be akin, in our machine learning analogy, to the introduction of samples with new labels. But this cannot be the only source of novelty, since such words of news could always be traced back to the first person expressing them, who was not informed about them by others.15 Regardless, I present this route tentatively; language as a communication tool relies on the understanding of the recipient, and therefore can only construct new combinations of old ideas, not unlike those that ‘transformations’ engender, but with greater freedom. We can imagine a cubic cyan pepper and would recognize it in the market had we been told about it, though we might have hardly recognized it as ‘pepper’ had we not been informed, and might still feel skeptical until we experienced it as pepper and not merely as a visible and tangible object — just as one might have a notion of agile androids from science fiction but doubt the appearance of one in reality. Had we sent a philologist back in time to explain to a Bronze Age man, after she had come around to a firm grasp of spoken Hittite, about life in the modern era —with computers, internet, smart-phones, supermarkets, fiat money, offices, tickets— he could scarcely, it seems to me, get any good sense of the basic conditions that we experience by simply being alive at this present time. Nonetheless, I do believe that it is within the power of imagination to let mere words elicit something akin to an experience.
The distinguishing of twins is something that could have been taught formally only with difficulty. This has to do with the relative rarity of identical twins, and therefore with our language (and, as already said, our perception) not being equipped to point to these distinctions. Languages suffice for distinguishing a person in a crowd of people —we can refer to the person's stature, age, sex— but not for distinguishing a person from his or her near copy. Often we would refer to circumstantial signs, such as ‘the person in the red tie,’ or ‘near the door,’ but not for a moment do we mistakenly think that it is more than that, circumstance; as if, had the twins switched ties, they would thereby also have switched identities. Sometimes a twin would differ from the other less obviously than ‘the one with the beard’ but more obviously than ‘the one with the more aquiline nose,’ such as by a prominent mole, but the potential teacher, who can distinguish them, might only have an intuition —the ability to recognize something without being aware of the source of the distinction— and not be able to give that proper hint.
Indeed, simply being pointed to the fact that this is one thing and that is another does not guarantee that you would be able to distinguish the two. Until a new conception has developed, the two indistinguishables are two instances of a single same old. I can say that the effort of a flatmate, albeit a short one, to make me aware of the three different kus —plain, tense, aspirated— in Korean did not work.
Of course, it is partially through language that the distinction is eventually learned. It is by being told about it that I came to know that there were two kiosk owners and not one, and began to pay special attention to their facial features; and if henceforth on every visit I inquire or make a guess as to which one is before me, I might learn over time to tell them apart. But this ‘over time’ describes how long it takes, not how it happens.
Emergence
Before I turn to how novelty is perceived and the novel learned, I'd like to discuss emergence, which plays a central part in the dynamics of perception.
Emergence is an affair more conceptual than real, whereby a coincidence of phenomena gives rise to a phenomenon on a higher abstraction level: a pattern of patterns. Reality is molecules bouncing around, but our world —and the objects our minds are occupied with— consists of people, nations, -isms and -ations, music, happiness, doom, trees.
A face is a combination of facial features, a forest is a combination of trees. But the sum, the emergent phenomenon, is greater than its parts. When we recognize a forest by the concentration of trees, we perceive more than the trees: a location that might harbour wild animals; on a hot day, an area that is cooler than the plain; quieter than the city; a place of mushrooms after the rain.
The emergent phenomenon, as a perceived phenomenon, is resistant to noise. If you cut down one tree, the forest is still there. If you cut all of the trees the forest would disappear, but if you cut down one half of the trees, or the other half, the forest would remain — perhaps less wide or less dense, but still it is there. If you take any ancient forest, it is likely that, like Theseus' Ship or Heraclitus' River, all of its trees, flora and fauna have been replaced since its first emergence, yet the forest is the self-same forest. The same goes for individual living beings, humans included, most of whose cells, if not all, get replaced several times over their lifetime.
The point is not to bring up a philosophical quandary —there's nothing to it; the emergent phenomenon is greater than its parts and therefore to a degree invariant to their change— but to demonstrate a property of our perception, namely, that when our attention is on the emergent phenomenon, we are less aware, if at all, of the properties of the constituent phenomena. Some examples:
Some years ago I had two friends visiting; one was, for that matter, neurotypical, the other had prosopagnosia or ‘face blindness.’ Upon my return from the bathroom I decided to run a little experiment. I came to the former and asked her to tell me what had changed in my face. She gazed at me and made a few false guesses. While we were at it, our prosopagnosiac appeared over her shoulder, looked at me for a short moment and asked if I had not shaved. Indeed. While the first always saw but the forest, the other saw the trees.
Typos are easily missed. A beginner, unfamiliar with the language, might get stuck upon what seems like a novel unfamiliar word, while you are likely to not even notice the misspelling. On the other hand, csidneor taht wlhie a non-Eglinsh sapeekr wloud ntioce how ecxtaly a wrod was (mis)spleled werhe you wlnodu't, a binenegr wulod not be albe to raed a sncetnee scuh as tihs one wilhe you eilasy cluod.
Inductive reasoning
It's high time to say something about inductive reasoning. Though it is applied concurrently with deductive reasoning, especially when the two operate on separate levels of abstraction, their functions are somewhat opposite.
Deductive reasoning is used to derive predictions and interpretations. It takes a conceptualization of the world —objects and rules— and outputs projections.16 It is applied when we operate within the familiar; it is applied when scientific experiments —in behavioural economics or otherwise— are designed and, especially when they have gone as expected, when they are interpreted.
Inductive reasoning, on the other hand, is used to learn, from current and remembered experience, a conceptualization of the world — again, its objects and rules — through regularities of phenomena, repetitions of various kinds.
To take the analogy of games, deductive reasoning is applied when one considers what move to take next, or interprets the intentions of other players from their moves. Inductive reasoning is applied when one watches an unknown game unfolding and tries to comprehend it. Deductive reasoning takes in the state of the game and the rule book and outputs an action. Inductive reasoning takes the consecutive states of the game, one's and others' actions and reactions, and outputs the rule book. In these cases too, neither of them is the sole reasoning in operation. When observing a game, trying to comprehend it, deductive reasoning is applied. For one thing, on the most basic level, the colors and noises received by the senses are interpreted deductively to produce vision and audition. Higher up, deductive reasoning produces various interpretations, so long as they are not refuted, based on the assumption that what lies before us is indeed a ‘game’ and not a ‘dance’, ‘battle’, ‘theater piece’ or otherwise, that each player has a goal they are trying to achieve with their moves, that if it's a boardgame the players' bodily movements are immaterial so long as they do not manipulate the pieces of the game &c. On the other hand, when playing a game, another player's movement might be so unlikely that inductive reasoning would be prompted in order to divine what sort of strategy might be behind it.
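If it helps to see the analogy spelled out, here are only the shapes of the two operations as Python signatures; the type names are placeholders of my own, not anything defined in the essay, and the bodies are deliberately left empty.

    from typing import Callable, Sequence, Tuple

    GameState = dict   # positions of the pieces, the score, whose turn it is, ...
    Action = str       # e.g. 'advance the pawn', 'pass the ball'
    RuleBook = Callable[[GameState, Action], bool]  # is this action legal in this state?

    def deduce(state: GameState, rules: RuleBook) -> Action:
        """Deductive reasoning: from the current state and the known rules, an action."""
        ...

    def induce(history: Sequence[Tuple[GameState, Action]]) -> RuleBook:
        """Inductive reasoning: from observed states and actions, the rule book itself."""
        ...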
Novelty perception
If everything is perceived in terms of the already known, how can anything new be perceived? The crux of the matter is that perception is not the same as comprehension. By the latter I don't mean ‘correctly comprehending’ but merely the sense of having comprehended, of having made sense of the perceived phenomena.
We constantly make prediction errors — the real unfolding phenomena deviate from our expectations. Most of these deviations are either not perceived at all —in the sense of not coming to our conscious attention— or merely ignored, and do not shake our sense of having correctly comprehended our environment (or our self). Nonetheless, their accumulation around any single phenomenon, even if individually they went unremarked, I claim, comes to the fore of our mind and unsettles it once it surpasses a threshold. Under the threshold the error is perceived as noise, above it — as misrecognition.
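As a caricature of this claim in code: below, errors around a single phenomenon accumulate (and fade), and only past a threshold does the mode flip from predicting to learning. The decay factor, the threshold and the two error streams are assumptions made up for illustration, not measurements.

    import random

    random.seed(3)
    THRESHOLD = 5.0  # the hyperparameter: how much accumulated surprise we tolerate
    DECAY = 0.9      # unremarked errors fade from attention

    def run(errors):
        accumulated, mode = 0.0, 'deductive'
        for step, error in enumerate(errors):
            accumulated = DECAY * accumulated + error
            if mode == 'deductive' and accumulated > THRESHOLD:
                mode = 'inductive'  # the switch: stop predicting, start learning
                print(f'step {step}: accumulated error {accumulated:.1f} -> {mode}')
        return mode

    # Mere noise: small, uncorrelated deviations never add up to a misrecognition.
    print(run([random.uniform(0.0, 0.4) for _ in range(200)]))
    # A subtly different phenomenon: every observation deviates a little, and it piles up.
    print(run([random.uniform(0.4, 1.0) for _ in range(200)]))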
I see two, possibly three, cases of novelty perception. One, when order is first perceived in disorder; two, when an order is perceived to be mistaken and is replaced by a new one; and, possibly three, when order is perceived to be mistaken and is removed without being replaced.
By ‘order’ I mean ‘emergent phenomenon’; ‘disorder’ is the lack thereof. If this term needs motivation, I'd say that emergent phenomena are often a particular combination of lower level phenomena, not their mere aggregation. This :) as well as (:-)) are faces, :: or ()) are not.
Reconceptualization
Let us start with the second case, that of the sense of mistaken recognition. Imagine you come across an odd-looking animal. Had it been in a picture book, you would classify it as an imaginary ‘fantastical beast.’ But as it happens, you come across it, moving and breathing, in the forest. As you regard it, your attention and behaviour are guided by your perception, by your understanding of the world. You wonder: Is it a mutant? Is it a runaway movie-studio robot? Is it a marsupial? Is it a congenitally deformed animal? Is it a sick animal? Is it an alien? The answer to any of these might either settle the matter or merely raise more questions. If the questions were settled, then you have perceived the familiar — an instance of the known morphed by noise or a transformation; otherwise, you have perceived the novel.
Depending on how important the question feels —and therefore how unsettling its non-comprehension— your perception would or would not be greatly affected. If it's important enough, and this is my claim here, you would enter an altered cognitive state —perhaps not as binary or drastic as sleep/wakefulness but graded, like the degrees of excitation and valence that differentiate states of emotion: being sad, happy, overjoyed, content, angry &c— a state in which your mind switches from deductive reasoning to inductive reasoning, in which you stop making predictions and begin to learn.
Imagine that you go to visit a friend abroad. Perhaps not a big sports fan yourself, still you go with your host and his friends to watch the match between St. Peter's Sharks and Union's Burning Angels — say football or basketball. Or so you think upon sitting down. Actually it is, unbeknownst to you, a similar but different game. Upon the first inexplicable event you dismiss it, thinking you have seen wrong. The second time you think the player committed a foul, or that you've missed something, or that perhaps you don't know the rules of football or basketball as well as you had thought. Your friend is at the other end of your group and your neighboring new acquaintances don't speak your language. If you are engaged with the game enough —if you do not simply enjoy the company or the dancing mascots and cheerleaders— and its proceeding is dissimilar enough to your expectations, at some point —the deviation that breaks the camel's back— it clicks in your mind that ‘this is a different game.’ At that point the game before you transforms. From being ‘a match’ it turns —like a forest into trees— into people in colorful uniforms playfully running around, strategically situating themselves, jumping, panting and kicking, a gorilla dancing through, over a field whose boundaries expand, passing a ball, exclaiming in joy and seizing their hair in frustration, as you try to distinguish a pattern, to comprehend the rules governing their behaviour. You now know that you don't know, don't understand, and so begin to pay attention to what had been perceived as irrelevant, to what you had effectively been blind to, until a moment ago.
De novo conceptualization
In this last example novelty perception was triggered by a known order being disordered, but it would also be triggered when, as it were, expected disorder is ordered. Both are instances of breached expectations, but given the apparently opposite relationship between the expected and the real I thought it merited a special mention. The one case is that of ‘correcting a mistaken identity’ (you thought it was one thing but really it's something else); the other is that of ‘forming a new identity’, that is, recognizing an emergence.
In a manner, randomness is the null hypothesis. A little teleological, but: if we don't expect anything, we expect randomness. For formality's sake I'd say that just as zero is a number, so is lack of order a kind of order. If we walk in the forest, we might encounter stones lying around. We don't know how they got there any more than we know whence the trees and bushes did. However, if suddenly we encounter a stone on a stone on a stone, that's an order, that is, a breach of the expected disorder. It grabs our attention and poses us a question, which we are likely to solve by surmising human agency. Or aliens, depending on our world view. If it's a cairn, or large and geometrically arranged like Stonehenge, the sense of wonder is yet greater. It's not just more of the same old, but something new.
How much order qualifies as a breach of disorder is a matter of circumstance, of attention and of the threshold hyperparameter discussed below, but the apparently magical quantity of fairy tales, three, seems to be one candidate. When people knock on the door, they usually do it thrice. This is not an arbitrary signal the way non-onomatopoeic words are. A single knock is noise; it could have been somebody moving furniture in the stairwell who knocked accidentally against the door. Two knocks are far less probable as a random event, but could still be a fluke. Three evenly spaced knocks are a clear sign of agency and intention. Anything beyond that is redundant, and from a certain point on, when it transforms away from a series of knocks into a constancy, the phenomenon transforms again, no longer a ‘person knocking at the door’ but perhaps ‘somebody hammering nearby’ or ‘a fleeing person urgently seeking shelter.’
Allow me to add a personal anecdote. Recently I read Blood Meridian. Somewhere in the latter half of the book I encountered a missing period/full-stop mid-paragraph. As far as typos go, it was an unusual one, but I took it to be that — a typo. However, shortly thereafter, perhaps in the same chapter, I encountered a second instance of the same typo. That gave me a stronger pause, since, indeed, it seemed like a pattern, plausible in particular given the text's sparse usage of punctuation. I thought that perhaps it was not an error but a meaningful — I lack a noun for it. Having finished the book I suppose that these were indeed typos, but perhaps not completely coincidental; perhaps they were related to each other, arising somehow in the process of editing.
Disenchantment
In the one case a new emergent phenomenon is conceived, in the other an emergent phenomenon is discarded and replaced with a new conception. I formulated the third case —whereby an emergent phenomenon is discarded without a new conception to replace it— out of a completionist generalization.
I could only think of one example. Imagine a person who had been madly in love and fell out of love, or merely left the honeymoon phase. Thitherto the other person was a shining figure; thenceforth all of that person's ‘merely human’ features suddenly become perceived (love is blind, yes?).
If this is indeed ‘disenchantment,’ the mere removal of the perceived higher emergent phenomenon (not a forest but merely trees), what distinguishes this case from that of ‘reconceptualization’ is the lack of motivation to form a substitute emergent phenomenon. Nonetheless, I'm not sure that this is the case — or that this case is real at all. I think, rather, that the venerated person is perceived not as ‘more than a human’ but ‘other than a human,’ that is, ‘not human’, so that this disenchantment is merely a reconceptualization whereby the former ‘idol’ is conceived as ‘human.’ This is why the image of the queen or the Pope sitting on the toilet bowl is comic, why their failures are overlooked, disbelieved or deemed irrelevant, or why, when these are taken to be true, they are so scandalous. A ‘the king is naked’ kind of thing.
The mechanism of novelty perception
Disregarding the third case, the other two, though they might differ in their details, are principally the same. The course of novelty perception looks thus:
1. Signal to switch attention from higher to lower abstraction phenomena
2. A switch of attention
3. The (re)conceptualization of a higher-abstraction phenomenon through inductive reasoning
In both cases the signal to switch attention was accumulated error. In the one the expectation was for a lack of order, refuted with the perception of order; in the other the expectation was for one kind of order, refuted by the incongruities of reality with it. However, while these two cases constitute what I regard as ‘novelty perception,’ spontaneous (re)conceptualization, these are not the only cases where the sequence above might arise.
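Continuing the earlier sketch under the same made-up numbers, the three-step course above can be written as one loop: the accumulated error is the signal, attention then drops to a buffer of recent low-level observations, and a replacement concept is induced from them. The ‘concept’ here is nothing more than a running mean, an assumption for illustration only.

    import statistics
    from collections import deque

    THRESHOLD = 5.0
    DECAY = 0.9

    def perceive(observations, concept_mean):
        accumulated = 0.0
        recent = deque(maxlen=10)  # low-level observations, normally unattended
        for x in observations:
            recent.append(x)
            accumulated = DECAY * accumulated + abs(x - concept_mean)  # 1. the signal builds up
            if accumulated > THRESHOLD:
                # 2. attention drops from the concept to its constituents;
                # 3. a replacement concept is induced from them.
                concept_mean = statistics.mean(recent)
                accumulated = 0.0
        return concept_mean

    # Twenty observations matching the old concept, then a phenomenon centred elsewhere.
    familiar = [0.1, -0.2, 0.0, 0.1] * 5
    different = [4.0, 4.3, 3.8, 4.1, 4.2, 3.9] * 10
    print(round(perceive(familiar + different, concept_mean=0.0), 2))  # ends near 4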
As already hinted, the signal for the switch might be an instruction, an explicit communication given to us by another. For example, had we been told in advance that the game we are going to watch was not football but something else, we would have started at phase 2 or 3 from the beginning of the game, looking for patterns and regularities in the movement of the players, never having experienced error.
Or take Magritte's La Trahison des Images, the ‘this is not a pipe’ painting. Its caption directs our attention (as many of his paintings, more implicitly, do) across the levels of abstraction. Of course, visiting the museum, we do not take the portraits for real people, but they all transcend their medium. Transcendence —the reciprocal of emergence, to the latter what ‘giving’ is to ‘receiving’— is at the heart of art. We don't mistake the portraits for real people, but neither do we walk around looking at canvases smeared with paint. The paint, its arrangement, transcends itself and landscapes emerge; the noise of a hollowed piece of wood transcends itself and becomes music.17 Magritte's painting shakes our perception of the emergent and redirects our attention to the mere oil-paint-on-canvas-ness of the image before our eyes and thereby to the magic —or treachery, as he titled it— of art.
A decade ago I came across a series of academic publications which, unfortunately, I cannot fish out again. What they found was that when people, through conversation, were led to shift their attention across abstraction levels, otherwise resistant ideas became malleable. This worked with a shift in either direction, such that, for example, a person's idea of the forest could be changed if her attention was drawn down to the idea of trees, or up to the idea of nature. This suggests that it is the shift of attention itself which prompts reconceptualization, through inductive reasoning. When it occurs spontaneously, the threshold-surpassing accumulated error merely triggers the shift in attention but is otherwise not a condition of relearning.
Novelty perception & emotion
The function of the brain is to comprehend the world, or, more exactly, the impressions of the senses. This is something that we, or our brains, do not choose to do; it's too automatic to even be called instinctual, yet there are emotions that guide our behaviour to facilitate the accomplishment of this task.
The ‘error accumulating past the threshold’ is an unpleasant experience to which an imperative is inherent. Like the itch that calls for the scratch, the unsettling feeling of incomprehension calls for investigation. We might actively, consciously resist the itch, but the motivation to address it is usually too automatic for that. And the satisfaction of this itch elicits joy. The itch is a fitting metaphor in its immediacy and automatic reaction, but in other ways hunger fits better. Hunger —felt incomprehension— feels unpleasant, eating —the act of conceptualization— is gratifying, but even when one is not hungry, a delicacy —spontaneous insight— brings its own joy. Next are some examples, followed by a discussion of the vocabulary attached to the emotions involved.
Clickbaits
Descriptions of clickbaits as typically being deceptive or misleading are somewhat off; if clickbaits can at all be characterized as ‘deceptive’,18 that is not what characterizes them. Indeed, whether a title is a clickbait or not has little to do with the titled content. Nonetheless, it is for a reason that people feel deceived by clickbaits, and feel a general dislike towards them. First, they elicit that uncomfortable itch. Second, by being seduced to click, one finds oneself in a video or blog post about a topic one is not the least interested in. Third, there might be disappointment involved —there's a discrepancy between the bafflement one felt before clicking and the lack of a sense of discovery after one has read/watched the content— but the clickers are foremost fooled, one could say manipulated, rather than properly deceived.
Clickbaits achieve this itch of incomprehensibility (with the promising lure of a solution a click away) by a combination of two components.
1. The titles lack closure. Good old titles are complete, whether it is the short ones of movies and books that could be used as a proper noun within a sentence, or the longer one-sentence summaries of newspaper articles. You may have heard much, little or nothing about Hamlet or Romeo and Juliet, but you understand that the plays ‘Hamlet’ and ‘Romeo and Juliet’ are about them, whoever they are. ‘Blood Meridian’ or ‘The Human Stain’ are more elusive, but each expresses a concrete even if vague idea, evokes an imagery. On the other hand, clickbaits are in some way incomplete. Often they use exophoras, thereby implicitly making the content part of the title. It's akin to a book whose cover is ripped off, so that you have to peer inside to know what it is about. You open or go watch Hamlet in order to find out what his story is; you open the clickbait to find out what the title means at all. ‘British diver and Thai youth athlete he rescued from cave in 2018 reunite’ or ‘Extreme heat in Italy takes workers' lives as temperatures pass 40C’ are informative; ‘They reunite after his predicament so long ago’ or ‘Workers are dying in this southern country’ are not. Not uncommonly clickbaits are additionally incomplete in a more direct way, by being too long to be displayed fully on the page, thus providing but half of the title. When there's a thumbnail, the same technique is employed visually, for example by having a circled element in a picture or an arrow pointing at it; these share the properties of the exophora by seeming meaningful and, at the same time, being meaningless (or at least incomprehensible).
2. Clickbaits suggest perplexing news that defies our expectations, our understanding of the world. Often this is achieved by the incongruent combination of title and thumbnail. Thereby, they induce a sense of accumulated error.
The two are similar, yet distinct. The latter often relies on the former, and each on the other, to achieve the itch-of-discovery effect. An incomplete/incomprehensible phrase is irritating, but not as irritating as one that suggests we might be missing a surprising truth. On the other hand, many surprising things that do not merit particular attention happen in the world each day, but the clickbait's role is to turn the quotidian into the amazing, a merely surprising event into an important or profound one, which is more easily achieved with linguistic incompleteness.
The sense of an imminent wonder a click away can also be achieved with paradoxical linguistic incompleteness alone, though its variants demonstrate a close relationship between the linguistic and the epistemological, between language and our understanding of the world. Allegedly only schizophrenics think that the television speaks directly to them, but I think many feel addressed by the second person pronoun used in clickbaits, as in ‘You wouldn't believe [...]’. The second person, in this context, is somewhat unique among pronouns by being still an exophora —it refers to something outside the text of the title— and yet well defined; there's no question as to whom it refers. However, a stranger's video containing truth about yourself is an unexpected, therefore curious, phenomenon, and it thus solicits attention. I call this usage paradoxical because it's an element that discredits itself. Nobody wants to read or watch something that is literally unbelievable,19 i.e. bullshit, and would avoid accessing titles that promise exactly that. However, the defiant ‘you wouldn't believe it’ shifts the attention from the story to you, from the matter of the veracity of the story to the veracity of your disbelief and/or the author's claim of knowledge about you.
This induced reflection does not rely on the second person. With a title such as ‘She couldn't believe it when [...]’ the subject, being unspecified, is substituted in our mind with our own self. It's not a matter of sympathy, but of mere comprehension. When we are given a general statement that includes us, e.g. ‘everyone is X,’ our first move in assessing the truthfulness of the statement is to assess whether we ourselves are ‘X’. ‘Obama couldn't believe it when [...]’ doesn't elicit the same effect, as it's concerned with a particular other. Another variant has the form ‘Only one in 542 people could guess [...]’20 which elicits curiosity about whether one is indeed provably so very special (which contradicts our expectation of a world where we are so very normal, or of a world which doesn't recognize how special we are). As far as ‘going viral’ is a concern, such a title has the additional advantage that once they have clicked and discovered —like many others— how special they were, viewers would share the video with their friends to confirm or boast. Flattery is an effective distraction from a gaping falsity.
Gangnam Style
This is admittedly somewhat of a just-so story. I believe that the popularity, the international appeal, of Psy's Gangnam Style music video has much to do with the mechanism of novelty perception, even with the techniques of the clickbait, in whose terms we can describe it.
First of all, it's well produced and the music is catchy. Needless to say, this is not what makes a video get watched billions of times, though its lack would have turned people away. Its appeal, to put it crassly, is the same as the clickbait's. The reason that the video is enjoyed while the clickbait is hated is that the former stands for itself while the other is but a means to something else — your click; that there was nothing to it is revealed the moment one actually clicks, a ‘the king wears no clothes’ kind of moment. The music video remains unexplained and therefore a tantalizing mystery. If the clickbait were a poem and not a link, it would have similar qualities.
He thought It was Bigfoot's Skull, but then experts told him (THIS)
That the song is in Korean, a language spoken by a small percentage of the world's population, is at least somewhat crucial. The text conveys most of the actual meaning of the music video; the absence of that understanding, for non-Korean speakers, allows the attention to concentrate on the imagery, which is a series of the incomprehensible. These aren't merely contradictions of expectations —such as Mr. Bean failing to execute a common action, or coming up with an unusual solution to an ordinary task— but contradictions of expectations on a higher abstraction level, rendering the whole situation incomprehensible.
The first seconds already contain a succession of contradicted expectations. It's a plane. No: it's you looking at a plane, being cooled by a fanning lady. No: it's actually a reflection on a guy's shades. The guy is on vacation on a beach. No: not a beach, a playground. Another contradicted expectation later on: the build-up of the quick back and forth between Psy on the platform and the smooth young lady inside the metro car who approach each other bodes closing doors between them (especially as this scene was preceded by the elevator's closing doors) — but instead of their movement being frustrated, next they are together. And: singing — sitting on the toilet. And then the entire video is full of inexplicable details: horses and horse-riding dance moves; drowning inside a pool; a kid dressed like a thug dancing like Michael Jackson; dancing on a roof; being showered not just with confetti but with trash and foam, to the obvious displeasure if not of the singer then of his wing ladies; a shopping-cartful of tennis balls; a disco-ball-decorated bus with dancing elderly women; an explosion; backward-moving old joggers; horse dancing on a motor boat; the guy thumping his crotch above a lying Psy;21 a dance-off with a rival.
Whether a music video tells a story, or merely expresses a relationship between people, or not, many of its elements are arbitrary. Nonetheless, they carry symbolic meaning. Heaps of cash & guns, common especially in hip hop videos, symbolize success and wealth & rebelliousness, aggression, power. Yachts and cars, too, symbolize wealth and freedom of movement. Horses, on the other hand, symbolized wealth and freedom a millennium ago, and are nothing but horses now. Other common tropes are distorted so as to void their symbolic meaning. The spa, potentially a symbol of a shining, convenient life, is used as an indoor, dark, almost deserted swimming pool, only to the effect of drowning in it. The common trope, especially with male musicians, of doting young women is distorted in several variations. The ones that stride at Psy's elbows are blown at by debris; the fanning lady in the beginning is but a mirage; the dancing ladies in the bus are retired. The scene with the toned girls training at the quay has Psy not possessing but being tantalized by one of their plump behinds, exhibiting an expression so exaggerated that it defies recognition as a particular emotion. The sauna, the closest to an erotic scene in the video, is, if anything, homoerotic, as the touch is between men; it is not Psy who is being touched (and therefore yearned after) but the one who does the touching; and it is after all not an erotic touch, for it is not intentional, perhaps not even conscious, as Psy falls asleep on the shoulder of the man, who remains passive. The almost naked person dancing before them is a skinny man. (Notice that these are not a mere inversion, i.e. these are not ‘doting females’ turned into ‘rejecting females,’ which is also not an uncommon trope, but altogether something else.)
Nonetheless, the music video and its imagery are not ‘random.’ Randomness is boring. The most extreme and literally random equivalent would be the analog television's static, with white noise for background. Less literally random would be something like Ron Fricke's movies, which are characterized by their style and mood and whose engagement of the viewer comes about not through drama or even any principle unifying the scenes, but from the individual parts and the recognition, not quite explicit in them, of the earth's oneness. On the other hand, Gangnam Style in its parts is unified by the song, by the familiar format of the music video, and by the character of the singer, Psy, as well as by the recurrence of the horse motif (the animal, the dance, the merry-go-round ride) and Psy's interaction with others. That is, every scene is not an ‘and now for something completely different’ but another —albeit inexplicable— piece of Psy's story, serving to satisfy the viewer's curiosity; more exactly, the drive of the curiosity rather than the curiosity itself, for it remains unsatisfied to the end.
Humour
The best demonstration of (the successful result of) conceptualization's glee is humour's inducement of laughter. A while after I came up with a general theory about ‘what is funny,’ I turned to see whether it had been expressed by another before. In an interview I can no longer find, John Cleese (of Monty Python fame) expressed a similar idea but without the formalization. I cannot say that my review was exhaustive, but scanning the ‘Theories of humor’ Wikipedia page I find that published theories at most come close or formulate an inaccurate special case.
Put shortly, we are amused by (the result of) reconceptualization, by the recognition of a new emergent concept. A perception (phenomenon perceived as adhering to a concept) is recognized as false and replaced by another perception. This is not merely the movement from seeing the trees to seeing the forest, but of having seen a forest and suddenly seeing that it is something else. I believe all humorous phenomena could be explained thus. Why it is with laughter that we express it is another matter.
Higher (abstraction) level perceived phenomena consist of lower level perceived phenomena. That is, as we perceive lower level phenomena, higher level phenomena emerge. When —either by an additional low level phenomenon or by a simple redirection of attention— we perceive an alternative higher level phenomenon that fits as well or better, that's funny.
Comic ‘can't unsee’ images are not the most hilarious, but they serve well as a first example since their elements are arranged spatially rather than temporally. They present images of the familiar and provide a cue for seeing the image in a new way, i.e. as something different. When our attention is drawn to the stick-figure-like appearance of the KFC man's bowtie, the whole image transforms from a bust into a little person with a big head. A more total transformation occurs to The Hunger Games' logo —for those familiar with both— which looks like Johnny Bravo. On the other hand, noticing the silhouette of a bear —a symbol of Bern, where it is produced— in Toblerone's logo elicits no comic effect, since no reconceptualization occurs: there had been a mountain before, and there is a mountain still.
There are plenty of ‘fail compilations’ on the internet, collections of videos of people going through mild accidents. To judge by the laughter of the victim's company, they find the situation funnier than we, the viewers of the video, do. There are two reasons for that. The more obvious one is that, watching such a compilation, we expect accidents; we are aware of watching a reel of things going wrong, and when they do it's of little surprise. Second, the company has a more sophisticated notion of the victim. For them it's not ‘a person having an accident’ but ‘John trying to show off and failing.’ An incompetent person acting incompetently is not funny, because the action conforms to our perception of the situation. A competent person failing at something trivial is funny because it transforms our understanding of the situation. An athlete failing at a competition —not merely losing but stumbling or having an accident— is tragic rather than funny because we are aware from the get-go that the possibility of failure is part of the competition. It would be funny if the usual procedure got frustrated by a factor wholly outside the premises of the sport or the capabilities of the competitors, for example if a herd of sheep barged onto the race track or the ball got deflated. Intentionality —related to the expression of laughter, discussed below— also has to do with whether something is funny or not. A person transforming her manner of speech —altering her voice, her vocabulary— in imitation of another person, to the effect of transforming herself, to a limited degree, into that person, might be funny; a person speaking in that same manner which is his own, unintentionally as it were, is not.
Meta-jokes derive their humour from their recognition as jokes by the listener. Among them are long-winded jokes where the punchline delivers that which one would expect from the preceding narration had it been merely a narration and not a joke, i.e. the expectation of the unexpected is defied. Or jokes such as ‘A priest, an imam and a rabbi enter a bar. The bartender says, “What is this, a joke?”’ If this is funny, it is because the punchline shifts attention from the described situation to the description itself, prompting a recognition of a commonality of bar jokes.
People vary in their response to any one situation; what one might find funny another will not. A big part of it has to do with the persons' conceptualization of the world, and therefore with their respective perception of the leading situation and of the punchline (and how it transforms the situation). The possible responses to a joke or witticism range from ‘I don't get it,’ through laughter, to a mocking ‘“ha ha.”’ In the first case, the receiver does not experience a transformation of the understanding of the situation by the punchline. In the last case the receiver understands what the punchline was supposed to do, but had already expected it as a possible conclusion; that is, the ‘transformed understanding’ was not new to her (alternatively, the situation was transformed, but according to a world view perceived as wrong). This range is not a line but a circle, as the two ends touch each other; the ‘transformed understanding’ might be so obvious to the receiver that it is not only not new, but not transformed, i.e. in a manner of speaking it was how she understood the situation to begin with.
Though not necessarily expressed with the same kind of out-loud laughter, this kind of mirthful response, with its spectrum of responses, can be evoked by ‘non-humorous’ works and determine the aesthetic pleasure one gets from them. The transcending nature of art has already been discussed. The avant-garde is taken to be a particular kind of art, but I think that art, or Art, is always at the front of innovation, of form and content. Art is amazing when it involves innovation; the transcendence by itself is not enough. That a piece of string stretched over wood can produce music eventually becomes a familiar phenomenon. A work that is new —not a reproduction— but which is in no way innovative is generic, and therefore more craft than art. That doesn't make it necessarily uninteresting or bad, and how generic it is varies along a spectrum. Few works are sui generis; most adhere to at least broad genres, such as ‘poetry’ or ‘novel’ in the case of textual works. On the other end are trope-filled works that are more imitative of other works than of life, which deprives them —other than in the case of satire— of the possibility to amaze, since they would in no way reach anything that rings true, to say nothing of surprisingly true. Works that inspire awe are those that transcend their familiar medium in innovative ways, whether it is the first person to draw a violin, or a person putting the Western to unprecedented use.
A piece of narrative work —a novel, a movie, a series— could elude one's comprehension and evoke (particularly when it is a well regarded piece, rejecting the otherwise first response that it is trash) an ‘I don't get it’ response and with it a sense of stupidity. It might also defy rather than elude comprehension, again evoking an ‘it doesn't make sense’ response and with it, in the case of a well regarded work, a sense of anger. And it is works that are ‘brilliant,’ that not only make sense but make an original sense, which we enjoy the most.22
Narrative art is but a special case of communication, the artificial phenomenon perhaps most bound up with emergence; signs and meaning, signifiers and signified, are the low and high level phenomena, respectively, which constitute any system of communication as such. Comprehending language is not generally funny; there's an emergent understanding, but it is not surprising, i.e. not previously unexpected. The words mean what they have always meant. Humour can be derived when this is not the case, i.e. when words mean something new while adhering to the rules of the language, as in Abbott and Costello's ‘Who's on First?’ sketch, but this can also occur when the words are used in their most usual way, but the language itself is new. This is not a phenomenon I have ever heard being talked about, so I'm sharing my own experience. When learning a foreign language and experiencing a first-of-its-kind comprehension —a whole sentence in the wild, a passage in a book, a video— the glee is strong. I have a strong memory of such an occurrence that was followed by a disillusioning realization about my undue excitement. A German friend who knew about my efforts to learn his native language shared a link with me to a music video. It was the song Schokolade by Deine Freunde. I rewatched the video several times and for a short hour I became a big fan of this band, which strung rhyming words together to give expression to transcendental ideas, until I turned to search for a live performance and —can you guess it?— saw that their audience consisted of children (and their chaperons).23
Laughter
Why laughter? It is said that laughter reduces stress and that it signals group belonging. That stress is reduced, I speculate, is not due to a cause-effect relationship but because both are effects of a common cause — sudden comprehension leads both to joy (which is stress reducing) and to laughter. As for the group effect, this has to do with what the laughter expresses — comprehension. An individual discrepancy in laughter within a group, in either direction, would lead to awkwardness:24 a person laughing alone, continuously, in a social situation might be thought less of (and perhaps feel stupid), while a person alone in not laughing, in a situation where this could be noted by others, would similarly feel out of place. Nonetheless, I think it's the latter case, of remaining silent in a laughing crowd, that is the graver one. If the others laugh at what seems to one to be banal, one might feel that one is among simpletons and can do as one pleases. If, on the other hand, one doesn't get the humour, one might feel in trouble. I have recounted in the Saxon of Shaked's feeling, when sitting at a table that did not oblige her with speaking in a language she understood, that she was invited out of spite. I myself recently had a comparable experience: I went alone to a poetry slam evening at a pub in my neighborhood. My level of German comprehension varied from slammer to slammer, but though I laughed a few times, I had certainly missed most of that which made the crowd break into mirth. At some point in the evening a feeling grew in me; I knew it was absurd, but I couldn't help but feel that my presence as a dumb member of the crowd, who must have been mostly strangers to each other but were conjoined in their common understanding, was obvious, accompanied by a feeling that is perhaps best described by the word ‘paranoia,’ a vague sense of being a target of the environment's animosity.
Novelty perception & language
A note on the Sapir-Whorf Hypothesis
As a side note, I'd like to comment on the so called ‘Sapir-Whorf Hypothesis.’ I have not read any of the surrounding literature and am not attempting a serious addition to the debate, but I wanted to shed a casual light on the general idea from the point of view of this here essay. Wikipedia defines it as ‘a principle suggesting that the structure of a language influences its speakers' worldview or cognition, and thus individuals' languages determine or shape their perceptions of the world.’
Relevant in this debate are three main components: 1. The sensed world, 2. the conceived world, 3. language. The second is the organization of the first in our cognition. The third is the communication of the second to others. The second, our conceptions of the world, is not arbitrary but arises from our experience.25 Had we lived all of our lives on the equator and never communicated with anyone who lived outside of it, we wouldn't have a conception of ‘seasons,’ ‘ice,’ or ‘the north pole,’ and naturally no words for these either. The third, language, is not completely arbitrary either. It is a code, and though language does evolve over time, a person cannot quite start using made-up words and expect others to understand her. Regardless, to the extent that concepts are relevant in interpersonal interactions, they will have words to refer to them.
Colour terminology seems to be a particular occupation of investigators in this field, I suppose because colour —as opposed to ‘agriculture’ or ‘health’— is a sensory phenomenon. Experiments found that people were better at distinguishing and remembering colours for which their language has a name. Does this mean that language affects cognition? No.
It is perception which affects language (i.e. the available vocabulary) rather than the other way around. I cannot adduce a definite proof, but I'll try to convince you nonetheless. Let us take the person whose language has only one word to refer to blue and green and who has a harder time distinguishing these two colours. We could say that he has a single idealized conception of this colour and, using the terminology above, say that all the various shades of green and blue are noisy instances of it; though he would perceive a blue and a green next to each other as different, just as we would two shades of purple, when they appear one at a time he is, in a manner of speaking, blind to the variation. I claim, however, that this blindness came first and the poorer vocabulary came second.
First, notice that the capacity to distinguish does not rely on the availability of language. As mentioned earlier, we are good at distinguishing faces but have very little by way of language to describe them. Many people go by way of describing eye and hair colour, perhaps the most incidental features of somebody's face. How many people would you fail to recognize if they dyed their hair or closed their eyes? Then there are words to describe more defining features; a nose might be aquiline or snub, lips full or thin, eyes round or almond shaped &c, but much of the face is in the proportions and arrangement rather than the detail. Much more informative are words that put the face into categories such as sex, age, ethnicity, but still the description is not exact; we might know many old Bavarian women or young French men and never mistake one for another. The most accurate signifier would be of the form ‘Alice's face,’ but this is not a description, but a reference; telling somebody that Alice has such a face is essentially telling nothing, though saying that Alice ‘looks like Marilyn Monroe’ does.26 But in this latter case it is our ability to communicate, our language, that is dependent on the existence of the comparable face, not our ability to recognize.
Second, the availability of vocabulary does not confer the capacity to distinguish. The most trivial example is of individuals who are sensorily deprived; no words would make a blind person distinguish phenomena based on their apparent features. Further, a hearing person, like myself, might be familiar with the term ‘aspiration’ and its meaning and still not necessarily be able to distinguish between aspirated and unaspirated consonants, especially if presented with them, as with the colour experiments, out of context. Could you? You might demur and say that, unlike basic colours, this is a technical term and/or that you have learned the term too recently to have developed the necessary perceptual capacity. But both of these point to the same matter, namely that the distinction is largely outside one's experience. Depending on the education system you have gone through, you might have learned many a thing that you forgot soon after the relevant exams, as they served you in life in no way other than to pass those exams. That a word is a ‘technical term’ only means that it has a precise meaning in a given context that is outside the experience of most members of the language community.
Notice, it is the distinction that is outside the experience, not the phenomena distinguished. Each English consonant is pronounced either aspirated or unaspirated, depending on the consonant's context, so it is not as if one or the other is like snow to our equator dweller. Rather, in English, unlike in some other languages, un/aspirated pairs are allophones (a term coined by the very same Whorf) rather than phonemes — someone using the wrong aspiration would sound as though they had an accent rather than be uncomprehended. And just as, to you, unless you speak one of those other languages, aspiration might be a technical distinction, so would ‘blue’ and ‘green’ be for somebody in whose experience the distinction plays no part. This was the lot of most of humanity for most of history. Nowadays screens display a wide and precise spectrum of colours, many products come in a variety of colours and some might be customized, but until the emergence of the modern synthetic dye industry in the 19th century, colour, as a distinguishable, almost arbitrary property of an object, was largely unfamiliar. Colour was so exotic that in medieval Europe the highlighting of words in red in illuminated manuscripts, rubrication, was entrusted to a dedicated post, certain pigments were used in paintings not to express an idea or confer verisimilitude but to show off wealth, and sumptuary laws forbade the plebs from wearing clothing of certain colours. It's no coincidence that while the English words black, white, red have ancient Germanic roots, the word colour, originally meaning ‘complexion,’ was brought over rather late by the Normans.27 That is, while today a car or a pen can come in one or another colour, some centuries ago most objects had had their unique proper colour, so that colour could not be used to distinguish.
Third, the rise of an experience of a new phenomenon shared among a language community is followed by words to describe it. Language was not handed to Men by God. Its contours limit its users' ability to communicate, not to perceive, and this limit doesn't stop the users from expressing themselves but leads to the further development of the language once the need arises.
That being said, language of course affects cognition; this is what it is for, for one person to operate on the mind of another. It confers knowledge and directs attention. Beside speaking of the material reality, language is also used to create an intersubjective world, by ascribing properties that define relationships that are carried out by people, such as designations of class, debt, responsibility and so on. As this world is enacted by people and its perception is mediated through language, here the language in a way precedes perception, but really the language and the perceivable phenomena have a mutual relationship, such that without the language the phenomenon could not uphold itself, but without the enactment, or at least stories about it, the language to describe it could not be learned. Even so, a foreigner to a society might still observe this enactment, though she might misperceive it and misunderstand its exact mechanism. This misperception might be akin to the blue-green confounding, but this is anyhow a very special case.
The vocabulary of novelty perception
Though I believe this mechanism of novelty perception has not yet been described this way, the idea should not feel all too unfamiliar. The shift from deductive to inductive reasoning is a shift in the state of consciousness, a distinct experience common to us all; a vocabulary has arisen to speak of it, covering both stages of novelty perception —the trans-threshold accumulation of error and the reconceptualization of abstracted phenomena— and the phenomena that prompt it.
For the first phase of the process, we have the noun ‘awe,’ describing the emotion associated with the experience of that which leads to the accumulation of error, i.e. that which defies the expectations of our conception of the world. This is nonetheless not a general term; one would hardly use the word to describe the observation of an interesting but incomprehensible game, or the watching of the music video of Gangnam Style. Since people's conceptions of the world differ, what is inexplicable to one would be trivial to another, and so it seems like the word is reserved for phenomena that are universally inexplicable, beyond comprehension by their very nature, such as the idea of a transcendental god or of the vastness of the universe. It's this general disagreement, and therefore vagueness, of the term that led, I believe, the colloquial meaning of the associated adjectives to shift.
The word ‘awesome’ used to denote ‘awe-inspiring’ (from ‘awe’ + ‘-some’, as in winsome, fearsome, fairsome or buxom).
Wiktionary suggests, without citing a reference:
The oldest meaning of awesome is of “something which inspires awe”, but the word is now also a common slang expression. It was originally so used in the United States, where it had featured strikingly in the 1970 film Tora! Tora! Tora!, as used by Japan's Admiral Isoroku Yamamoto to describe the "awesome" industrial potential of the United States. Consequently, as the word popularly became an expression for anything superb, in its original meaning it has tended to be replaced by the related word, awe-inspiring.
I tend to believe this account. First of all, the movie, though apparently badly received, was popular enough to garner reviews, win an Academy Award and become the ninth highest grossing film in the US in the year of its release. It was watched by enough people to potentially influence language.
Second, based on one clip I saw, the Japanese cast of the movie spoke, appropriately, Japanese, with the English appearing in subtitles. That is, the word entered the movie not as part of English speech, adhering to the latter's conventions, but as a translation of a foreign expression; thereby it also crossed out of its thitherto formal context and entered colloquial, i.e. spoken, language. In other words, making an appearance in English through another language offered an opportunity for the word to appear in an unusual context.
Third, the shift in the word's meaning seems to have begun after the release of the movie. Inspection of the word's frequency in English print, with Google N-Gram, shows that it has been increasing ever since the beginning of the 20th century. Towards the end of the century there's a kink in the graph —between 1982 and 1992 (to trust Google N-Gram) there was a transient decrease in the frequency— which —this is a cute theory rather than an important observation— I suspect to be a superposition of the decrease of the word's usage in the meaning ‘awe-inspiring’ and of its increase in the meaning ‘superb.’ In book titles up to 1970 the word was used in that older sense: This Awesome Challenge: The Hundred Days of Lyndon Johnson, The Awesome Power of the Listening Ear, 30 Awesome Photos of Mother Mary, The Awesome Responsibilities of Revolutionary Leadership; as well as when appearing in running text: ‘This dilemma of how to teach, educate and bind together these diverse elements is compounded by the awesome problem of how to cope with huge masses of persons displaced in their own country [...]’ (from the Congressional Record, Volume 116, Part 1) and ‘The awesome is where Existence makes its power felt in the world of common sense’ (from the journal Philosophy in Process, Volume 1). From 2000 on, the word in titles is mostly used in the latter sense: The Book of Awesome (2010) (the word printed colourfully), Awesome! (2018) (a children's picture book), A Is for Awesome!: 23 Iconic Women Who Changed the World (2019), On Being Awesome: A Unified Theory of How Not to Suck (2017) and so on. In between these year ranges the word appears in titles in both senses.
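To make the superposition idea concrete, here is a toy sketch in Python (the curves and numbers are invented for illustration and are not fitted to the actual N-Gram data): an older sense that peaks and declines around the 1970s and 1980s, added to a newer sense that rises steeply towards 2000, yields a total frequency that climbs, dips transiently, and climbs again.

```python
# A toy illustration only: invented curves, not the real N-Gram counts.
import numpy as np

years = np.arange(1900, 2020)

# Older sense ('awe-inspiring'): a hump that peaks around 1975 and then declines.
old_sense = 0.6 * np.exp(-((years - 1975) / 15.0) ** 2)

# Newer sense ('superb'): a logistic rise centred around 2000.
new_sense = 1.2 / (1.0 + np.exp(-(years - 2000) / 5.0))

total = old_sense + new_sense

# The sum is non-monotonic: it dips before the new sense takes over.
window = (years >= 1975) & (years <= 2000)
dip_year = years[window][np.argmin(total[window])]
print(dip_year)  # around 1990 with these made-up parameters
print(total[years == 1975], total[years == dip_year], total[years == 2005])
```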
The adjective ‘awful,’ with an originally similar meaning, has undergone, together with ‘terrible’ and its derivatives, a shift much more in common with its equivalents in other languages, to mean either ‘bad’ or ‘excessive,’ as in ‘an awful amount,’ often with a negative connotation. As far as its relation to ‘awe’ goes, the adjective lost the sense of incomprehensibility, but retained both the sense, if not of grandeur then of size (higher abstract phenomena are greater than their components), and the sense of negativity (the incomprehensible is an unpleasant experience).
Two more adjectives are used to describe phenomena that elicit the experience of error accumulation: uncanny and eerie. The word uncanny, etymologically derived from ‘un’ + ‘canny,’ the latter related to ‘can’ in the sense of ‘to know,’ is today most associated —or so it seems to me— with the expression ‘the uncanny valley,’ referring to the dip in an otherwise rising positive response towards ever more realistically human-like humanoids (robots or computer imagery of humans) which occurs with phenomena that look almost just like humans, which almost fool us, but don't. Unlike cartoons or butter-robots, whose similarity to human beings is symbolic or analogous and which are therefore judged by their own standards, ‘realistic animation’ and ‘realistic androids’ are similar enough to humans that they are recognized as such and therefore judged by the standard of ‘human appearance,’ to the effect that if the imitation is not perfect, the deviation is perceived as error. Hence the uncanniness: they are perceived as something they are not —as video of humans when really animated, as a human when really a robot— thus prompting a negative ‘unfair’ judgment. The word uncanny is also used in other contexts, always denoting a phenomenon that is recognizable but which somehow defies our understanding of it. ‘Eerie’ has a similar meaning, but is applied to phenomena that are more directly perceived as, or indicative of, personal danger.
The active and successful reconceptualization stage is described by the word ‘wonder.’ There's the noun ‘wonder,’ a phenomenon that has defied expectations and changed people's conception of what is possible, and the equivalent adjective ‘wonderful.’ There's the verb ‘to wonder,’ which denotes the open seeking of answers in a state of awareness of one's ignorance (as opposed to ‘to question’ — to cast doubt; ‘to think’ or ‘to ponder’ — to analyze or report analysis; ‘to ask’ — to request missing information), as well as the not-quite-verb ‘to be amazed’ — to acquire new knowledge or understanding that contradicts previous expectations. ‘To marvel,’ too, connotes a lasting attention to that which begins to unfold its comprehensibility.
We already discussed awkwardness, when an occurrence breaches the conception of a social situation, but there's a sense of the word that is also used for situations when one is alone, perhaps mostly adverbially, as in ‘he fumbled with the cords awkwardly.’ Regardless, though not social, and I hope this does not sound too glib, it is the presence and judgment of a witness, namely the acting person himself, that makes the behaviour awkward. If there are high expectations of success —which keep the perception of the activity as a familiar phenomenon— that get frustrated due to lack of comprehension, the actions and their effects are perceived as awkward —stage 1 of novelty perception— with their sense of error and unpleasantness. However, for a person fumbling with the unfamiliar without expectations, merely with a mind curious to see what might happen, the activity falls into stage three, conceptualization, and is experienced as joyful. The former, especially when committed in public, is characterized by high self-consciousness, the latter by high inductive-reasoning activation, and both can be contrasted with the state of ‘flow,’ where inductive reasoning is virtually off and self-consciousness disappears altogether with the perception of anything irrelevant to the task at hand. The person turns into a kind of automaton specialized in the activity.28
For example, it could be the case that the background was most indicative. If a model is trained to distinguish pets from wild animals, the apartment or the forest is as telling as the animal itself.
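To illustrate the point, here is a small sketch in Python with invented data (not any particular model discussed here): a classifier trained to tell ‘pet’ from ‘wild animal’ ends up leaning on a ‘background’ feature that happens to correlate with the label during training, and fails once that correlation breaks.

```python
# A toy illustration with invented features: the 'background' cue is far
# cleaner than the 'animal' cue, so the classifier leans on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
label = rng.integers(0, 2, n)               # 0 = pet, 1 = wild animal
animal = label + rng.normal(0, 1.5, n)      # weak, noisy animal cue
background = label + rng.normal(0, 0.1, n)  # strong cue: apartment vs. forest

clf = LogisticRegression(max_iter=1000)
clf.fit(np.column_stack([animal, background]), label)
print(clf.coef_)  # the weight sits mostly on the background feature

# When pets are photographed outdoors (the correlation is inverted),
# the shortcut stops working and accuracy collapses.
test_animal = label + rng.normal(0, 1.5, n)
test_background = (1 - label) + rng.normal(0, 0.1, n)
print(clf.score(np.column_stack([test_animal, test_background]), label))
```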
Through a concurrence of linguistic expressions and other phenomena, by itself far from trivial. I'm thinking about how, while in Europe one points to one's chest when indicating oneself, in Japan, as far as I know, one points to one's nose. It's only through many repetitions —and with time, through known properties of the language— that it becomes clear what part of the phenomena the signifier in question points to.
Such a distinction is not necessarily the case. The sentence ‘This sentence is 36 characters long.’ is self-referential —it teaches about itself— and informative. Nonetheless, little teaching has this form.
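(A quick check of the count, assuming the sentence is taken exactly as quoted, closing period included:)

```python
# Verify the character count of the quoted self-referential sentence.
print(len("This sentence is 36 characters long."))  # 36
```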
By ‘sample’ I mean a single instance, a representation, of the pattern, such as an individual cat, which is a sample of the phenomenon of cats.
At the perception level, we are not merely insensitive to noise, but insensible to it, i.e. we do not perceive it and cannot draw our attention to it. Our visual perception, vision, is far from video-camera-like but highly informed by what we have learned about the world. This can be hinted at by optical illusions, or by the fact that when you cover one eye you do (not not) see what's in the field of the other eye's blind spot; your mind sees what the eye doesn't; in a sense, you hallucinate.
This is, as they say, a feature and not a bug. With limited computational resources, reasoning could only be accomplished with heuristics and biases — the same kinds that Kahneman, among others, described. The alternative to employing such usually-correct, sometimes-wrong heuristics is to be momentarily dumbstruck —like a hanging computer— before responding to a stimulus, with the overall effect that, since reality is not going to wait for us but continues to flow, we would find ourselves permanently paralyzed.
This is best practice rather than a necessity. On the one hand, an initialized, pre-training model could already make predictions, albeit random ones. On the other, a model making predictions could at the same time ‘learn,’ modifying its weights by assuming that its prediction was correct, i.e. treating the sample as yet another data point of the training set.
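As a rough sketch of that second possibility, in Python, using scikit-learn's online SGDClassifier as a stand-in (the data and setup are invented for illustration): the model is first fitted on a small labelled set, then keeps updating itself on unlabelled inputs by treating its own predictions as labels.

```python
# A minimal sketch, with invented data: a classifier that keeps 'learning'
# from its own predictions, treating them as if they were true labels.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# A small labelled training set: two Gaussian blobs.
X_train = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

model = SGDClassifier(loss="log_loss", random_state=0)  # 'log' in older sklearn
model.partial_fit(X_train, y_train, classes=[0, 1])

# Unlabelled samples arriving one at a time: predict, then immediately
# update the weights as though the prediction were ground truth.
for _ in range(200):
    x = rng.normal(rng.choice([-2, 2]), 1, (1, 2))
    y_hat = model.predict(x)     # the prediction the model would act on...
    model.partial_fit(x, y_hat)  # ...fed back as a pseudo-label
```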
I've been tempted to call it ‘memetic drift,’ only that, unlike the ‘genetic drift’ it plays on, it doesn't involve a random process, at least not in the same sense, so that the analogy is not sufficiently there to merit the term.
A set of clothing that was ‘daily wear’ in another decade might be perceived as ‘dressing up’ if worn today, or even, if from a very distant time, as a ‘costume.’
The progress in robotics that Boston Dynamics ushered in was truly a leap. I happen to remember watching unedited footage of an obstacle-course competition for androids, circa 2013. As far as its dynamism went, it lay somewhere between daisies growing and a chess match. The androids spent most of the time standing, and more often than not movement was an unbalanced collapse, sometimes spontaneous. The sudden appearance of footage of a robot dancing was more plausibly, though falsely, explained as animation.
The way it is done is by instructing him to concentrate his gaze on a cross centered on the screen, with each word flashing on one side of it; the eyes are innervated such that information from the eyes' left field goes to the right hemisphere, and vice versa. The body is also contralaterally innervated, i.e. the left hemisphere controls the right side of the body.
Supposedly. I'm left somewhat confused, since it was his right hand (left hemisphere), the one that got ‘music,’ which chose the belfry. The paper where the results were published offers no elucidation. Other experiments, anyhow, demonstrated that information reaching the separate hemispheres of the corpus-callosum-deprived individual stays there, and when it affects the behaviour of the body, the other hemisphere justifies the behaviour based on what it knows and under the assumption that it has the relevant information — when in fact it doesn't.
Presumably akin to how ‘deep dream’ —remember how amazing it seemed at the time?— could accentuate the most dog-like features in an otherwise dogless image.
Though, strictly speaking, an individual is also a class of perception — we encounter him over time, over various moods and states.
We have indeed reached a point where a pair of videos shown side by side, one of them face-swapped, can be presented and it is a serious challenge to tell which is the ‘real’ one. Such was my experience, anyway, with this video:
Strictly speaking, it could really have been the only source. I'd say error —like in biological evolution— is certainly one way by which novelty arises in the world: the speaker meant one thing, the hearer understood something different, gave it credit, and thus a new idea came into the world. Whether it persists depends on whether it would become true (i.e. accepted as the case by consensus, as distinct from ‘real’), that is, whether it would gain traction in the minds of people. The only example of such an occurrence that comes to my mind —though I imagine error in understanding has brought about invention many times throughout history— has to do with Christianity. A prophecy in the Hebrew Bible's Book of Isaiah, which is referred to in the Christian Gospel of Matthew, speaks of a divine sign: an almah, a lass, a young woman, who would bear a son. The Septuagint, the Greek translation of the Bible, rendered the word as parthenos, a maiden, which in Matthew's words became ‘a virgin,’ and hence the idea of the virgin birth was born, with whatever theological and thus ideological consequences it might have had. Needless to say, that Mary was married or that the prophecy named the son ‘Immanuel’ didn't bother Matthew or his followers.
These can be projections in time, whether predicting the future or ‘predicting’ the past (‘the street is wet, therefore it was raining’), as well as conceptualized projections, such as perceiving a forest when perceiving a concentration of trees. Notice that in this case the ‘forest’ is an existing, known concept, and not a newly derived one, as would be output by inductive reasoning.
And, likewise, voice becomes speech, ink – text, interactions a relationship, a population transcends itself and society emerges.
By which I don't mean whether it is true that clickbaits are deceptive or not, but whether it could at all be true or false. A sound can be neither ‘red’ nor ‘not red,’ and therefore whether a particular sound is red cannot be stated, other than metaphorically. URL titles as descriptions can be deceptive, but the format of clickbaits is such that they are not even descriptive, and not being a description they cannot quite be deceptive in a straightforward sense.
Fiction, even more so fantastical fiction, is literally —ho, ho— unbelievable. Nonetheless, hoaxes aside, it doesn't advertise itself as true, and yet, when it's good, it constitutes a kind of credible ‘hypothetical truth.’
Mind the similarity with gambling. The slot-machine player knows that the odds are worse than poor, but feels he is special and seeks to find that out.
In general, the movements of dance are arbitrary, deriving their aesthetics from emotional expression & demonstration of control over the body. Dance is ubiquitous in music videos, including sexy and suggestive dances, directed either at the viewer or at another person in the video. They're meaningless in the sense that they express nothing greater than themselves, but they are not inexplicable for the same reason — they are recognized as dancing. What makes the elevator ‘dance’ inexplicable is that it defies danciness, more a lewd movement than a dance. Moreover, while the guy looks directly at the viewer, he is situated in a spatial relationship with Psy, standing above him, thrusting, inside the narrow and functional space of the elevator, while the two ignore each other. These movements therefore beg an explanation which fails to appear.
To give another example of something similar from elsewhere: the YouTube channel Without Music presents music videos as they would have sounded had they been stripped of the music, which has a comic effect (the sounds are both unexpected, it being a music video, and ring true). However, there's an additional effect when the source material is not a standalone music video but a number from a musical. In such films the presence of music separates the parts adhering to the conventions of drama and verisimilitude from the parts adhering to the conventions of opera. The removal of the music removes also the transition, such that what we get, in the first few seconds, is not the usual ‘a music video without music’ but an alteration of the situation's meaning. In this case we get ‘an overly confident person executing a silly motivational speech.’
As I said, laughter is not something usually associated with this kind of enjoyment, but I can testify about myself that when I watch something that I find wonderful —perhaps only when I watch alone at home and pause the video for whatever reason, which gives me a brief opportunity to take the piece in as a whole— I break into joyful laughter.
Nonetheless, enjoyed while a foreign language is being learned, guilty pleasures might feel more excusable.
Wikipedia lumps awkwardness together with embarrassment even though it's a separate phenomenon. Since it's related to this here essay, it won't be out of place to say a few words about it. Awkwardness arises when a low level action disagrees with the higher level perception of a social situation. A potential employer inviting a candidate during an interview to sit down on the couch for a session of video gaming, or asking them about their love life or even hobbies, is awkward by virtue of not being pertinent. If, say, the company were a video game developer, the former invitation could seem congruent with the idea of the interview. Whether or not the candidate enjoys playing video games has little to do with it. Asking a bank teller to offer an opinion about an article of clothing, offering money to a friend for a favour or to a spouse after a good time in bed, are all awkward as they are suggestive of a relationship other than the one having been established. Awkwardness' unpleasant inconvenience arises from the incomprehensibility it renders, and thereby from a behavioural impasse (hence, I think, the etymological roots of the word). Embarrassment arises from the recognition of having unintentionally breached the situation's rules and thus betrayed one's ignorance of them (shame, a special case of embarrassment, arises when the rule broken pertains not to the situation but to society as a whole). Embarrassment might be shared to the degree that others have sympathy for the actor, but otherwise it is the latter's. An opponent might make a competitor feel awkward during a chess tournament by moving a rook diagonally —the flow of the game must be interrupted to pinpoint the illegitimate move— but the embarrassment, once the misstep is realized, would be his alone.
It is not arbitrary in the sense that people do not choose how to perceive the world. There's a prevailing ‘knowledge is possession’ metaphor. People talk of ‘acquiring skills’ and ‘language acquisition.’ The metaphor, however, functions only at a narrow angle. Yes, we can gain knowledge like we gain things, as well as turn it to our advantage the way we utilize tools. But, unlike possessions, we do not lose knowledge when we share it (quite the opposite), nor can we actively get rid of it or have it be stolen from us. This is not merely sloppiness of expression, where one thing is said but another is correctly understood by all; this metaphor also affects our thinking (and thereby affirms the suspicion of the Sapir-Whorf Hypothesis). It muddles our thinking about important issues such as ‘intellectual property’ and gives rise to dangerous misguided notions such as ‘cultural appropriation.’ More to the point, unlike tools, knowledge is not something that we can use, but something that we must use. Or, more exactly, knowledge is not something that we have but something that, in a way, effects what we are. We often distinguish between what we think and what we do, and for the most part this is a meaningful distinction. Nonetheless, I want to make it clear, perception is an unconscious action; we are conscious of its result but we do not will the process. Beyond directing our attention —to the extent that even that is willful— we do not decide what we perceive or comprehend. As in the ‘can't unsee’ meme, once we know a language we cannot not understand it when we hear or see it. We are of course well aware of this. Lying can be difficult. Some people prefer not to know —avoid taking a test for a disease or hearing a damning secret about a loved one— because once they know, unlike with an item that can be thrown away, it cannot be undone.
Some colour words indeed arose this way, such as pink and orange, named after the flower and the fruit respectively.
Don't quote me on this; not because it's not true, but because it does not necessarily imply what I suggest it does. The German word for colour, Farbe, has apparently similarly ancient roots, and it might be that the Norman term simply displaced a commonly used Germanic word.
Mihaly Csikszentmihalyi, who came up with the term ‘flow’ and described it, identified ‘flow’ as an area on a two-dimensional characterization of activities where both ‘challenge’ and ‘skill’ are high. I think he got it slightly wrong. First, the relevant dimension is not ‘skill’ but ‘familiarity,’ i.e. correct conceptualization of the activity, including the understanding of the relationship between action and reaction, how one can execute the actions, what the goals are and so forth. The person must be able to execute the actions, but she doesn't have to be able to execute them well — only well enough not to feel like she is wasting her time and should do something else. Second, and here Csikszentmihalyi's error is greater, the relevant dimension is not ‘challenge’ but the task's ‘extrinsic subjective importance’ (or perhaps importance divided by the expected time/effort required to accomplish the task). By ‘subjective’ I mean ‘as judged by the actor,’ by ‘extrinsic’ I mean ‘as judged outside the activity.’ What makes a task boring is not a low challenge level, but its perception as unimportant and therefore potentially unnecessary. It might be said that what I'm describing is a state different from Csikszentmihalyi's flow, but I think I speak of the same hypnotic-like state that he alludes to, and which is experienced over a broader spectrum of experiences, occurring not just when one plays basketball or paints well, but even when one is engaged in as mundane a task as shopping for groceries.