How do people seek consistency among their attitudes?

Vocational interests as personality traits

Gundula Stoll, Ulrich Trautwein, in Personality Development Across the Lifespan, 2017

The secondary constructs

Holland postulated secondary constructs that moderate the predictions and explanations suggested by his theory: Consistency, Differentiation, and Congruence. These secondary constructs help to describe interest profiles and can be applied to both individual interests and environmental models. Consistency reflects the extent to which interest types are related within a person or an environment. Individual interest profiles can be more or less consistent. According to the calculus assumption, some pairs of interest domains are more closely related than others. High consistency means that a person’s highest interests—or the most important interests in an environment—are closely related (oriented adjacently; e.g., R and I). Research has suggested that people are more likely to have consistent interest profiles than inconsistent profiles. Differentiation reflects how clearly a person or environment is defined. Profiles with clear peaks are more differentiated than flat profiles. Congruence reflects the fit between an individual’s interest profile and the interests reflected by a specific environment. Holland postulated that different interest types require different environments and that people strive for congruence because high congruence is associated with greater stability in career decisions as well as better performance and higher satisfaction in the work environment (Holland, 1997).
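
Because these constructs are defined over a six-score RIASEC profile, they are easy to illustrate computationally. The sketch below uses deliberately simplified scoring rules (hexagon adjacency for consistency, peak-minus-mean spread for differentiation, first-letter agreement for congruence); these are illustrative stand-ins, not Holland's published indices, and all profile numbers are invented.

```python
# Minimal sketch of Holland's secondary constructs for a RIASEC profile.
# The formulas are simplified illustrations, not Holland's published
# indices (e.g., not the Iachan differentiation index or the C-index).

TYPES = "RIASEC"  # Realistic, Investigative, Artistic, Social, Enterprising, Conventional

def hexagon_distance(a: str, b: str) -> int:
    """Steps between two types on the RIASEC hexagon (1..3 for distinct types)."""
    d = abs(TYPES.index(a) - TYPES.index(b))
    return min(d, 6 - d)

def consistency(profile: dict) -> int:
    """3 = adjacent top-two types (high), 2 = alternate, 1 = opposite (low)."""
    top2 = sorted(profile, key=profile.get, reverse=True)[:2]
    return 3 - (hexagon_distance(top2[0], top2[1]) - 1)  # distance 1/2/3 -> 3/2/1

def differentiation(profile: dict) -> float:
    """Peak-minus-mean spread: higher = more clearly peaked (differentiated) profile."""
    scores = list(profile.values())
    return max(scores) - sum(scores) / len(scores)

def congruence(person: dict, environment: dict) -> int:
    """Crude first-letter congruence: hexagon closeness of the two dominant types."""
    p = max(person, key=person.get)
    e = max(environment, key=environment.get)
    return 3 - hexagon_distance(p, e)  # 3 = same type, 0 = opposite type

person = {"R": 28, "I": 25, "A": 10, "S": 8, "E": 12, "C": 14}
lab_environment = {"R": 20, "I": 30, "A": 8, "S": 10, "E": 9, "C": 15}
print(consistency(person), differentiation(person), congruence(person, lab_environment))
```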

URL: https://www.sciencedirect.com/science/article/pii/B9780128046746000259

Social Psychology, Theories of

S.T. Fiske, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3.1 Understanding, Within Individuals

Derived from gestalt psychology, three types of theory focus on processes within the social perceiver: attribution, impression formation, and consistency theories. Other attitude theories and self theories build indirectly on these origins, but still emphasize understanding as primary.

Heider's theories of social perception focused on harmonious, coherent wholes: invariance in perceived personality. Heider's social perceiver, portrayed as a naive scientist, searches for consistencies in behavior in order to make coherent dispositional attributions (inferring stable, personal causes). Other attribution theories developed: Jones's theory of correspondent inference describes how perceivers impute dispositions that fit an actor's behavior, with attributions strengthened by a behavior's unique (‘noncommon’) effects and low social desirability. Kelley's covariation theory likewise notes a behavior's distinct (i.e., unique) target and its degree of consensus (i.e., desirability) across actors, but adds the observation of consistency over time and circumstances. Extending these foundational theories, stage models (Quattrone, Gilbert, Trope) converged on automatic categorization or identification of behavior, followed by dispositional anchoring or characterization, followed by controlled situational correction or adjustment. These recent theories bring dual-process perspectives to attribution theories that had emphasized controlled processes.

A second line of person perception theories also originated in gestalt ideas and eventuated in dual-process models. Asch proposed a holistic theory of impression formation, in which the parts (most often personality traits) interact and change meaning with context. The alternative, an algebraic model that merely summed the traits' separate evaluations, matured in Anderson's later averaging model of information integration. Asch's most immediate heirs were modern schema theories, examining impression formation as a function of social categories that cue organized prior knowledge. Following Taylor and Fiske's cognitive miser perspective, social perceivers were viewed as taking various mental shortcuts (below), schemas among them. Eventually, a more balanced perspective emerged, describing perceivers as motivated tacticians, who sometimes use shortcuts and sometimes think more carefully, depending on their goals. One such dual-process model, the continuum model (Fiske and Neuberg), holds that perceivers begin with automatic categories (e.g., stereotypes), but with motivated attention to additional information, perceivers may individuate instead. Brewer's dual-process model posits automatic identification processes and controlled categorization, individuation, and personalization.
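
The divergence between a summation rule and Anderson's averaging rule can be made concrete with a toy calculation; the trait valences and weights below are invented for illustration. Adding a mildly positive trait to a very positive impression raises a summed evaluation but lowers an averaged one.

```python
# Illustrative contrast between additive and averaging algebraic models of
# impression formation. Scale values and weights are invented for the example.

def summed_impression(traits):
    return sum(value for value, weight in traits)

def averaged_impression(traits):
    """Anderson-style weighted average: impression = sum(w*s) / sum(w)."""
    return (sum(value * weight for value, weight in traits)
            / sum(weight for _, weight in traits))

strong = [(+3.0, 1.0), (+3.0, 1.0)]   # two very positive traits
with_mild = strong + [(+1.0, 1.0)]    # add one mildly positive trait

print(summed_impression(strong), summed_impression(with_mild))      # 6.0 -> 7.0 (rises)
print(averaged_impression(strong), averaged_impression(with_mild))  # 3.0 -> 2.33 (falls)
```

This opposing prediction for mildly positive information is one classic way the two algebraic rules have been pitted against each other empirically.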

Other mental shortcuts emphasize relatively automatic processes, as defined by Bargh to include being unintentional, effortless, unconscious, and unstoppable. Illustrative theories address influences by stimuli that are arbitrarily salient in the environment (Taylor and Fiske) or accessible in mind (Higgins, Bargh). In Kahneman and Tversky's heuristics, people estimate probabilities by irrelevant but convenient processes: ease of generating examples (availability), ease of generating scenarios (simulation), and ease of moving from an initial estimate (anchoring and adjustment). Norm theory (Kahneman and Miller) posits that people retrospectively estimate probabilities via the simulation heuristic, bringing to mind similar but counterfactual scenarios. Although not a theory, a catalog of inferential errors and biases follows from people's limited capacity (Nisbett and Ross; see Sect. 4.1).

A third line of theories follows from gestalt approaches. Heider's balance theory posits that perceivers prefer similarly evaluated people and things also to belong together. This emphasis fits other consistency theories. Most prominently, Festinger's cognitive dissonance theory holds that people seek consistency among the cognitions relevant to their attitudes, including their cognitions about their own behavior. Attitudes change more easily than behavior, especially behavior based on counter-attitudinal advocacy, forced compliance, free choice, unjustified effort, and insufficient justification (see updates emphasizing self-esteem (Aronson), accepting responsibility for an aversive event (Cooper and Fazio's new look), and self-affirmation (Steele)). Other consistency theories hold that people seek harmony within and between their attitudes.

Some attitude theories focus on understanding dual processes, downplaying consistency motives. The elaboration likelihood model (Cacioppo and Petty) describes the object-appraisal (understanding and evaluating) function of attitudes, but depicts two modes, depending on motivation and capacity: The peripheral mode processes persuasive communications based on superficial, message-irrelevant cues (such as communicator, context, or format), whereas the central mode processes message content, generating cognitive responses pro and con, which predict persuasion. Chaiken's heuristic–systematic model similarly proposes a rapid, simple process that contrasts with a more deliberate, in-depth process of attitude change.

Focusing on understanding via deliberate control, far from gestalt perspectives, two subjective expected utility models predict attitude–behavior relations. Fishbein and Ajzen's theory of reasoned action posits that behavior results from intention, which in turn results from attitudes toward a behavior (evaluating the behavior's consequences, weighted by likelihood) and from subjective norms. Ajzen's updated theory of planned behavior adds a third component to predict intentions, namely perceived behavioral control.
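
The expectancy-value structure of these models lends itself to a worked example. In the sketch below, all beliefs, evaluations, and regression weights are invented; in actual applications the weights are estimated empirically for each behavior and population.

```python
# Hedged sketch of the expectancy-value structure behind the theory of
# reasoned action / theory of planned behavior. All numbers are invented.

def expectancy_value(beliefs):
    """Attitude toward the behavior: sum of likelihood-weighted consequence evaluations."""
    return sum(likelihood * evaluation for likelihood, evaluation in beliefs)

# Beliefs about jogging: (subjective likelihood 0..1, evaluation -3..+3)
beliefs = [(0.9, +3),   # "jogging improves my health" (likely, very good)
           (0.6, -1),   # "jogging takes time from work" (somewhat likely, mildly bad)
           (0.3, -2)]   # "jogging causes injury" (unlikely, bad)

attitude = expectancy_value(beliefs)   # 2.7 - 0.6 - 0.6 = 1.5
subjective_norm = +2.0                 # important others approve
perceived_control = +1.0               # third component added by planned behavior

w_att, w_norm, w_pbc = 0.5, 0.3, 0.2   # illustrative (not empirically estimated) weights
intention = w_att * attitude + w_norm * subjective_norm + w_pbc * perceived_control
print(round(intention, 2))             # 0.75 + 0.6 + 0.2 = 1.55
```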

Like theories of attitudes and social perception, theories of self-perception emphasize coherence. Self-schema theory (Markus) describes few, core dimensions for efficiently organizing self-understanding. Self-concepts may be more or less elaborate, resulting in respectively more stable and moderate or volatile and extreme self-evaluations (Linville's complexity–extremity theory).

People learn about themselves partly by looking at others. Festinger's social comparison theory describes how people strive to evaluate themselves accurately, by comparing their standing relative to similar others, on matters of ability or opinion. Schachter extended this work in exploring the fear-affiliation hypothesis, whereby people under threat affiliate with similar others, perhaps to gauge the appropriateness of their emotional reactions. From this came Schachter's subsequent arousal-cognition theory of emotion, positing that unexplained physiological arousal elicits cognitive labels from the social context. Although the outside world may anchor self-understanding, other theories describe how the self anchors understanding of the world outside, as in Sherif and Hovland's social judgment theory, explaining how people assimilate nearby attitudes within a latitude of acceptance and contrast far-off attitudes within a latitude of rejection.

URL: https://www.sciencedirect.com/science/article/pii/B008043076701648X

Effectiveness in Humans and Other Animals

Becca Franks, E. Tory Higgins, in Advances in Experimental Social Psychology, 2012

2.1.3 Establishing reality from cognitive consistency

In his introductory chapter to the landmark book, Theories of Cognitive Consistency: A Sourcebook (1968), Newcomb described the remarkable emergence of scientific attention to cognitive consistency motives (see Newcomb, 1968, p. xv): “… So it was a decade or so ago when at least a half dozen of what we shall call ‘cognitive consistency’ theories appeared more or less independently in the psychological literature. They were proposed under different names, such as balance, congruity, symmetry, dissonance, but all had in common the notion that the person behaves in a way that maximizes the internal consistency of his cognitive system; and, by extension, that groups behave in ways that maximize the internal consistency of their interpersonal relations.” To provide just a flavor of how people work to establish realities that make sense to them, we will briefly consider Festinger's (1957) cognitive dissonance theory.

Dissonance theory is concerned with resolving cognitive inconsistencies in order to make sense of what has happened. Importantly, the theory of cognitive dissonance was conceptualized by Festinger in terms of truth, in terms of establishing what is real. According to Festinger (1957, p. 260), “the human organism tries to establish internal harmony, consistency, or congruity among his opinions, attitudes, knowledge, and values.” When people fail to do so, they experience dissonance, which gives rise to pressures to reduce that dissonance. Importantly, he states (1957, p. 3): “In short, I am proposing that dissonance, that is, the existence of nonfitting relations among cognitions, is a motivating factor in its own right.”

A classic example of people trying to make sense of an event that produced dissonance is described by Festinger, Riecken, and Schachter (1956) in their book, When Prophecy Fails. The study was inspired by a headline they saw in the local newspaper: “Prophecy from planet Clarion call to city: flee that flood.” Here was a group of people expecting that alien beings from planet Clarion would arrive on earth on a specific date and take them away on a flying saucer (thereby saving them from the great flood that would then end the world). Festinger and his colleagues predicted that this expectancy would be disconfirmed, which would create dissonance especially because many members of the group had made sacrifices like quitting jobs and giving away possessions in preparation for leaving the earth. And, indeed, it was disconfirmed.

One solution to this truth problem would be to try to make sense of what happened by establishing some new reality. This solution would involve creating new truths that are consistent with their previous beliefs and actions. This happened. New judgments about the present and predictions about the future were made that were consistent with the original belief, with the disconfirming event being treated like a bump in the road. After disconfirmation, for example, there was a sharp increase in the frequency with which group members decided that other people who telephoned them or visited their group were actually spacemen. They tried to get orders and messages from the “spacemen” for a future reality that would be consistent with their original beliefs.

Another way to make sense of what happened is to maintain the same belief about being taken away in a flying saucer but just change the date. This would justify the sacrifices that were made by increasing the value of their original belief. To strengthen the belief, new converts would be needed, which requires proselytizing. Indeed, this also occurred, with some group members proselytizing their beliefs after the disconfirmation. Notably, this proselytizing solution reflects not only effort justification but also the motivation to create a shared reality with others that their beliefs are true. This is yet another way to establish what is real that we discuss next.

URL: https://www.sciencedirect.com/science/article/pii/B9780123942814000064

Computational Psycholinguistics

R. Klabunde, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Computer Models of Human Language Processing

Although most computer models in production and comprehension simulate processes around the level of lexical items (cf. Norris 1999), there are also several models at the sentence level. Computer models of the mechanisms in sentence comprehension simulate how people obtain a particular syntactic and semantic analysis for a sentence (e.g., Crocker 1996, Crocker et al. 2000). Computer models of sentence production simulate how people construct, depending on retrieved lexical information, the syntactic structure of a sentence, and reproduce typical speech errors that occur at the sentence level (cf. de Smedt and Kempen 1987). Computer models that go beyond the sentence level to the production or comprehension of discourse turn out to be much harder to develop, since it is very difficult to determine all discourse-related parameters in experimental studies. From a broader perspective, the phenomena to be considered on the discourse level are all related to the notion of inference; they range from the interpretation of referring expressions to listener modeling.

The computer models that are presented in this section are models of various stages in language processing. The list is far from being exhaustive: It does not include all computer-implemented models that have been developed for one processing task, nor do the models completely cover all tasks in human language processing. The models presented should give an impression of the advantages of computer modeling in psycholinguistic research.

Differences in the architectural basis and the techniques used depend on the theoretical background and the system's task. Each of the computer models presented below will be characterized along the following dimensions: first, the task in human language processing that the model simulates will be outlined. Then the architectural basis for the processing mechanisms will be characterized and the technique used for the simulation will be presented. Finally, relations between the system's behavior and empirical data will be outlined.

3.1 Computer Models of Language Production

Since models of human conceptualization are in the early stages of development, there are as yet no elaborated computer models of conceptualization. Issues related to the question of how people organize their pre-linguistic knowledge in order to put it into language have been addressed primarily in Artificial Intelligence research, especially in Natural Language Generation, but the models developed in these disciplines are not intended to simulate human processing. Conceptualization thus remains underdeveloped in computational psycholinguistics.

Computer modeling starts at the interface between conceptualization and formulation, viz. the retrieval of lemmas from the lexicon, given pre-linguistic information that needs to be expressed. Models of lemma retrieval come in two forms, symbolic and connectionist; here only two connectionist models, both based on spreading-activation networks, are characterized, because they show the merits of computational modeling in psycholinguistics particularly well.

The decompositional model of Dell and O'Seaghdha (1992) aims at the selection of appropriate lemmas so that the relevant conceptual features can be linguistically expressed. The system's architecture is non-modular and interactive; hence, direct feedback is possible. The system uses a network consisting of two layers: nodes for conceptual features and nodes for lemmas. Bidirectionality of the links between these two layers allows the model also to be used for language comprehension. After the conceptual features are activated, activation spreads towards the lemma nodes and back. Depending on the chosen parameter values for the spreading formula, the model makes several empirical predictions with respect to the activation of semantically related lemmas and the speed of activation. However, the model also incorrectly retrieves hyponyms, the semantically subordinate lemmas of a given lemma, because all features that activate a word will also activate the word's hyponyms. For example, if the conceptual features activate the word ‘pet,’ they will also activate the words ‘dog,’ ‘cat,’ etc. These hyponyms can receive a higher activation value than the target word ‘pet.’
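
The hyponym problem follows directly from the network's geometry: every conceptual feature of a word is, by definition, also a feature of its hyponyms. Below is a toy reproduction; the feature sets and the one-step spreading rule are our inventions for illustration, not Dell and O'Seaghdha's actual network or parameters.

```python
# Toy demonstration of the hyponym problem in a decompositional
# spreading-activation lexicon. Feature sets and the spreading rule are invented.

lemmas = {
    "pet":  {"ANIMATE", "DOMESTIC"},
    "dog":  {"ANIMATE", "DOMESTIC", "CANINE"},
    "fish": {"ANIMATE", "AQUATIC"},
}

def activate(target_features, weight=1.0):
    """One step of feature-to-lemma spreading: each active feature sends
    activation to every lemma that contains it."""
    return {lemma: weight * len(features & target_features)
            for lemma, features in lemmas.items()}

# Intending to say 'pet' activates ANIMATE and DOMESTIC ...
print(activate({"ANIMATE", "DOMESTIC"}))  # {'pet': 2.0, 'dog': 2.0, 'fish': 1.0}
# ... and the hyponym 'dog' matches every feature of 'pet', so it is activated
# just as strongly as the intended target (and would win given any extra
# feedback or noise in its favor).
```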

The nondecompositional model of Roelofs (1992) also aims at the selection of lemmas, but in contrast to the previous model, each concept is represented as a single node rather than as a set of conceptual features; i.e., the model assumes that concepts cannot be decomposed into conceptual primitives. In this model, activation spreads towards the lemma nodes, and the lemma whose activation level exceeds that of all other nodes by some critical amount is selected. The model correctly describes the activation of lemmas given some activation of concepts, and it does not suffer from the hyponym problem. Furthermore, the model makes predictions about activation speed that have been tested empirically and turn out to be correct.
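
The critical-difference selection rule described here can be sketched in a few lines; the activation values and the threshold below are invented for illustration.

```python
# Minimal sketch of a critical-difference selection rule of the kind the
# nondecompositional model uses; activations and threshold are invented.

def select_lemma(activations, critical_difference=0.5):
    """Return the lemma whose activation exceeds every competitor's by the
    critical amount, or None if selection must wait for more spreading."""
    ranked = sorted(activations.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    return best[0] if best[1] - runner_up[1] >= critical_difference else None

print(select_lemma({"pet": 2.1, "dog": 1.9}))  # None -> keep spreading activation
print(select_lemma({"pet": 2.8, "dog": 1.9}))  # 'pet' -> selected
```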

Since an evaluation of both models along all dimensions is almost impossible, only the hyponym problem is discussed further. The absence of the hyponym problem in the nondecompositional activation model, together with its correct empirical predictions, is a strong argument for the assumption that concepts do not consist of conceptual primitives. Whether mental representations of conceptualized objects, events, times, etc. are built up from conceptual primitives is a topic that has long been intensively discussed in linguistics and the philosophy of mind. If the successful implementation of a cognitive function is taken as evidence of a theory's consistency, these results argue against conceptual decomposition.

Two systems simulating the next step in language production, namely the construction of syntactic structures of the sentences to be expressed, are now presented.

The incremental parallel formulator (ipf) (de Smedt 1990) simulates how syntactic structures are constructed in a piecemeal way when conceptual increments are given as input at specific time points. These increments are mapped onto single words or thematic roles. The system is based on a modular architecture and uses graph unification as its sole operation. Graph unification is the standard operation in computational grammar formalisms: it combines two informational units into a third if their information is not contradictory. The ipf formulator simulates parallelism in formulation. By means of the timing and accessibility of incoming conceptual fragments, the system also explains specific word-order variations.
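
Unification itself is a small algorithm. The sketch below operates on flat feature dictionaries to show the success/failure behavior; the grammar formalisms ipf builds on unify recursive, re-entrant graphs, and the feature names here are invented.

```python
# Minimal sketch of unification over flat feature structures. Real grammar
# formalisms unify recursive graphs; flat dictionaries suffice to show the
# combine-unless-contradictory behavior.

def unify(fs1: dict, fs2: dict):
    """Combine two feature structures; return None on contradictory values."""
    result = dict(fs1)
    for feature, value in fs2.items():
        if feature in result and result[feature] != value:
            return None  # contradiction: unification fails
        result[feature] = value
    return result

np_fragment = {"cat": "NP", "num": "sg"}
verb_demands = {"num": "sg", "person": 3}
print(unify(np_fragment, verb_demands))     # {'cat': 'NP', 'num': 'sg', 'person': 3}
print(unify({"num": "sg"}, {"num": "pl"}))  # None: number clash blocks combination
```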

The flexible incremental generator (fig) (Ward 1992) is a connectionist model with an interactive architecture. The task of incremental sentence construction is realized on the basis of an associative network. The input for the formulation process is conceptual information that is provided with an activation value. Contrary to the previous model, this input is not offered incrementally but as a whole. The sentence production process is nevertheless incremental, because sentences are uttered word by word. Activation spreading through the network results in the activation of words; those words with the highest activation will be uttered. The model accounts for speech errors, because non-intended concepts might receive activation as well.

Although the architectural basis and processing strategies of the models are completely different, they show that incremental processing is psychologically plausible. In the first model the time span between incoming fragments corresponds directly to incrementality effects. The second model describes the incremental selection of words. However, no predictions can be derived directly from either model. They differ in their assumption about where incremental processing is involved, namely on the level of conceptualization and formulation or during formulation only. With respect to the processing mechanisms that are used, both models are equally simple, because each system uses only one operation.

It has been claimed that connectionist approaches are more robust than symbolic ones. However, for incrementality effects during sentence construction, this statement is too general. The ipf model shows that a symbolic processing strategy does not necessarily impose constraints on the data to be explained.

3.2 Computer Models of Language Comprehension

Since comprehension begins with the recognition of words, this section deals first with models of spoken word recognition. In spoken word recognition, the models proposed must answer two main questions. First, they must describe how the sensory input is mapped onto the lexicon, and second, what the processing units are in this process.

The Trace model (McClelland and Elman 1986) is an interactive model that simulates how a word will be identified from a continuous speech signal. The problem with this task is that continuity of the signal does not provide clear hints about where a word begins and where it ends. By means of activation spreading through a network that consists of three layers (features, phonemes, and words), the system generates competitor forms, converging to the ultimately selected word. Competition is realized by inhibitory links between candidates.

The Shortlist model (Norris 1994) is based on a modular architecture with two distinct processing stages. It uses spreading activation as well, but in a strictly bottom-up way. Contrary to Trace, it generates a restricted set of lexical candidates during the first stage. During the second stage, the best fitting words are linked via an activation network.

Both models account for experimental data on the time course of word recognition. Assumptions about the direction of activation flow and its nature lead to several differing predictions, but the main difference concerns lexical activation. While Trace assumes that a very large number of items are activated, Shortlist assumes that a much smaller set of candidates is available, so that the recognition of words beginning at different time points is explained differently.
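
The competition dynamics at issue can be caricatured in a few lines of code. The sketch below implements Trace-style lateral inhibition over an invented mini-lexicon (the match score and all parameters are our assumptions, not the models' actual equations); Shortlist would instead first derive a small candidate set bottom-up and let only that shortlist compete.

```python
# Caricature of Trace-style competition among word candidates via lateral
# inhibition. Lexicon, match score, and parameters are invented.

def bottom_up_support(word, heard):
    """Crude match score: fraction of the heard prefix the word accounts for."""
    return sum(a == b for a, b in zip(word, heard)) / len(heard)

def compete(words, heard, steps=20, inhibition=0.3):
    act = {w: bottom_up_support(w, heard) for w in words}
    for _ in range(steps):
        total = sum(act.values())
        act = {w: max(0.0, a + bottom_up_support(w, heard)
                      - inhibition * (total - a))   # rivals inhibit each other
               for w, a in act.items()}
    return act

lexicon = ["cat", "cap", "can", "dog"]
print(compete(lexicon, heard="cat"))  # 'cat' suppresses its rivals; 'dog' drops out
```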

The last model is a model of sentence processing. In sentence processing, one of the fundamental questions is why certain sentences receive a preferred syntactic structure and semantic interpretation.

The Sausage Machine (Fodor and Frazier 1980) is a parsing model that assumes two stages in parsing, with both stages having a limited working capacity. The original idea behind the model is to explain preferences in syntactic processing solely on an architectural basis, by means of the limitations of the working memories.

The Sausage Machine is a quasi-deterministic model, because only one syntactic structure is generated for each sentence. Only if the analysis turns out to be wrong is a reanalysis performed. Since reanalyzing a sentence is a time-consuming process, the system tries to avoid this whenever possible. The model accounts for garden-path effects and the difficulty of understanding multiply center-embedded sentences (like ‘the house the man sold burnt down’). Furthermore, the model explains interpretation preferences by means of the limitations of the working memories. However, it is now understood that the architecture of a system cannot be the only factor responsible for processing preferences; additional parsing principles must be assumed (Wanner 1980). Newer computational models of sentence processing show that an explanation of several phenomena in sentence processing requires an early check of partial syntactic structures against lexical and semantic knowledge (Hemforth 1993).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767005428

An integrative model of leadership behavior

Peter Behrendt, ... Anja S. Göritz, in The Leadership Quarterly, 2017

External consistency and parsimony

The criterion of external consistency refers to a theory's consistency “with observations and measures of real life”. The criterion of parsimony refers to a theory's minimal complexity in accurately accounting for real-life phenomena (Filley et al., 1976, p. 22). When integrating existing models into a new framework, the challenge is to stay consistent while at the same time achieving a higher level of parsimony. As the criteria of external consistency and parsimony go hand in hand, we discuss them jointly in this section. Being based on established psychological theories, IMoLB is consistent with a large body of research outside the core leadership community. In addition, the model integrates existing models and meta-analytical findings on leadership behavior (see sections above; Avolio et al. (1999); Burke et al. (2006); DeRue et al. (2011); Judge et al. (2004); Marrone (2010); Spreitzer (1995); Yukl (2012)). Given that Yukl's taxonomy provides the most comprehensive and integrative overview of current leadership behavior research, it serves as the gold standard to which IMoLB is compared.

IMoLB has reduced the number of meta-categories suggested by Yukl (2012) from four to two. Yukl's meta-categories of change-oriented and external leadership behaviors are integrated by introducing two continua: (1) Task-oriented behaviors can be oriented towards tasks that are change- vs. routine-related, depending on the type of objective that is to be accomplished. We argue that IMoLB's three task-oriented leadership behavior categories cover Yukl's change-oriented leadership behaviors. ‘Advocating change’ enhances the understanding of the current situation and of prevailing risks and further strengthens the motivation for change. ‘Envisioning change’ directly strengthens the motivation for new behaviors in a change situation and can thus be classified as ‘strengthening motivation’. Finally, ‘encouraging innovation’ and ‘encouraging collective learning’ describe behaviors that enhance a new understanding or facilitate new implementation plans. (2) Relations-oriented behavior can be directed towards individuals who are internal vs. external to the team. Indeed, many leadership endeavors include a core team of individuals, more distant in-house team members who are engaged in the endeavor to varying degrees, as well as external individuals such as core customers. Thus, Yukl's external behaviors are accommodated in IMoLB: ‘Networking’ and ‘representing’, for example, promote cooperation and coordination with more external individuals to synchronize their actions with internal needs. Finally, ‘external monitoring’ is essentially task-oriented and enhances the understanding of the situation.

Taken together, IMoLB possesses high integrative power and meets the criteria of external consistency and parsimony. IMoLB integrates a broad set of fundamental theory and research within as well as outside of the leadership behavior literature while at the same time reducing the number of meta-categories from four to two and the number of behavioral categories from 15 to six.

URL: https://www.sciencedirect.com/science/article/pii/S1048984316300479

Self-esteem and adolescent sexual behaviors, attitudes, and intentions: a systematic review

Patricia Goodson Ph.D., ... Sarah C. Dunsmore M.S., in Journal of Adolescent Health, 2006

Spanning more than a century of social-psychological studies, various theoretical perspectives—psychoanalytical theory, existentialism, symbolic interactionism, self-consistency theory, self-identity theory, and self-esteem theories—have described self-esteem’s origins and development [5,24]. Stemming from studies of the “self,” and fitting within the general framework of attitude research [3], definitions of self-esteem vary but are anchored in the notion that self-esteem is a central dimension of self-concept [2]. Broadly defined, self-concept is “the totality of an individual’s thoughts and feelings having reference to himself [sic] as an object” [1] (p. 3). Specifically, self-esteem refers to

… the evaluation which the individual makes and customarily maintains with regard to himself [sic]: it expresses an attitude of approval or disapproval, and indicates the extent to which the individual believes himself to be capable, significant, successful, and worthy [3] (pp. 4–5).

URL: https://www.sciencedirect.com/science/article/pii/S1054139X05002946

Leadership Quarterly Yearly Review

John W. Fleenor, ... Rachel E. Sturm, in The Leadership Quarterly, 2010

4.1.3 Performance improvement

Smither et al. (1995) found that leaders who provided low self-ratings did not improve their performance after receiving low ratings from their direct reports. As indicated by self-consistency theory (Korman, 1976), it appears that leaders are satisfied with feedback that is consistent with their self-perceptions, even if these self-perceptions are negative.

In an investigation of whether agreement among raters influences performance improvement, Johnson and Ferstl (1999) examined how self-ratings change after feedback. Self-ratings and direct report ratings were collected from 1888 managers at two points in time one year apart. Using polynomial regression, the authors found that leaders who over-rated themselves relative to how others rated them tended to improve their performance from one year to the next, while under-estimators tended to decline. Self-ratings tended to decrease for over-estimators and increase for under-estimators, but this effect was not constant throughout the range of self-ratings. According to Johnson and Ferstl, these findings are consistent with the predictions of self-consistency theory (Korman, 1976); however, self-enhancement theory may be a viable alternative explanation (Dipboye, 1977).
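
The polynomial-regression approach mentioned here regresses the outcome on self-ratings, other-ratings, and their higher-order terms rather than on a simple self-minus-other difference score. The sketch below uses simulated data and a generic quadratic response surface; it is an illustration of the technique, not Johnson and Ferstl's actual specification.

```python
# Hedged sketch of the polynomial-regression approach to self-other rating
# agreement: regress the outcome on self (S), other (O), S^2, S*O, and O^2
# instead of on the difference S - O. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1888
S = rng.normal(0, 1, n)                      # self-ratings (standardized)
O = S * 0.5 + rng.normal(0, 1, n) * 0.8      # other-ratings, modestly correlated
# Simulated outcome: over-raters (S > O) improve more, echoing the finding above.
improvement = 0.4 * (S - O) + rng.normal(0, 1, n) * 0.5

X = np.column_stack([np.ones(n), S, O, S**2, S * O, O**2])
coef, *_ = np.linalg.lstsq(X, improvement, rcond=None)
print(dict(zip(["const", "S", "O", "S2", "SxO", "O2"], coef.round(2))))
# A positive S weight and a negative O weight of similar magnitude indicate
# that the outcome tracks the (S - O) direction of the response surface.
```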

URL: https://www.sciencedirect.com/science/article/pii/S1048984310001438

All Thinking is ‘Wishful’ Thinking

Arie W. Kruglanski, ... Karl Friston, in Trends in Cognitive Sciences, 2020

Motivational Substrate of Epistemic Behavior: Some General Implications

The formal consilience between epistemic motivation and active inference brings several opportunities to the table. For example, it offers a way of articulating social-psychological and ethological constructs in terms of Bayesian computations [30,31], of the sort that could be implemented in artificial intelligence. From a neurobiological perspective, the neuronal process theories that accompany active inference enable empirical predictions about epistemic motivation during belief updating, as manifest in things such as event-related potentials and phasic dopamine responses in the brain [32,33]. Furthermore, this convergence delineates a general process at work across all the manifold contents and instances of knowledge formation (see Table 1 for examples). Given such breadth, one would expect it to yield both theoretical and practical implications. We briefly exemplify one such application in the context of cognitive consistency theories.

Table 1. Epistemic Choices: The Nature of Uncertainty, Ambiguity, and Risk

| Certainty | Epistemic motivation | Example | Ambiguity (expected inaccuracy) | Risk (expected complexity) |
|---|---|---|---|---|
| Nonspecific | Approach | Finding out the departure gate of one’s flight | Minimized | |
| Nonspecific | Avoid | Avoiding knowledge of the end of a film | Minimized | |
| Specific | Approach | Hoping for a clean bill of health on the annual check-up | | Minimized |
| Specific | Avoid | Avoiding listening to a TV commentator opposed to one’s views | | Minimized |

Relevant to the role of motivation in the epistemic process is the recent theoretical debate about affective reactions to cognitive consistency and inconsistency [34–36]. The notion that people universally prefer cognitive consistency to inconsistency, and that they react to inconsistency with negative affect, has been a mainstay in the field of social cognition and a staple of cognitive dissonance theory [37], one of the most impactful and highly cited frameworks in all of psychology [38]. However, our present portrayal of the inference process questions the assumption of a universal human need for cognitive consistency.

Briefly, cognitive ‘consistency’ and ‘inconsistency’ between prior and posterior beliefs correspond to the degree of Bayesian surprise: Perfectly consistent information reaffirms prior beliefs, whereas inconsistent information requires revision of one’s prior beliefs, creating a discrepancy between prior and posterior beliefs. Yet, and this is the crucial point, the affective reactions to such updates are determined entirely by the nature of the epistemic motivations that drive active inference in a given instance. For instance, if (consistent or inconsistent) information increased uncertainty about a given state of affairs, this would be upsetting to a knower who was motivated to approach nonspecific certainty (i.e., to reduce ambiguity). By contrast, if (consistent or inconsistent) information resolved uncertainty, the (nonspecific) certainty-questing knower should be happy. A failure to make (subjectively) precise inferences, either about states of affairs or about the policies ‘I am currently pursuing’, can readily be associated with emotional constructs such as stress, anxiety, and the like. This formulation of self-consistency in self-evidencing terms can, on one account, be unpacked in terms of psychopathology, leading to fairly detailed models of psychiatric conditions [39,40]. In short, negative affect in epistemic behavior appears tightly coupled to irreducible uncertainty, provided the knower craves nonspecific certainty about the topic in question.
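
The identification of (in)consistency with Bayesian surprise can be made concrete for the simplest case of a binary belief, where surprise is the KL divergence from prior to posterior. The Beta-Bernoulli setup below is a standard textbook illustration chosen by us, not a model taken from the article, and the observation counts are invented.

```python
# Illustrative sketch: cognitive (in)consistency as Bayesian surprise,
# measured as KL(posterior || prior) for a Beta belief about a binary claim.
import numpy as np
from scipy.stats import beta

def kl_beta(post, prior, n=200_000):
    """Numerical KL divergence between two Beta distributions."""
    x = np.linspace(1e-5, 1 - 1e-5, n)
    f = post.pdf(x) * np.log(post.pdf(x) / prior.pdf(x))
    return f.sum() * (x[1] - x[0])

prior = beta(8, 2)             # strong prior belief that the claim is true
confirmed = beta(8 + 4, 2)     # 4 confirming observations: mild revision
disconfirmed = beta(8, 2 + 4)  # 4 disconfirming observations: large revision

print(round(kl_beta(confirmed, prior), 3))     # small surprise (consistent input)
print(round(kl_beta(disconfirmed, prior), 3))  # much larger surprise (inconsistent input)
```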

However, the knower might not crave such certainty. They might prefer a state of ignorance (uncertainty) on this issue that would leave their ‘options open’ and foreclose binding commitment to a judgment. (See [41] for a worked example and [42] for an empirical demonstration.) In such a case, the knower would be pleased rather than upset by cognitive inconsistency. They might be similarly pleased if the ensuing uncertainty prevented the formation of an undesirable certainty, that is, if it averted the risk that such certainty would entail. Conversely, they might be unhappy if the uncertainty precluded the formation of a specific, pleasing certainty that they were motivated to pursue.

All these affective reactions are moderated by the relative strength or precision of the epistemic motivations involved. In these terms, the voluminous research on cognitive consistency and inconsistency has confounded (i) the epistemic impact of belief updating and (ii) the affective value of those beliefs to the knower: Reduced belief strength in a proposition that denotes a positive state of affairs (for the knower) will induce negative affect in proportion to the strength of the desire to have that state of affairs come true. Increased belief strength in such a proposition will induce positive affect, again in proportion to the desire to have that state materialize. Similarly, increased belief strength would induce positive affect for someone who desired certainty on a topic, and negative affect for someone who shunned such certainty, again in proportion to the strength of those desires. As the foregoing implies, the affective consequence of belief updating derives from the degree to which the knower’s epistemic motivations are served or undermined by consistent or inconsistent information, rather than from consistency or inconsistency as such.

URL: https://www.sciencedirect.com/science/article/pii/S1364661320300796

Enhancing social interactions for youth with autism spectrum disorder through training programs for typically developing peers: A systematic review

Allison M. Birnschein, ... Theodore S. Tomeny, in Research in Autism Spectrum Disorders, 2021

4.3.1 Anti-stigma training programs

Ten of the included studies aimed to reduce TD peer stigma of ASD through the provision of information (Campbell et al., 2004, 2019; Dachez & Ndobo, 2018; Engel & Sheppard, 2020; Gillespie-Lynch et al., 2015; Ranson & Byrne, 2014; Scheil et al., 2017; Silton & Fogel, 2012; Staniland & Byrne, 2013; Swaim & Morgan, 2001). Descriptive information (i.e., information about ASD that highlights similarities between children with ASD and TD children) was included in eight of the included studies. Dachez and Ndobo (2018) argue that descriptive information should improve peer perceptions of ASD because cognitive consistency theory holds that when peers believe a person to be similar to them, they are more likely to perceive interactions with that individual as positive (Heider, 1958). Swaim and Morgan (2001) were the first to assess the impact of descriptive information alone on peer perceptions of ASD. Eight studies that followed Swaim and Morgan (2001) included explanatory information (i.e., information explaining that ASD is a biological and neurodevelopmental disorder) in addition to descriptive information. Campbell et al. (2004) posit that providing explanatory information to peers may reduce negative responses toward individuals with ASD by modifying the TD peers’ perceptions of how much control and responsibility children with ASD have over their diagnosis and behavior, which they argue is supported by social attribution theory (Weiner & Graham, 1984). Only three studies assessed the impact of providing descriptive and explanatory information to peers through a variety of methods (Campbell et al., 2004, 2019; Scheil et al., 2017).

Other researchers built upon this foundation by adding strengths information (e.g., potential savant or sensory/perceptual abilities in children with ASD; Silton & Fogel, 2012), directive information (i.e., specific strategies for engaging peers with ASD in social interactions; Dachez & Ndobo, 2018; Ranson & Byrne, 2014; Staniland & Byrne, 2013), and an interactive experience with an individual with ASD (i.e., observing a conversation with an individual known to have an ASD diagnosis; Dachez & Ndobo, 2018). Strengths information was included because Silton and Fogel (2012) argue that peer expectations of individuals can influence behavioral intentions, which is supported by the affect/effect theory (Rosenthal, 1989). Directive information was included because researchers believe that children with ASD may learn social interaction behaviors by observing the behavior modeled by others. Dachez and Ndobo (2018) posit that this is consistent with Bandura's (1977) social learning theory and argue that this practice can result in improved relationships with TD peers. Finally, researchers provided an interaction with an individual with ASD to reduce potential bias toward peers considered to be unlike the self. Dachez and Ndobo (2018) similarly postulate that this practice is supported by contact theory, which proposes that more time spent with individuals unlike the self will improve perceptions of the other (Allport, 1954).

Two studies utilized the “Understanding Our Peers” curriculum developed by Staniland and Byrne (2013) to provide didactic information including: common impairments in ASD, known causes of ASD, lack of control over behavior, effective strategies for interacting with peers with ASD, impacts of having ASD, similarities and differences in the presentation of ASD in boys and girls, and similarities between the self and those with ASD (Ranson & Byrne, 2014; Staniland & Byrne, 2013). Similarly, Gillespie-Lynch et al. (2015) provided didactic information including: diagnostic criteria of ASD with the DSM-5, early identification signs of ASD, prevalence of ASD, ethnic and gender diagnostic considerations, heterogeneity of ASD and intelligence in ASD, etiology, differentiated effects of ASD on cognitive and affective empathy, intervention approaches, common challenges for adults with ASD, effective ways to teach individuals with ASD, the concept of neurodiversity, and the future for individuals with ASD.

Two separate studies utilized the Kit for Kids curriculum (Organization for Autism Research [OAR], 2012) which introduces TD peers to a character diagnosed with ASD (Campbell et al., 2019; Scheil et al., 2017). The curriculum teaches peers about ASD through descriptions of this character’s symptoms and corrects misconceptions about ASD (e.g., people with ASD do not talk). Additionally, the program provides TD peers with specific strategies to use when interacting with classmates with ASD. Engel and Sheppard (2020) utilized a similar approach, introducing participants to a fictional character with characteristics of ASD after a short Sesame Street episode that described common characteristics of ASD.

URL: https://www.sciencedirect.com/science/article/pii/S1750946721000593

How can I be more consistent with my attitudes?

Attitude–behavior consistency exists when there is a strong relation between opinions and actions. For example, a person with a positive attitude toward protecting the environment who recycles paper and bottles shows high attitude–behavior consistency.

Is there consistency between attitude and behavior? Explain.

Attitude and behaviour are consistent when:

• the attitude is strong and occupies a central place in the attitude system
• the person is aware of her/his attitude
• there is very little or no external pressure for the person to behave in a particular way

What factors cause consistency between attitude and behavior?

Psychologists have found that there is consistency between attitude and behaviour when:

(i) the attitude is strong and occupies a central place in the attitude system;
(ii) the person is aware of his/her attitudes;
(iii) the person's behaviour is not being watched or evaluated by others.

Do attitudes consistently predict behavior?

Despite these early views, attitudes do not always predict behavior. Often, people express positive attitudes toward an activity, yet admit that they seldom engage in the activity (Ajzen & Fishbein, 1977, 1980).