Graphic associative test of attitudes as a convenient implicit measurement tool for mass polls

Abstract

Several recent elections and referendums were marked by dramatic failures of electoral forecasts based on mass polls. In response to the dissatisfaction of the public and politicians, alternative approaches were developed, such as prediction markets, the Implicit Association Test (IAT), expectation-based forecasts and so on. IAT proves to be one of the most efficient ways to enrich forecasting models and improve their accuracy. The problem is that the original form of IAT implies rules too rigid to be applied in a traditional mass poll. As a thorough, laboratory-style measurement of reactions to stimuli, IAT requires a special environment: for instance, nothing should disturb or distract respondents from performing the experimental tasks. Such an environment is difficult to provide during a mass poll’s fieldwork; thereby, researchers usually implement IAT on small samples. This article presents the Graphic Associative Test of Attitude (GATA) as a tool for mass polls. It is a functional analog of IAT developed by the author and tested in a wide range of pre-electoral mass polls in Russia. GATA is easy to use even with inexperienced interviewers, and its simple and intuitively clear tasks do not create additional barriers for respondents and do not decrease the response rate. At the same time, GATA reliably identifies implicit factors of behavior and helps to improve the accuracy of forecasts. At the theoretical level, the study supports the ‘dual attitude’ concept within the structural theory of attitude.

Full Text

Problems of electoral forecasting as an example of the malfunctioning of mass polls

Prediction of behavior occupies vast domains in almost every sphere of social studies, from small-group research to marketing and electoral forecasts. Unfortunately, despite the impressive progress of the pollster industry in the 20th century, its current state is rather problematic [38] (some authors even speak of a ‘crisis of sociology’, with fundamental changes in the very ontology of society as the most probable driver of this situation [42]). Electoral research provides a variety of resonant and instructive examples of forecast failures. An incomplete list of such industry-level failures includes the 2014 parliamentary elections in Moldova, the 2015 parliamentary elections in the UK, the 2015 Knesset elections in Israel, the 2015 referendum in Greece, the 2015 presidential elections in Poland and Belarus, the 2016 Brexit referendum in the UK, the 2016 presidential elections in the USA, and the 2017 parliamentary elections in the UK. In some cases, failures to predict the elections’ winner led to industry-level investigations; the reports of the Market Research Society and the British Polling Council in the UK and of the AAPOR in the USA are probably the most notable examples [23; 40]. These reports summarize a wide range of possible sources of errors, including sampling bias, late-swing effects, deliberate misreporting, etc. However, in general, the reports do not address the problem of the validity of the ‘intentions’-based approach itself [see, e.g.: 9–11; 32; 43].

The ‘intentions’-based approach itself seems quite vulnerable to bias. Most pollsters would agree that voters’ behavior can be and often is determined by factors that are poorly recognized by the actors and/or are misreported. These methodological problems are well known and usually referred to as “lack of introspection” and “deliberate misreporting”. The current situation is generally taken as an objective limitation of the methodology which cannot be overcome in mass surveys. Rogers and Aida [38] compared poll data on voters’ “intentions to vote” with their actual turnout: they studied only the voters for whom they had data on voting (‘sample’ equal to ‘universe’) and thereby eliminated any possible effect of “sampling error”. Thus, any revealed mismatch of declared intention and actual behavior could be regarded as determined by validity problems. They showed a dramatic difference between declared intentions and actual behavior: 13 % of those who declared an ‘almost certain’ intention to vote did not in fact vote, while 55 % of those who did not intend to vote actually voted. Moreover, the authors show that actual behavior can be predicted relatively reliably by the fact of previous voting: since respondents know whether they voted in the recent election or not, they could effortlessly make an accurate forecast for the next one. But they do not. Unsurprisingly, the last decades have brought to life many alternative models [2; 4; 5; 7; 15–18; 25–27; 34; 39; 41].

The same applies to the discussion of electoral forecasts in Russia; however, the theoretical domain of these debates is quite narrow. Despite the practical experiments conducted worldwide (panels, expectation polls, prediction markets, measurement of implicit drivers), their findings still lack comprehension and interpretation, with rare exceptions [see, e.g.: 6; 30; 45]. In general, the Russian research discourse follows international trends in implementing new concepts and models in the local electoral environment. Many authors admit the urgent need for the further development of the theory and practice of electoral forecasting [6; 10; 30]. The key driver here is not the general inaccuracy of the existing electoral models but rather the instability of their accuracy, which manifests itself in unexpected and unexplained failures.

Thus, judging by the discussions of electoral forecasts both in Russia and abroad, mass polls remain the main method for almost every prediction model, primarily due to the potential access to the inner drivers of voting behavior within the general “single source data” concept. For now, there seems to be no real prospect of mass polls being displaced by any ‘alternative’. Unfortunately, mass polls have specific problems:

  • Errors in the sample design and sampling: for instance, voters who live outside the electoral district and vote remotely are not covered by the survey; or, on the contrary, those non-voters who live in the district are interviewed.
  • Access problems: for instance, supporters of the candidate represent an audience that traditionally has a higher/lower response rate.
  • Late shift: at the time of the survey, voters hold one opinion and later change it. A special case of this problem is the so-called “lack of introspection”: if respondents are not involved in the political process, they may not understand their own political sympathies and (quite sincerely) answer incorrectly.
  • Deliberate misreporting: respondents can lie in the survey and later vote according to their real preferences.

One of the most promising ways to improve behavior prediction models is to introduce methods that can measure not only explicit attitudes and/or intentions but also implicit ones. Addressing the unconscious factors of behavior makes it possible to decrease or even eliminate the biases of deliberate misreporting, late shift and lack of introspection [3; 8; 20; 37]. The most popular technique in this field is the Implicit Association Test (IAT).

Unfortunately, IAT has numerous limitations. For instance, Roccato and Zogmaister, whose work is one of the most inspiring in the field, started from the criticism of ‘conventional’ (explicit) methods [37]. Their study had two methodological goals: to test the IAT’s external validity on the data of a ‘conventional’ mass survey and to check the ability of IAT data to improve the accuracy of electoral forecasts. A representative panel of 1377 respondents was questioned at the pre- and post-election stages of the 2006 Italian national elections. At the first stage, IAT followed the typical pre-electoral (explicit) questions: respondents passed the test in a computer-assisted personal interview (CAPI). At the second stage, respondents made self-reports on whether they visited voting stations and which candidates they voted for. The study collected data on the explicit and implicit attitudes to candidates, intentions to vote and real (although self-reported) voting. Implicit measurements showed less correlation with actual behavior than explicit ones (especially intentions). At the same time, adding implicit parameters to the ‘conventional’ explicit-based prediction model slightly improved the accuracy of its forecasts. Thus, the authors concluded that they could not rely on IAT as a convenient method for mass polls, primarily due to its disproportionate expensiveness and instrumental complexity [37. P. 272].

Based on the measurement of the speed of reactions to stimuli, IAT requires a special environment for the participants: nothing should distract them, and breaks within thematic blocks are not allowed. Such an environment is difficult to ensure during a mass poll’s fieldwork. Moreover, to take part in the experiment, one needs basic computer skills: stimuli appear on the computer screen, and the participant should press the appropriate key on the keyboard as quickly as possible. This creates additional difficulties in a representative survey, for example, a risk of a low response rate in some social groups. Finally, the test requires thorough administration by the interviewer, which means possible additional biases due to the interviewers’ uncontrollable influence.

Is there a convenient alternative to IAT? If so, this alternative should be simple (no additional equipment, applications, etc.; no additional training for interviewers), clear (no additional pressure on the response rate; the test tasks are intuitively obvious and easy to do for any respondent in any sociocultural environment) and valid (it identifies implicit effects in a reliable and distinctive way). The task of finding a tool meeting all these requirements will remain relevant and of practical importance as long as mass polls are a leading method in electoral forecasting and other spheres. This paper methodologically assesses a prospective implicit test which seems acceptable for the ‘common’ pollster.

GATA: design and measurement

Given the practical limitations of the poll methodology, GATA was developed in 2015 as a functional equivalent and prospective substitute for IAT [9–11]. It is a modified version of Etkind’s Color Test of Attitude (ECT) [12], which is itself a modification of the Lüscher test [29]. Initially, ECT was used to question people with cognitive dysfunctions who could not understand verbal constructions, i.e., it focused on addressing the unconscious structures of the mind. In ECT, respondents associate simple concepts like relatives, mates, friends, etc. with the colors of Lüscher’s ‘small’ set; then respondents rank the colors from ‘pleasing’ to ‘unpleasing’; thus, an individual preference-rejection scale is developed to measure the participants’ implicit attitude to the tested objects.

In politics, colors and color schemes are often meaningful symbols used for political identification. For this reason, we substituted the stimuli of the original ECT with 8 graphic shapes from the Markert test [31]. These shapes have no political connotations and can be used to differentiate electoral alternatives (Fig. 1). The GATA procedure is as follows. Field stage: (a) the set of graphic shapes is presented to the respondent on the screen of the CAPI device; (b) the respondent associates the graphic shapes with the tested objects, represented by verbal concepts which the interviewer reads in random order; (c) the respondent’s attention is redirected to extraneous issues (several typical blocks of ‘explicit’ questions) to ensure that he does not memorize the choices made at stage b; (d) the respondent ranks the graphic shapes from the most attractive (“a beautiful shape one wants to gaze at”) to the least attractive (“an unpleasant shape one does not want to gaze at”). Analytical stage: (e) an “individual scale” of preference is formed on the basis of the d-stage ranking; (f) according to the “individual scale”, an implicit preference score is attributed to every concept based on the b-stage associations.

Figure 1. Examples of K. Markert’s Test stimuli used to measure voters’ implicit attitudes

For example, suppose a respondent chose shape “C” as the most preferable and shape “D” as the least preferable; shape “C” takes the highest score of 1, and shape “D” the lowest score of 8 on the respondent’s “individual scale”. The researcher then selects all tested objects associated by this respondent with these shapes and ascribes the valency of the implicit attitude to the concepts: “extremely positive” for all concepts associated with shape “C” and “extremely negative” for those associated with shape “D”. This algorithm is repeated for all shapes and all concepts under testing (this is only a scheme, which does not limit the number of shapes and tested objects and allows a variety of summative-scale models for estimating implicit valency scores). As a result, every tested object gets a score on an ordinal scale, regardless of which particular shape is preferred or rejected by each respondent due to his psychological, cultural, national, gender, age or other characteristics.
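To make the analytical stage concrete, a minimal Python sketch of steps (e) and (f) is given below; the function and variable names are illustrative and are not part of any published GATA toolkit.

```python
# Minimal sketch of the GATA analytical stage (e-f); names are illustrative.

def implicit_scores(ranking, associations):
    """
    ranking: shape labels ordered from the most to the least attractive
             for one respondent (stage d), e.g. ["C", "H", ..., "D"].
    associations: dict mapping each tested object (verbal concept)
                  to the shape this respondent chose for it (stage b).
    Returns a dict mapping each tested object to its implicit preference
    score: 1 = associated with the most preferred shape, 8 = with the least.
    """
    # Stage e: build the individual preference scale from the ranking.
    individual_scale = {shape: rank for rank, shape in enumerate(ranking, start=1)}
    # Stage f: attribute the scale value of the associated shape to each object.
    return {obj: individual_scale[shape] for obj, shape in associations.items()}


# The respondent from the example above: shape "C" ranked first, shape "D" last.
ranking = ["C", "H", "B", "A", "G", "E", "F", "D"]
associations = {"Candidate X": "C", "Candidate Y": "D", "Party Z": "G"}
print(implicit_scores(ranking, associations))
# {'Candidate X': 1, 'Candidate Y': 8, 'Party Z': 5}
```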

In the field, the ranking of graphic shapes (stage d) takes about 1–1.5 minutes, and the testing of concepts (stage b) up to a quarter of a minute per concept; the latter speeds up to a few seconds as the stimuli become familiar to the respondent. Thus, the entire set of tested objects (20–30 words or short phrases) takes up to 3–4 minutes of the interview. Instrumentally, the technique can be used in paper-based, CAPI or online interviews but not in telephone surveys. There is no need for special training of interviewers and no additional load on respondents, who, as we can judge from the recorded CAPI interviews, regard GATA as a game and a chance to rest from the complicated explicit questions. Thus, two out of three attributes of a convenient implicit-factors measurement (clearness and simplicity) are evident; we believe that for any practitioner this is quite clear even intuitively. However, if any meaningful discussion arises on the barriers to the instrumental implementation of GATA, we will join it with enthusiasm. The question is whether GATA provides ‘better’ or at least ‘good enough’ data.

The article aims at introducing GATA as a simple but effective instrument for measuring implicit attitudes in mass polls. Our main hypotheses are as follows: H1) GATA ensures considerable discriminative power to identify an attitude to a set of tested objects; H2) this attitude is a true ‘implicit attitude’ that cannot be reduced to the explicit one. Consequently, at the empirical level: H2a) the ‘implicit attitude’ as revealed by GATA will mismatch the conventionally measured explicit one; H2b) the implicit (GATA) and explicit associations with other variables in a basic ‘set of beliefs’ [14] will differ in scale and structure. Methodologically, if these hypotheses prove that GATA is an associative test capable of revealing the implicit attitude, it fits into the structural theory of attitude, supporting the assumption of the separate origin of the attitude’s components, which represents a theoretical contribution. According to Perugini [36], there are three possible models of the theoretical structuring of the implicit and explicit aspects of attitude: the additive pattern (there is a single attitude, and our perception of its explicit and implicit forms is the result of the measurement’s artificial distinctions); the double-dissociation pattern (there are two independent attitudes affecting, respectively, spontaneous and intentional behavior); and the interactive pattern (there are two independent attitudes, and behavior is a coherent result of their interplay). Thus, H2, via H2a and especially H2b, can additionally support the ‘double independent attitudes’ model. For practitioners, this would mean the validation of a new instrument for mass polls.

Methods for examining the GATA results include the discriminant-power analysis of the implicit scale and its comparative analysis with the ‘feeling thermometer’ and conventional explicit-measurement techniques. The article is based on the data of several national and regional election polls conducted by WCIOM in the 2016–2018 Russian electoral cycle. Study 1: national panel-based poll at the 2016 parliamentary election: CAPI, multistage sampling of households with a randomization procedure within households (N = 2304; sample standard error 2.25 %), fieldwork in August–September, ended a week before the voting day. Study 2: governor election in one of the regions in 2018: the same method (N = 1604; 3.25 %), fieldwork on September 3–7, ended two days before the voting day. Study 3: inter-election survey for the 2018 presidential election: the same method (N = 1606; 3.4 %), fieldwork in March 2017, a year before the voting day. Study 4: national poll during the 2018 presidential election: the same method (N = 1629; 3.4 %), fieldwork on March 10–11, a week before the voting day. Study 5: four separate polls at the governors’ elections in four regions in 2017: the same method (N = 600–606 each, 2407 in total; up to 4 %), fieldwork in September 2017, ended two days before the voting day.
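For orientation only, the sample standard errors quoted above can be compared with the textbook maximum sampling error of a proportion under simple random sampling; the quoted figures are somewhat larger, which is what one would expect for a multistage household sample (design effect above 1). The snippet below is a generic illustration, not the authors’ error calculation.

```python
# Textbook 95 % margin of error for a proportion under simple random sampling.
import math

def srs_margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (2304, 1604, 1606, 1629):
    print(n, f"{srs_margin_of_error(n):.2%}")
# 2304 -> 2.04 %, 1604 -> 2.45 %, 1606 -> 2.45 %, 1629 -> 2.43 %
```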

Discriminative power

The discriminative power of the GATA technique was tested by comparing the preference distribution for the graphic shapes with the same distribution for the tested objects. The method is based on the assumption that the set of stimuli is neutral for respondents, while the set of tested objects is not. The corresponding data is presented in Tables 1–2 (Study 3).

Table 1. Distribution of positive and negative attitudes to Markert’s graphic shapes (b stage, % of choices)

| Graphic shapes | Positive | Negative |
|---|---|---|
| H | 18.2 % | 10.2 % |
| D | 13.5 % | 11.8 % |
| B | 12.6 % | 11.1 % |
| A | 12.5 % | 10.6 % |
| G | 12 % | 15.3 % |
| E | 11.7 % | 10.9 % |
| C | 10.8 % | 12.5 % |
| F | 8.7 % | 17.6 % |
| Mean | 12.5 % | 12.5 % |
| Range | 9.5 % | 7.3 % |
| StDev | 2.7 % | 2.6 % |

The values in Table 1 show a relatively smooth distribution: only 2 graphic shapes form the extremes of the scale. In the next steps of the GATA procedure (d–f), these data are transformed into the valency score of the attitude to every tested object (Study 3).

Table 2. Distribution of positive and negative attitudes to the Russian political leaders (d–f stages, % of choices)

| Prospective candidates | Positive | Negative |
|---|---|---|
| G. Zyuganov | 14.7 % | 19.1 % |
| V. Zhirinovsky | 16 % | 18.3 % |
| V. Putin | 26.4 % | 9.3 % |
| S. Mironov | 15.2 % | 16.3 % |
| A. Navalny | 11.7 % | 20.8 % |
| D. Medvedev | 16 % | 16.2 % |
| Mean | 16.7 % | 16.7 % |
| Range | 14.7 % | 11.5 % |
| StDev | 5 % | 4 % |

The comparison of the data in Tables 1 and 2 shows that the technique ensures sufficient discriminant power to differentiate the tested objects: the distinctions between the objects are stronger than the distinctions between the stimuli and, thus, cannot be explained by the perception of the stimuli alone.
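The spread measures reported in Tables 1–2 can be reproduced directly from the positive-choice columns; the short fragment below does so (the tables’ StDev rows appear to correspond to the sample standard deviation).

```python
# Reproducing the Range and StDev rows of Tables 1-2 from the "Positive" columns.
import statistics

shapes_positive  = [18.2, 13.5, 12.6, 12.5, 12.0, 11.7, 10.8, 8.7]   # Table 1
objects_positive = [14.7, 16.0, 26.4, 15.2, 11.7, 16.0]              # Table 2

for label, values in (("graphic shapes", shapes_positive), ("tested objects", objects_positive)):
    value_range = max(values) - min(values)
    st_dev = statistics.stdev(values)          # sample standard deviation
    print(f"{label}: Range = {value_range:.1f} pp, StDev = {st_dev:.1f} pp")
# graphic shapes: Range = 9.5 pp, StDev = 2.7 pp
# tested objects: Range = 14.7 pp, StDev = 5.0 pp
```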

Table 3 presents the aggregated data comparing the relative sensitivity of GATA to well-known and almost unknown objects. We considered candidates for the presidency (Study 3) and parliamentary and non-parliamentary parties (Study 1). The preliminary calculations were similar to those behind Tables 1–2.

Table 3. Comparative differentiation of more and less familiar objects compared to the differentiation of the graphic shapes

| | Candidates for presidency | Parliamentary parties | Non-parliamentary parties |
|---|---|---|---|
| Differentiation of the graphic shapes | | | |
| Positive attitude, Range | 9.5 % | 10.9 % | 10.9 % |
| Positive attitude, StDev | 2.7 % | 3.2 % | 3.2 % |
| Negative attitude, Range | 7.3 % | 8.5 % | 8.5 % |
| Negative attitude, StDev | 2.6 % | 3.1 % | 3.1 % |
| Differentiation of the tested objects | | | |
| Positive attitude, Range | 14.7 % | 10.2 % | 5 % |
| Positive attitude, StDev | 5 % | 4.2 % | 2.2 % |
| Negative attitude, Range | 11.5 % | 9.2 % | 4.3 % |
| Negative attitude, StDev | 4 % | 3.9 % | 1.5 % |
| Objects’ value minus shapes’ respective value | | | |
| Positive attitude, Range | 5.2 % | -0.7 % | -5.9 % |
| Positive attitude, StDev | 2.3 % | 1.0 % | -1 % |
| Negative attitude, Range | 4.1 % | 0.7 % | -4.2 % |
| Negative attitude, StDev | 1.4 % | 0.8 % | -1.7 % |

The data in Table 3 proves GATA’s ability to differentiate attitudes to objects, with differentiation decreasing for objects that are less significant for respondents. Thus, the data proves H1: GATA discriminates between objects of attitude but is limited by the respondents’ knowledge of these objects. The question is whether the revealed attitude is implicit, or whether the GATA results present, albeit in an extravagant form, the common and well-known explicit attitude.

GATA results cannot be reduced to the explicit attitude

Since the non-correspondence of explicit and implicit attitudes is widely recognized [13; 24; 35], it is necessary to identify whether the ‘fraction’ of attitude detected with GATA is the same as that detected with the conventional technique of attitude measurement. To find out whether both ‘fractions’ are of the same nature, we measured the attitude of voters to several candidates in the 2017 and 2018 governors’ elections in Russia. The explicit attitude was measured with the ‘feeling thermometer’ [1; 19; 22; 28; 44], the implicit attitude with GATA [10]. The feeling thermometer was selected due to its promise to measure the ‘deep’, ‘emotional’, ‘non-reasonable’ attitude, which should also be the result of GATA. At the same time, it is an ‘explicit’ measurement, since the respondent’s reaction is under clear conscious control, i.e., subject to effects such as deliberate misreporting, lack of introspection, and so on. Some typical results are presented in Figures 2–3 based on the data of Studies 2 and 5.

Figure 2. Mismatch of explicit and implicit attitudes (Governor election 2017)

Figure 3. Mismatch of explicit and implicit attitudes (Governor election 2018)

The data shows quite a common picture for all the studied cases (4 incumbents, 8 pretenders): for the incumbent, explicit attitudes shift to the positive end of the scale (arrows down), while implicit attitudes shift to the negative end (arrows up); for the pretender, implicit attitudes (30 % of respondents) shift to the positive end, while explicit attitudes do not. The explicit scale is dramatically shifted to the center, which may reflect the respondents’ intention to hide in the ‘neutral’ zone due to deliberate misreporting or lack of introspection.

In general, the study confirms that the incongruity of explicit and implicit attitudes to the same candidate is not unusual. Figure 4 represents the mismatch of the GATA and feeling-thermometer data for the most popular opposition leader G. Zyuganov compared to President V. Putin (Study 3). The opposition leader Zyuganov has stronger implicit support than respondents show explicitly, while the attitude to Putin is consistent, reflecting the national consensus confirmed by a range of sources, from mass polls to elections.

The mismatch of the GATA results with the ‘verbal’ explicit attitude is even more dramatic (Table 5). The explicit attitude was measured with the question “Victory of which candidate for presidency suits your interests the most?”. Supporters of the three leading candidates in 2018 were grouped by the explicit attitude (columns) and split by the implicit attitude (Study 4). The data in Table 5 shows a consistent attitude only to Putin as a candidate. For Grudinin, only 68 % of his ‘explicit supporters’ have a positive implicit attitude to him; for Zhirinovsky, even less (35 %). Quite unexpectedly, 31 % of Grudinin’s and 61 % of Zhirinovsky’s ‘explicit supporters’ have a positive implicit attitude to Putin.

Figure 4. Cases of experimental and control data match/mismatch

Table 5. Correspondence of explicit and implicit attitudes to the most popular candidates, 2018

| Positive implicit attitude | Explicit preference: P. Grudinin | Explicit preference: V. Zhirinovsky | Explicit preference: V. Putin |
|---|---|---|---|
| P. Grudinin | 67.7 % | 3.8 % | 0.4 % |
| V. Zhirinovsky | 1 % | 35 % | 0.2 % |
| V. Putin | 31.3 % | 61.3 % | 99.4 % |
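For readers who want to replicate this kind of breakdown, the following pandas sketch builds a column-percentage crosstab of the same shape as Table 5; the respondent-level dataset here is fabricated purely for illustration.

```python
# Illustrative column-percentage crosstab (explicit preference vs positive
# implicit attitude); the five-respondent dataset is invented for the example.
import pandas as pd

df = pd.DataFrame({
    "explicit_preference": ["Putin", "Putin", "Grudinin", "Zhirinovsky", "Grudinin"],
    "positive_implicit":   ["Putin", "Putin", "Grudinin", "Putin",       "Putin"],
})

# Share of each implicit-attitude group within every column of explicit supporters.
table = pd.crosstab(df["positive_implicit"], df["explicit_preference"],
                    normalize="columns") * 100
print(table.round(1))
```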

The aggregated data of the same analysis in Studies 1, 3, 4 and 5 is presented in Tables 6–7 and allows to conclude that the mismatch of the implicit (GATA) and explicit (conventionally measured) attitudes is common. Thus, H2 (“the implicit attitude revealed by GATA will mismatch the conventional measurement of the explicit attitude”) was proved.

Table 6. Shares of the consistent attitude in the group of ‘explicit supporters’ in the national surveys (parliamentary and presidential elections; study number in brackets)

| Objects of attitude | Share of the consistent group |
|---|---|
| V. Putin (4) | 99.4 % |
| V. Putin (3) | 85 % |
| S. Mironov (3) | 72.4 % |
| P. Grudinin (4) | 67.7 % |
| United Russia (1) | 67.7 % |
| G. Zyuganov (3) | 67.2 % |
| V. Zhirinovsky (3) | 64.8 % |
| LDPR (1) | 62.2 % |
| G. Yavlinsky (3) | 57.1 % |
| CPRF (1) | 55.7 % |
| Just Russia (1) | 52 % |
| V. Zhirinovsky (4) | 35 % |
| Mean | 65.5 % |
| StDev | 16.2 % |
| Range | 64.4 % |
| Without V. Putin: | |
| Mean | 60.2 % |
| StDev | 10.9 % |
| Range | 37.4 % |

Table 7. Shares of the consistent attitude in the group of ‘explicit supporters’ (governors’ elections)

| Objects of attitude (Study 5) | Share of the consistent group |
|---|---|
| A. Didenko | 72.7 % |
| K. Kuvaishev | 69.7 % |
| M. Paramonov | 66.7 % |
| S. Zhvachkin | 66.7 % |
| O. Postnitsov | 62.5 % |
| M. Reshetnikov | 62.5 % |
| D. Mironov | 58.4 % |
| A. Parfenchitsov | 50 % |
| I. Peteliaeva | 50 % |
| I. Filatova | 44.4 % |
| D. Ionin | 40 % |
| Mean | 58.5 % |
| StDev | 10.8 % |
| Range | 32.7 % |

The descriptive statistics for the different sets of data (outliers omitted) are almost the same, which is probably determined by the natural mechanics of the interplay of implicit and explicit drivers, i.e., H2a gets additional and strong proof.

GATA and conventional explicit measurements differ in their factors

The above-mentioned discrepancy is determined by the quite autonomous origin of the fractions of attitude revealed by GATA and by conventional tests, and especially by the incongruity of the structures of associations of the GATA and control-method results with the basic ‘set of beliefs’ in terms of TRA/TPB. To prove this, we compared the associations of the GATA variable and the explicit-attitude variable (“Victory of which candidate for parliament/presidency suits your interests the most?”) with a set of other common ANES-origin variables. A chi-square test at the 0.05 threshold was used to confirm an association. The aggregated results are presented in Table 8 (summarized data of Studies 1 and 4).
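A minimal sketch of this screening step is shown below, reading the 0.05 threshold as the significance level of a chi-square test of independence; the variables and data are invented for illustration.

```python
# Chi-square screening of one attitude variable against one ANES-style variable.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "implicit_attitude":  ["positive", "negative", "positive", "neutral", "negative", "positive"],
    "approve_government": ["yes", "no", "yes", "no", "no", "yes"],
})

contingency = pd.crosstab(df["implicit_attitude"], df["approve_government"])
chi2, p_value, dof, expected = chi2_contingency(contingency)

# Count the pair as associated if the test is significant at the 0.05 level.
has_association = p_value < 0.05
print(round(chi2, 2), round(p_value, 3), has_association)
```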

Table 8. The incongruity of the associations’ structure for the true explicit and presumably implicit components of attitude (number of associations with ANES variables)

| Category of associations | Political and social | Economy | Demography | Total |
|---|---|---|---|---|
| Only the explicit component has an association | 20 | 1 | 6 | 27 |
| Both have an association | 40 | 5 | 2 | 47 |
| Only the implicit component has an association | 5 | 0 | 0 | 5 |
| Both have no association | 16 | 1 | 3 | 20 |
| Total | 82 | 7 | 11 | 99 |

Thus, 47 out of 99 typical ANES variables have an association with both variables (implicit and explicit attitudes), 27 variables have an association only with the explicit variable, and 5 only with the implicit one. It is not surprising that explicit attitudes are deeply rooted in the variables measured with a consciousness-addressing questionnaire and are related to 74 (47+27) variables in total. The implicit attitude is associated with the traditional ‘cognitive’ variables relatively poorly, with 52 (47+5) associations. Thus, 32 (27+5) variables represent a domain of mutually exclusive associations, i.e., almost a third of the considered associations is generated by forces acting separately on implicit and explicit attitudes.

The nature of these forces can be assessed with the data in Table 9, which presents the Somers’ D values for the associations (only those with D of at least 0.05, with the implicit/explicit variable taken as dependent as well as independent). The first part of the table presents the variables-factors of the implicit attitude, the second part those of the explicit attitude, and the central part those common to both. The order of the variables is set by the difference between the Somers’ D values for the implicit and explicit variables; empty cells denote insufficient statistical significance. The composition of associations revealed by this analysis slightly differs from that of the chi-square analysis due to the different set of variables (only ordinal variables are used for Somers’ D) and to the peculiarities of the calculations (Study 5). “Ideologically biased” questions (given in quotation marks) started with “Do you agree or disagree…”; UR stands for United Russia, the most popular party, headed by Prime Minister D. Medvedev. The implicit attitude was measured by GATA, the explicit one by the question “Victory of which party at the upcoming election suits your interests the most?”.
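The directional screening can be sketched with SciPy’s somersd which, as we understand SciPy’s convention, returns D(y|x) with the second argument treated as the dependent variable; the ordinal codes below are invented for illustration.

```python
# Somers' D in both directions for one ordinal predictor; data are invented.
from scipy.stats import somersd

predictor = [1, 2, 2, 3, 3, 4, 4, 5]   # e.g. approval of the Prime Minister
implicit  = [1, 1, 2, 2, 3, 3, 4, 4]   # GATA-based ordinal attitude score
explicit  = [2, 1, 3, 2, 2, 4, 3, 4]   # conventional ordinal attitude score

d_implicit = somersd(predictor, implicit).statistic   # implicit as dependent
d_explicit = somersd(predictor, explicit).statistic   # explicit as dependent

# Keep the row only if at least one of the two values reaches the 0.05 threshold.
if max(abs(d_implicit), abs(d_explicit)) >= 0.05:
    print(round(d_implicit, 3), round(d_explicit, 3))
```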

Table 9. Sets of predicting factors for the implicit and explicit components of attitude

| Variables | Implicit as dependent, Somers’ D | Explicit as dependent, Somers’ D |
|---|---|---|
| Do you approve the activity of the Prime Minister | .346 | |
| “UR is able to ensure the country’s development” | .325 | |
| “UR is a party of real deeds” | .306 | |
| Do you approve the activity of the State Duma | .26 | |
| “UR fights for common people” | .251 | |
| “Most of the UR party’s members have high moral standards” | .236 | |
| “Real party’s activists took part in the UR’s primaries” | .137 | |
| Do you approve the reunification of Crimea with the RF | .077 | |
| Do you pay attention to the political parties’ position on Crimea’s reunification | .074 | |
| Do you think that Western sanctions were imposed because of Crimea | -.046 | |
| Do you approve the activity of President V. Putin (3rd wave) | .426 | .203 |
| I have travelled abroad in the last three years | .043 | -.052 |
| Do you trust the Minister of Defense S. Shoigu | .132 | .103 |
| Do you think the reunification of Crimea brings more advantages or disadvantages | .128 | .107 |
| “The state power should be changed only by lawful means” | .111 | .1 |
| Do you trust the Minister of Foreign Affairs S. Lavrov | .059 | .052 |
| Do you trust President V. Putin | .226 | .231 |
| “The head of the state should remain in power as long as possible” | .203 | .204 |
| “Most of the UR’s members are ordinary people” | .183 | .215 |
| Do you approve the activity of President V. Putin (2nd wave) | .173 | .205 |
| Do you approve the activity of President V. Putin (1st wave) | .149 | .26 |
| I have discussed political issues in social media | | -.117 |
| I have discussed political issues on Internet forums | | -.117 |
| I have read news on the Internet | | -.092 |
| I have commented news on the Internet | | -.097 |
| I have read news of culture and arts | | -.047 |
| In general, do you feel yourself secure or not | | .09 |
| I support the strengthening of the national legislation | | .148 |

The data in Table 9 supports the conclusion that implicit and explicit attitudes have independent sources: the implicit and explicit sets of predictors differ not only in size but also in the variables that constitute them. Most variables affecting the implicit attitude are indicators of ‘true’ beliefs and dispositions: “UR is able to ensure the country’s development”, “UR is a party of real deeds”, etc. Next to this core set, there are two remarkable variables: approval of the activity of the Prime Minister (the party’s official leader) and of the State Duma (in which UR has kept the majority for years). Surprisingly, the same does not apply to the explicit attitude. Crimean issues are present in the set of implicit drivers, but this is a temporary factor and most probably supports the general assumption that a stimulus first affects the unconscious sphere and only then is (or is not) introspected. Unlike the implicit-attitude factors, the explicit ones are mainly represented by self-reports of behavioral patterns: “I have discussed political issues in social media”, “I have read news of culture and arts”, etc. The main part of the common factors consists of variables of approval/trust: “Do you approve the activity of President V. Putin”, “Do you trust the Minister of Foreign Affairs S. Lavrov”, and so on. This set also includes several indicators of predispositions, like “The head of the state should remain in power as long as possible”, and behavioral self-reports, like “I have travelled abroad in the last three years”; however, these variables are not typical for a “common set”.

We might assume that the variables of assessment represent the true nature of the intermediate sector, in which both implicit and explicit attitudes are affected by the same factors. If so, we get a scheme in which beliefs and predispositions primarily affect implicit attitudes, while behavioral patterns mainly affect explicit ones. Certainly, we need more studies and proofs of this scheme, but it looks logical, supports the general theoretical model [14] and leads to a potentially important conclusion: in some cases, implicit and explicit attitudes mismatch and are driven by incongruent sets of factors.

Therefore, H2b (“implicit (GATA) and explicit (conventional) associations with other variables representing the basic ‘set of beliefs’ as per TRA/TPB will differ in scale and structure”) was reliably proven. All the above findings show that the mismatch between the GATA results and the conventional measurements of explicit attitudes is a common phenomenon, i.e., the ‘fraction’ of attitude which GATA reveals is a true implicit attitude: it is relatively better protected from conscious intervention, clearly not reducible to the explicit attitude, and has a still unclear but specific nature and sources.

***

Thus, GATA provides data which represent a fraction of attitude with accuracy sufficient to reliably differentiate objects relatively familiar to respondents; this fraction is incongruent with the traditionally measured explicit attitude due to its lesser dependence on conscious factors; when comparing the attitude revealed by GATA with the traditionally measured explicit attitude, we see the incongruence of the respective sets of associations with the basic ‘set of beliefs’, i.e., these attitudes most probably have their own nature and origin. Therefore, we proved the initial assumption that GATA reveals a true implicit attitude, although it is limited by the respondents’ knowledge of the tested objects (the lack of recognition leads to an expectable lack of differentiation). Certainly, more limitations and peculiarities will be identified in further research. Nevertheless, today GATA seems a promising functional tool in situations when more sophisticated analogues are too complex or too expensive: GATA needs neither special equipment and software nor special training of interviewers and provides a lot of data within a reasonable interview duration, at least in mass polls.

Is the development of GATA worth the effort? No doubt. Table 10 presents the data on the comparative accuracy of the explicit-only (‘control’) and two-factor (‘experimental’, combining explicit and implicit measurements) models:

Vote intention (VI): the share of respondents who choose a specific candidate or party when answering a direct vote-intention question (taken as the candidate’s future share of votes).

Vote intention confirmed (VIc): the same as VI but filtering out voters who gave a negative answer to the auxiliary question “Is your intention to vote for this candidate unchangeable?” (Y = cannot change, N = can change).

Likelihood to vote — vote intention (LVVI): the most common approach among basic forecast models, which considers the vote intentions only of those who promised to vote when answering the question “Will you vote in the upcoming elections of… or not?”.
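A simplified sketch of how the three baselines can be computed from respondent-level data is given below; the column names and the tiny dataset are invented, and the base definitions may differ in detail from those used in the studies.

```python
# Simplified computation of VI, VIc and LVVI for one candidate; data are invented.
import pandas as pd

df = pd.DataFrame({
    "vote_intention":  ["A", "A", "B", "A", None, "B"],          # direct VI question
    "intention_fixed": [True, False, True, True, None, True],    # "cannot change" answer
    "will_vote":       [True, True, True, False, False, True],   # promised to vote
})

candidate = "A"
vi   = (df["vote_intention"] == candidate).mean()
vi_c = ((df["vote_intention"] == candidate) & (df["intention_fixed"] == True)).mean()
lvvi = (df.loc[df["will_vote"] == True, "vote_intention"] == candidate).mean()

print(f"VI = {vi:.1%}, VIc = {vi_c:.1%}, LVVI = {lvvi:.1%}")
```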

Table 10. Experimental and control models: a prediction improvement trend

| Models | VI | VIc | LVVI | On average |
|---|---|---|---|---|
| Control models, average weighted error | | | | |
| State Duma 2016 | 25.3 % | 16.6 % | 24.9 % | 22.3 % |
| President 2017 | 44.5 % | 40.1 % | 39.1 % | 41.2 % |
| President 2018 | 24.7 % | 25.3 % | 9.6 % | 19.9 % |
| Experimental models, average weighted error | | | | |
| State Duma 2016 | 20.7 % | 9.9 % | 18 % | 16.2 % |
| President 2017 | 38 % | 25.7 % | 40.1 % | 34.6 % |
| President 2018 | 21.3 % | 26.1 % | 8.5 % | 18.6 % |
| Improvement, points of average weighted error | | | | |
| State Duma 2016 | 4.6 % | 6.7 % | 6.9 % | 6.1 % |
| President 2017 | 6.5 % | 14.5 % | -1 % | 6.6 % |
| President 2018 | 3.5 % | -0.8 % | 1.1 % | 1.3 % |
| On average, points of average weighted error | 4.8 % | 6.8 % | 2.3 % | 4.7 % |

The set of experimental models (VI, VIc, LVVI) differed from the control set only by an additional filter: the group of voters with a negative implicit attitude to the candidate was eliminated from the subsample of his ‘likely voters’, on the assumption that such voters with contradictory intentions have relatively fewer motives to invest time and effort in voting. The data is presented as an average error for the group of candidates with a result of 5 % or higher in the elections; every forecast value was compared with the real result. The general incremental accuracy effect is quite stable at about 4.7 %: at the level of average values, it was detected for all three models (VI: 4.8 %, VIc: 6.8 %, LVVI: 2.3 %) and for all three forecasting attempts (2016: 6.1 %, 2017: 6.6 %, 2018: 1.3 %). Out of the 9 aggregated results, only 2 revealed a slight negative effect, while 6 showed a strong positive effect (3.5–14.5 %) and 1 a slight positive one (1.1 %). Hopefully, this means that we can and most probably should keep trying to use GATA in mass polls in order to get wider access to the domain of implicit measurements.
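To illustrate the mechanics, the sketch below applies the implicit filter to an LVVI-style base and computes a weighted error against the official result; the data are invented, and weighting by the candidates’ actual shares is only one plausible reading of ‘average weighted error’, which the text does not define precisely.

```python
# Illustrative implicit filter and weighted-error computation; data are invented.
import pandas as pd

df = pd.DataFrame({
    "vote_intention": ["A", "A", "B", "A", "B", "A"],
    "will_vote":      [True, True, True, True, False, True],
    # negative implicit (GATA) attitude to the respondent's own declared choice
    "implicit_negative_to_choice": [False, True, False, False, False, False],
})

def lvvi_share(data, candidate):
    base = data[data["will_vote"]]
    return (base["vote_intention"] == candidate).mean()

control      = lvvi_share(df, "A")                                     # plain LVVI
experimental = lvvi_share(df[~df["implicit_negative_to_choice"]], "A")  # filtered LVVI

# Weighted absolute error over candidates with official results >= 5 %.
forecast = {"A": experimental, "B": 1 - experimental}
official = {"A": 0.62, "B": 0.38}
error = sum(official[c] * abs(forecast[c] - official[c]) for c in official) / sum(official.values())
print(f"control = {control:.1%}, experimental = {experimental:.1%}, weighted error = {error:.1%}")
```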


About the authors

O. L. Chernozub

Institute of Sociology of FCTAS RAS

Author for correspondence.
Email: 9166908616@mail.ru
Krzhizhanovskogo St., 24/35-5, Moscow, 117218, Russia

References

  1. Alwin D.F. Feeling thermometers versus 7-point scales: Which are better? Sociological Methods and Research. 1997; 25 (3).
  2. Anson I.G., Hellwig T. Economic models of voting. Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource. Wiley; 2015.
  3. Arcuri L., Castelli L., Galdi S. et al. Predicting the vote: Implicit attitudes as predictors of the future behavior of decided and undecided voters. Political Psychology. 2008; 29.
  4. Arrow K., Forsythe R., Gorham M. et al. The promise of prediction markets. Science. 2008; 320.
  5. Atanasov P. et al. Distilling the wisdom of crowds: Prediction markets versus prediction polls. Academy of Management Proceedings. 2015. https://doi.org/10.5465/AMBPP.2015.15192abstract.
  6. Baskakova Yu. Techniques and methods of political forecasting in the 2016-2018 elections. Elections after the Crimea. Fedorov V. (Ed). Moscow; 2018. (In Russ.).
  7. Celli F., Stepanov E.A., Poesio M., Riccardi G. Predicting Brexit: Classifying agreement is better than sentiment and pollsters. Proceedings of the Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media. Osaka; 2016.
  8. Choma B.L., Hafer C.L. Understanding the relation between explicitly and implicitly measured political orientation: The moderating role of political sophistication. Personality and Individual Differences. 2009; 47.
  9. Chernozub O.L. The two-component model of behavior factors: Evidences of orthogonality of explicit and implicit factors. RUDN Journal of Sociology. 2022; 22 (1).
  10. Chernozub O.L. Implicit factors and inconsistency of electoral behavior: from a theoretical concept to an empirical phenomenon. Monitoring of Public Opinion: Economic and Social Changes. 2020; No4.
  11. Chernozub O.L. Implicit factors and inconsistency of electoral behavior: From attitude to behavior. Monitoring of Public Opinion: Economic and Social Changes. 2020; 5.
  12. Etkind A.M. The color test of attitude. General Psychodiagnostics. Moscow; 1987. (In Russ.).
  13. Himmelfarb S., Eagly A.H. Orientations to the study of attitudes and their change. S. Himmelfarb, A.H. Eagly (Eds.). Readings in Attitude Change. New York; 1974.
  14. Fishbein M., Ajzen I. Predicting and Changing Behavior: The Reasoned Action Approach. New York-Hove; 2011.
  15. Ganser C., Riordan P. Vote expectations at the next level. Trying to predict vote shares in the 2013 German Federal Election by polling expectations. Electoral Studies. 2015; 40.
  16. Gayo-Avello D. A meta-analysis of state-of-the-art electoral prediction from Twitter data. Social Science Computer Review. 2013; 31.
  17. Graefe A. Accuracy of vote expectation surveys in forecasting elections. Public Opinion Quarterly. 2014; 78.
  18. Graefe A. Political Markets. Sage Handbook of Electoral Behavior; 2016.
  19. Green D.Ph. On the dimensionality of public sentiment toward partisan and ideological groups. American Journal of Political Science. 1988; 32 (3).
  20. Greenwald A.G., Poehlman T.A., Uhlmann E.L., Banaji M.R. Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology. 2009; 97 (1).
  21. Greenwald A.G., Smith C.T., Sriram N., Bar-Anan Y., Nosek B.A. Implicit race attitudes predicted vote in the 2008 U.S. Presidential Election. Analyses of Social Issues and Public Policy. 2009; 9.
  22. Jacoby W.G. Feeling thermometers. Candidate Evaluation Conference Proceedings. 1994. URL: http://www.electionstudies.org/conferences/1994Candidate/1994Candidate_Jacoby.pdf.
  23. Kennedy C. et al. An Evaluation of 2016 Election Polls in the United States. URL: https://www.aapor.org/getattachment/Education-Resources/Reports/AAPOR-2016-ElectionPolling-Report.pdf.aspx.
  24. Kiesler Ch.A., Collins B.E., Miller N. Attitude Change. A Critical Analysis of Theoretical Approaches. New York; 1969.
  25. Kou S.G., Sobel M.E. Forecasting the vote: A theoretical comparison of election markets and public opinion polls. Political Analysis. 2004; 12.
  26. Leigh A., Wolfers J. Competing Approaches to Forecasting Elections: Economic Models, Opinion Polling and Prediction Markets. IZA Discussion Papers. No. 1972. Bonn; 2006.
  27. Lewis-Beck M.S., Stegmaier M. Economic models of voting. The Oxford Handbook of Political Behavior. Ed. by J. Dalton, H.-D. Klingemann. Oxford University Press; 2007.
  28. Lupton R.N., Jacoby W.G. The Reliability of the ANES Feeling Thermometers: An optimistic assessment. Presentation at the 2016 Annual Meetings of the Southern Political Science Association. San Juan-Puerto Rico; 2016.
  29. Lüscher M. The Luscher Color Test. New York; 1990.
  30. Mamonov M.V., Gavrilov I.V., Vyadro M.A. Imitational features of the 2018 presidential elections and their impact on the next electoral cycle: Results of public opinion polls. Monitoring of Public Opinion: Economic and Social Changes. 2018; 4. (In Russ.).
  31. Markert C. Test Your Emotions. Wellingborough; 1980.
  32. Mercer A., Deane C., McGeeney K. Why 2016 election polls missed their mark? URL: http://www.pewresearch.org/fact-tank/2016/11/09/why-2016-election-polls-missed-their-mark.
  33. Metaxas P.T., Mustafaraj E., Gayo-Avello D. How (not) to predict elections: Privacy, security, risk and trust. 2011 IEEE Third International Conference on Social Computing. Boston; 2011.
  34. Murr A.E. The wisdom of crowd: Applying Condorcet’s jury theorem to forecasting US presidential elections. International Journal of Forecasting. 2015; 31.
  35. O’Keefe D.J. Persuasion: Theory and Research. Sage; 1990.
  36. Perugini M. Predictive models of implicit and explicit attitudes. British Journal of Social Psychology. 2005; 44.
  37. Roccato M., Zogmaister C. Predicting the vote through implicit and explicit attitudes: A field research. Political Psychology. 2010; 31.
  38. Rogers T., Aida M. Why Bother Asking? The Limited Value of Self-Reported Vote Intention. Harvard Kennedy School of Government. Faculty Research Working Paper Series. 2012. URL: http://EconPapers.repec.org/RePEc:hrv:hksfac:7779639.
  39. Rothschild D., Wolfers J. Forecasting Elections: Voter Intentions versus Expectations. 2012. URL: https://ssrn.com/abstract=1884644.
  40. Sturgis P., Baker N., Callegaro M. et al. Report of the Inquiry into the 2015 British General Election Opinion Polls. London; 2016.
  41. Tumasjan A., Sprenger T.O., Sandner P.G., Welpe I.M. Predicting elections with Twitter: What 140 characters reveal about political sentiment. Proceedings of the 4th International AAAI Conference on Weblogs and Social Media. AAAI Press; 2010.
  42. Vandenberghe F. On the coming end of sociology. Canadian Review of Sociology = Revue Canadienne de Sociologie. 2019; February. https://doi.org/10.1111/cars.12238.
  43. Whiteley P. Four reasons why the polls got the U.S. election result so wrong. URL: http://www.newsweek.com/polls-2016-us-elections-trump-potus-hillary-clinton-520291.
  44. Wilcox C., Sigelman L., Cook E. Some like it hot: Individual differences in responses to group feeling thermometers. Public Opinion Quarterly. 1989; 53 (2).
  45. Yarygin G., Yarygin O. Modeling of electoral process: From conceptual model to computer simulation. Azimuth of Science and Research. 2016; (1). (In Russ.).


Copyright (c) 2023 Chernozub O.L.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
