Part 1 here.
Part 2 here.
Study 3:
“Across five studies, priming God increased people’s willingness to take nonmoral risks. In Study 3, we tested our hypothesis that this effect occurs because reminders of God lead individuals to perceive themselves as protected—that is, that the risks present less danger.”
For this experiment, they went back to the online survey system and grabbed a hundred people.
“We randomly assigned participants to either the God condition
or the control condition from Study 1d*. In an ostensibly
unrelated survey, participants read three scenarios
that each described a risky decision (motorcycling without
a helmet, wilderness camping, and backcountry skiing).
*Participants in the God condition read a short paragraph
about God; participants in the control condition read a
short paragraph about a non-God-related topic (both
paragraphs taken from Wikipedia).”
Continuing in the methods:
“After reading each scenario, participants completed three items assessing their perceptions of danger:
‘If you did [e.g., go wilderness camping], what is the likelihood that you will get injured?’ (1 = extremely unlikely…)
‘If you did get injured, how serious do you think the injury would be?’ (1 = not serious at all…), and ‘If you did get
injured, how well would you be able to cope with the injury?’ (1 = not well at all…).
Finally, participants reported how willing they would be to take each risk (1 = not likely at all…).”
All scales were 1-7.
The specific wording is buried in the supplemental material that I haven’t been able to get my hands on.
And a ton of analysis that could have been reported is left out of this one. I don’t know if the authors were getting tired, if they intentionally omitted it, or if the journal told them to cut it to conserve space (journals demand a lot of brevity; the more articles you can fit in, the better). What’s left is extremely hard to follow, and you lose a lot of the ability to parse and sift through data that might be interesting (or uninteresting). In essence, they crammed the numbers into a stats program and flagged anything that hit statistical significance.
Based on their first analysis (via chi-squared), they determined that “scenario did not moderate the effect of condition.” That is, the type of scenario didn’t make a statistically significant difference in whether people attributed “more risk” in the primed (i.e., read-about-God) group versus the unprimed group. If I’m interpreting that correctly, which I may not be, because the language is extremely obfuscating and hard to follow.
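For what it’s worth, a “moderation” check like this can be approximated by testing a condition × scenario interaction term in a regression. Here’s a minimal sketch on entirely fabricated data — the numbers, the simulation, and the F-test approach are all my own assumptions, and the paper’s actual chi-squared procedure may well differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
condition = rng.integers(0, 2, n)             # 0 = control, 1 = God prime
cond = np.repeat(condition, 3).astype(float)  # long format: 3 scenarios per person
scen = np.tile([0, 1, 2], n)                  # motorcycle / camping / skiing
# Fabricated ratings with a condition effect that is identical across scenarios.
risk = 3 + 0.6 * cond + rng.normal(0, 1, 3 * n)

def rss(y, cols):
    """Residual sum of squares and parameter count from an OLS fit."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), X.shape[1]

d1, d2 = (scen == 1).astype(float), (scen == 2).astype(float)  # scenario dummies
rss_reduced, k_reduced = rss(risk, [cond, d1, d2])
rss_full, k_full = rss(risk, [cond, d1, d2, cond * d1, cond * d2])

# F-test on the condition x scenario interaction terms: a large p-value
# means no evidence that scenario moderates the condition effect.
df1 = k_full - k_reduced
df2 = 3 * n - k_full
f_stat = ((rss_reduced - rss_full) / df1) / (rss_full / df2)
p_value = stats.f.sf(f_stat, df1, df2)
print(f"interaction p = {p_value:.3f}")
```

Since the simulated effect is the same size in every scenario, the interaction test should (usually) come back non-significant — which is the situation the authors claim they were in.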
So because of that lack of “moderation,” they “collapsed [the] analyses across the three risky scenarios.” That is, they blended the scenarios together to cut down on the number of final outcomes. So instead of analyzing the data separately for each scenario, they just merged them all. My concern with that is the inter-item reliability of the scenarios, which they did provide (I omitted the figures for sentence clarity).
“We created an index of the nine subjective-danger-perception items
(with higher scores representing more perceived danger; α = .71).
Finally, participants reported how willing they would be to take each risk,
which yielded an index of risk taking (α = .60).”
The α is called (again, if I’m interpreting it correctly) Cronbach’s alpha, and can be used to determine, to some extent, how internally consistent a set of items is. Values of .71 and .60 indicate a fair amount of internal inconsistency. So on the one hand, if there is substantial divergence between the primed and unprimed groups, I believe that could be a contributing factor. But so could substantial deviation in the rankings themselves. And they don’t provide us the basic mean and standard deviation for each scenario.
So even if the scenario doesn’t make a substantial difference in the outcome, that determination is being made from fairly vague data to begin with. It may be less to look at, but it’s watering down the data with each step.
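For reference, Cronbach’s alpha is easy to compute yourself. This sketch (entirely made-up data, nothing from the study) shows how it rewards items that move together and drops as noise is added:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha. items: rows = participants, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
signal = rng.normal(size=(50, 1))
# Three perfectly redundant items: alpha is exactly 1.
redundant = np.repeat(signal, 3, axis=1)
print(round(cronbach_alpha(redundant), 2))  # 1.0
# Add independent noise to each item and alpha falls below 1.
noisy = redundant + rng.normal(scale=2.0, size=(50, 3))
print(cronbach_alpha(noisy))
```

The .71 and .60 the authors report sit in the territory where the items are related but far from interchangeable — which is the basis of my complaint above.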
In subsequent analysis, they found that
“participants in the God condition reported a higher willingness to take risks
(b = 0.61), and also perceived less danger associated with these risks
(b = −0.44) than did participants in the
control condition.”
Those effects aren’t spectacular, but if you want to throw qualifiers like “slight” around, I’d concede it.
But I can’t tell how they’re determining “perceived danger”. I assume that’s based on the answer to the “what is the likelihood that you will get injured?” question, but it could be to the “how serious do you think the injury will be?” question. Or both.
“When the effects of God condition and
perceived danger were both included in the model, the
indirect effect (i.e., through the mediator) was significant
(95% confidence interval), whereas the
direct effect (i.e., the effect of condition independent of the mediator)
was not significant (b = 0.28, p = .267).”
That is to say, they are using a
mediation model. It’s a way of looking for an indirect effect, or, some mediating variable. From Wikipedia: “Rather than a direct causal relationship between the independent variable and the dependent variable, a mediation model proposes that the independent variable influences the (non-observable) mediator variable, which in turn influences the dependent variable.”
The numbers are b’s, slopes, indicating level of association.
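To make the mediation logic concrete, here’s a minimal sketch on fabricated data. The a-path (−0.44) and direct effect (0.28) are borrowed from the quotes above, but everything else — the simulation, the danger-to-risk slope, the simple product-of-coefficients bootstrap — is my assumption, not necessarily the authors’ procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
cond = rng.integers(0, 2, n).astype(float)  # 0 = control, 1 = God prime
# Fabricated data: priming lowers perceived danger, and lower
# perceived danger raises willingness to take the risk.
danger = 4.0 - 0.44 * cond + rng.normal(0, 1, n)
risk = 5.0 + 0.28 * cond - 0.6 * danger + rng.normal(0, 1, n)

def coefs(y, *cols):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = coefs(danger, cond)[1]              # condition -> mediator
b = coefs(risk, cond, danger)[2]        # mediator -> outcome, holding condition fixed
c_prime = coefs(risk, cond, danger)[1]  # "direct effect" of condition
indirect = a * b                        # "indirect effect" through the mediator

# Bootstrap a 95% CI for the indirect effect; if it excludes zero,
# the indirect path is called significant.
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boot.append(coefs(danger[i], cond[i])[1] * coefs(risk[i], cond[i], danger[i])[2])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], direct = {c_prime:.2f}")
```

Notice the pattern the authors report: two negative paths (prime lowers danger, danger lowers willingness) multiply into a positive indirect effect, while the leftover direct effect can be small and non-significant.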
Tl;dr: people who read about God first were more likely to think they wouldn’t get hurt (or wouldn’t get hurt badly; we can’t tell exactly how the authors calculated “perceived danger” because they didn’t say explicitly), and those people were in turn more likely to say they would take a risk.
They conclude this study with their interpretation of the results thusly:
“…reminders of God increased willingness to
take risks because they evoked a feeling of safety from
potential harm.”
And that’s all that people are going to read from this. I mean, I struggled through the statistics. It’s not just you; it’s not just me. Very, very few people on Earth can just browse that kind of thing and judge the results on a first pass.
This study, assuming we treat all statistically significant results as meaningful (which, again, I advise against), demonstrated that having people read about God tended to make them think they would be less likely to be injured (or seriously injured) in a couple of unrelated hypotheticals of varying salience to the participants, salience which was never sussed out. What I mean by that: somebody who lives in Florida probably doesn’t do a lot of skiing. Somebody who rides the subway to work probably doesn’t ride a motorcycle much, with or without a helmet, nor, for that matter, might they go hiking in the wilderness much.
If we can draw a link between thinking about God and riding a motorcycle without a helmet, that’s worth knowing. But it only matters for people who ride motorcycles. It doesn’t matter what some guy thinks might happen to him if he gets in a motorcycle crash sans headgear if he’s never going to ride a bike in his life. What this is judging is how people respond to hypothetical questions.
Based on this exact same data, allow me to make the argument that reading a paragraph about God will tempt people into talking a slightly bigger game than if they read about “something else.” This doesn’t measure what anybody actually does, but what they think. What some guy who’s never skied in his life and never plans to thinks about his odds of getting injured in a skiing accident is pretty much worthless compared to what somebody ready to go down a mountain thinks. And even then, if that belief doesn’t influence behavior (e.g., skiers who think they’ll be fine ski more recklessly), then the point isn’t “salient”; it’s worthless.
And again, we don’t have the numbers for the change in mean response. If the God priming moved people’s likelihood ratings from 2.0 to 2.5, people still aren’t keen on doing anything. Let alone figuring out what they actually do.