Speculating on the role of context in the outcomes of interpretive programs.
Powell, Robert B.; Stern, Marc J.
Introduction
The interpretive equation suggests that successful interpretation
requires that an interpreter have knowledge not only of the
resource, but also of the audience (Lacome, 2003). With this
knowledge, interpreters can select and use appropriate techniques to
make meaningful connections for visitors. In other words, interpretation
is not a "one size fits all" prospect; selection and use of
appropriate techniques depends upon the characteristics of the audience
including their age, background, expectations, and motivations for
attendance. Although not explicitly accounted for in the interpretation
equation, setting and other context elements may also meaningfully
influence interpretive programs and their outcomes (Larsen, 2003;
Merriman & Brochu, 2005; Moscardo, 1999). Some suggest that
characteristics of the setting, attributes of the resource, and the
collective characteristics of the audience form integral parts of the
interpretive experience and should be accounted for in the planning and
implementation phases (Larsen, 2003; Merriman & Brochu, 2005;
Moscardo, 1999).
The other articles in this special issue explore which interpretive
techniques are most strongly associated with visitor outcomes across a
wide range of programs. But do certain techniques or approaches work
better or worse in particular contexts and with certain audiences? To
what extent does "context" influence visitor outcomes? This
paper explores interactions between the duration, topic, type, and
setting of programs, the nature of the interpreted resources, the size
and age makeup of the audience, and visitor outcomes. The results of
this study support the idea that context matters. We explore data
collected from 272 programs across 24 diverse units of the U.S. National
Park Service to build speculative hypotheses about which interpreter and
program characteristics may be more or less important in producing
positive visitor outcomes in different contexts.
Interactional theory
Interpretive programs and resulting visitor outcomes can be thought
of as an interaction between the characteristics of the audience, the
site/setting, the interpreter, and the interpretive program (Archer
& Wearing, 2003; Mayer & Wallace, 2008; Merriman & Brochu,
2005; Powell, Kellert, & Ham, 2009; Wearing & Wearing, 2001).
This notion of interactions between humans and their social and physical
environments influencing cognition and behavior is the main premise of
interactional theory (Altman & Rogoff, 1987; Stokols & Altman,
1987). Through the lens of interactional theory, visitor outcomes
associated with attending interpretive programs result from the
interaction of the characteristics of the program, the interpreter,
other audience members, and the setting in which the program occurs
(Archer & Wearing, 2003; Arnould & Price, 1993; Falk &
Dierking, 2000; Wearing & Wearing, 2001). This theoretical approach
acknowledges that interpretive programs are complex and promotes a
holistic view of the relationships between multiple factors that
together produce experiential outcomes (Altman & Rogoff, 1987;
Archer & Wearing, 2003; Brochu & Merriman, 2002; Wearing &
Wearing, 2001).
Potential influences of context: Audience, program, and setting
characteristics
Research and theory suggest that the makeup of the audience should
influence the techniques that are used as well as the outcomes of a
program (Ham, 2013; Larsen, 2003). Although it is assumed that audience
size and the age ranges of an audience will influence the selection of
interpretive techniques, few have examined which techniques work best
for different audience makeups (from all children to all adults) or how
audience makeup may influence outcomes. Coble and others (2013) provide
one exception, finding that the presence of children in an audience
reduced the formation of intellectual and emotional connections made by
audience members in U.S. National Park Service interpretive programs.
The bulk of the research on the effects of group size comes from
the formal education literature and suggests that smaller class sizes in
formal settings tend to produce improved student outcomes (Boozer &
Rouse, 2001; Finn & Achilles, 1999; Glass, 1982). In informal
settings, such as in the case of interpretation and environmental
education, there is less conclusive evidence. Powell and others (2009)
examined visitors who received interpretation while rafting down the
Colorado River through Grand Canyon National Park and found that group
size was negatively associated with knowledge gain. Coble and others
(2013) also found that as group size increased, intellectual connections
decreased among attendees of NPS interpretive programs. However, Stern and others
(2008) investigated the influence of group size at a residential
environmental education center for elementary school children and found
that larger groups were associated with improved awareness and interest
in discovery and learning.
It is often assumed that the longer someone engages with an
interpretive opportunity, whether an exhibit or a live interpretive
program, the better the outcomes. While some empirical research supports
this assertion, most studies have examined the influence of the number of
interpretive programs attended or the number of days of a residential
program rather than the duration of a single live interpretive
program (Stern et al., in press). For example, Powell and others (2009),
Stern and others (2008), Ballantyne and Packer (2005), and Coble and
others (2013) have all found that greater exposure led to more positive
outcomes. Museum and exhibit visitor studies also support the notion
that the longer one engages with an exhibit or collection of exhibits, the
better (Falk, 2004).
We found few studies that examined whether particular types of
interpretive programs were more or less effective in producing positive
audience outcomes. Coble and others (2013) found that interpretive films
were not as successful at producing intellectual connections as other
interpretive program types such as live interpretation, illustrated
programs, exhibits, and other conducted activities; no other trends were
found. Van Winkle (2012) also examined the differences between
electronic audio and live interpretation and found no differences in
learning outcomes. We also searched for prior research on whether particular
interpretive techniques were more effective in programs interpreting natural
resources vs. cultural resources, but were unable to find any.
Other factors that may influence cognitive, affective, and
behavioral outcomes include park setting, program location, and quality
of the resource (Archer & Wearing, 2003; Mayer & Wallace, 2008;
Merriman & Brochu, 2005; Powell et al., 2009; Wearing & Wearing,
2001). We refer to "park setting" in this study as a
description of where the park unit that provided the interpretation
program falls on the urban to remote spectrum. Different park units in
different settings have different resources and may attract different
visitors, each arriving with different motivations. However, it is still
unclear if certain program practices work better in particular
locations.
Natural environments, as opposed to built or indoor environments,
are thought to enhance affective outcomes such as interests, emotions,
and attitudes; cognitive outcomes such as learning; and psychological
restoration (Crompton & Sellar, 1981; Kahn & Kellert, 2002; R.
Kaplan & Kaplan, 1989; R. Kaplan, Kaplan, & Ryan, 1998; Kellert,
2005; Stern, Powell, & Hill, in press). However, several reviews of
the literature suggest that indoor settings can be more effective than
outdoor settings and other non-traditional settings for producing
certain student outcomes (Zelezny, 1999; Zink & Burrows, 2008).
Therefore, the influence of conducting live interpretation in indoor vs.
outdoor locations, and which program practices work best in each, may be
more nuanced than previously thought.
Another aspect of the setting with potential to influence the
outcomes of interpretation is the quality of the resource
itself. Larsen (2003) suggests that the basis of most interpretation is
a tangible resource, which has some iconic value that anchors the
program. In fact, research suggests that some resources and settings
with unique iconic or symbolic qualities may have powerful impacts on
visitors' affective, cognitive, and behavioral domains. For example,
extreme aesthetic natural and built environments have been associated
with peak, spiritual, extraordinary, and transformative experiences (S.
Kaplan, 1993; Laski, 1961; Otto, 1958; Powell, Brownlee, Kellert, &
Ham, 2012), increased feelings of satisfaction and enjoyment (Arnould
& Price, 1993; Powell et al., 2012), enhanced ethical concern for
nature and commitment to stewardship (Kellert, 1996; Powell et al.,
2012), enhanced emotional and cognitive connections (Kellert, 2005;
Powell et al., 2012), and feelings of awe and wonder (Kellert &
Farnham, 2002; Powell et al., 2012). Expansive, grand, and austere
landscapes also may promote feelings of humility, spirituality, and even
fear (Brown & Raymond, 2007; Galagher, 1993; Heintzman, 2009;
Heintzman & Mannell, 2003; Koecni, 2005; Powell et al., 2012;
Williams & Harvey, 2001). Therefore it seems appropriate to examine
whether the quality of a program's resource influences
participants' outcomes.
Finally, three intervening variables--the occurrence of accidents
or other negative events; the occurrence of positive events, such as the
sighting of a charismatic animal; and extreme weather--are also considered
in this study because of their potential to influence the interpretive
experience, and because they are considered largely outside the control
of the interpreter and the audience (Powell et al., 2009).
This study sought to better understand 1) the extent to which the
context variables discussed above influence visitor outcomes and 2)
whether certain forms of program delivery appear to work better or worse
in particular contexts. These forms of program delivery are divided into
interpreter characteristics and program characteristics and are
described in detail in Stern and Powell (article 1, this issue).
Methods
We observed 376 live interpretation programs conducted by the NPS
across 24 different park units. During these programs we recorded the
occurrence and extent of a wide range of characteristics pertaining to
program practices, interpreter attributes, and context (audience,
program, and setting). Program practices were drawn from an extensive
literature review that identified recommended practices (Skibins,
Powell, & Stern, 2012). Interpreter attributes were largely
identified from a review of the communications and education literature,
although many are also referenced in the interpretation literature (see
Stern & Powell, article 1, this issue). For a complete list, see
Stern and Powell (article 1, this issue).
Immediately after each interpretive program, we administered short
questionnaires to attendees who were over the age of 15 to gauge the
influence of these programs on three dependent variables (Table 1). The
first dependent variable measured program attendees' level of
satisfaction, using a single survey item that asked visitors to rate
their overall level of satisfaction with the program they had just
attended on a scale ranging from 0 ("terrible") to 10
("excellent"). The second dependent variable, "visitor
experience and appreciation," was composed of five survey items.
The third dependent variable, "behavioral intentions," was
composed of two survey items that gauged the program's influence on
attendees' intentions to change future behaviors in the park and at
home. The items comprising the two scales were measured using a
five-point Likert-type scale, with answer choices: Not at all (1), A
little (2), Somewhat (3), A moderate amount (4), and A great deal (5).
Composite scores were created for each of the scales by taking the mean
of all items.
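As a concrete illustration of this scale construction, the following minimal sketch shows how such composite scores (and the Cronbach's alpha reliabilities reported in Table 1) can be computed; the data frame and column names are hypothetical placeholders, not the study's actual variables.

    # Minimal sketch of the composite-score construction described above.
    # The respondent data and column names are hypothetical placeholders.
    import pandas as pd

    # Each row is one respondent; the five "visitor experience and appreciation"
    # items use the 1-5 response scale described in the text.
    items = pd.DataFrame({
        "more_enjoyable":    [5, 4, 5, 3],
        "more_meaningful":   [4, 4, 5, 3],
        "park_appreciation": [5, 3, 4, 4],
        "knowledge":         [4, 4, 5, 3],
        "nps_appreciation":  [4, 3, 4, 3],
    })

    # Composite score for each respondent = mean of all items in the scale.
    vea_composite = items.mean(axis=1)

    def cronbach_alpha(item_data: pd.DataFrame) -> float:
        """Cronbach's alpha computed from item-level responses (items as columns)."""
        k = item_data.shape[1]
        item_variances = item_data.var(axis=0, ddof=1).sum()
        total_variance = item_data.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    print(vea_composite)
    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")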
From the 376 live interpretation programs, 64 were eliminated from
analyses because of missing data or low response rates. We then divided
the remaining 312 programs into those that served fewer than five people
(n = 40) and those that served five or more (n = 272) because literature
suggests that small programs are inherently different phenomena from
larger programs (Forist, 2003; McManus, 1987, 1988). We use the
five-and-over sample in this paper because of the larger sample size,
except for when examining the influence of group size. In this study,
the interpretive program served as our unit of analysis. Therefore, all
dependent variables were aggregated to the program level by calculating
the mean score for each program (Table 1). For further information
regarding sampling, data collection, data cleaning, dependent variable
development procedures, program practices, and interpreter
characteristics see Stern and Powell (article 1, this issue).
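A minimal sketch of this program-level aggregation, assuming a respondent-level table keyed by a hypothetical program identifier (the column names are illustrative, not the study's actual variables), is shown below.

    # Minimal sketch of aggregating respondent-level outcome scores to the
    # program level (the unit of analysis). Column names are hypothetical.
    import pandas as pd

    respondents = pd.DataFrame({
        "program_id":              [101, 101, 101, 102, 102],
        "satisfaction":            [9, 8, 10, 7, 9],
        "experience_appreciation": [4.6, 4.2, 4.8, 3.9, 4.4],
        "behavioral_intentions":   [3.0, 2.5, 3.5, 2.0, 3.0],
    })

    # Each program's score on each dependent variable is the mean across all
    # of its surveyed attendees.
    program_level = respondents.groupby("program_id").mean()
    print(program_level)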
The audience, program, and setting characteristics under
investigation included two continuous variables, four categorical
variables, and two ordinal variables (Table 2). The two continuous
variables included group size and program duration. The four categorical
variables included the program topic, the program type, the park
setting, and the location of the program. The two ordinal
descriptors--the ratio of children to adults in the audience and quality
of the resource--were recorded by the researchers in the field. Finally
three intervening variables--the occurrence of extreme weather, the
occurrence of accidents or other negative events, and the occurrence of
positive events--were recorded because of their potential for
influencing the interpretive experience. Table 2 provides a definition
for each variable, an explanation of its measurement, and the mean or
frequency depending upon the type of variable.
Results
How did context influence outcomes?
We first examine whether particular context variables are directly
related to different outcomes. In other words, do certain contexts tend
to produce different results? We also examine whether certain program
characteristics or interpreter delivery styles are more prevalent in
different contexts.
Group Size: The number of attendees at the 312 interpretive
programs included in this analysis ranged from one person to
approximately 600 people. The mean audience size was 48 and the median
number of attendees was 17. When examining the correlation between the
size of the audience and outcomes, we found no consistent relationships
with satisfaction or the visitor experience and appreciation program
outcomes. However, as audience numbers increased, programs tended to
record greater audience intentions to change behaviors (r = .127; p =
.031). As audience sizes increased, interpreters also tended to score
higher in confidence (r = .237; p < .001), organization of their programs
(r = .167; p = .002), and humor quality (r = .213; p < .001).
However, they also tended to be more formal (r = .346; p < .001) and to
provide less physical engagement (r = -.140; p = .009) and verbal engagement
(r = -.308; p < .001).
Ratio of children to adults: In programs with five or more
attendees, 9% of the programs (n = 25) had mostly children present; 31%
(n = 82) had roughly an equal mix of adults and children; 49% (n = 132)
had mostly adults; and 11% (n = 29) had all adults. The higher the ratio
of children to adults, the higher the behavioral intentions score (r =
.182; p = 0.003). In other words, the more children present, the more
likely adult participants were to report that the program had changed
their behavioral intentions. Programs with higher ratios of children to
adults were more commonly multisensory (r = .143; p = .019) and
contained elements of novelty (r = -.133; p = .029). Interpreters were
more likely to share their own personal stories (r = .151; p = 0.014)
when more adults were present relative to children. Programs with all
adults were more commonly solely fact-based than those where children
were present (Pearson [chi square] = 7.6; p = .006).
Program duration: Advertised program lengths ranged from 15 minutes
to four hours. Actual program lengths ranged from 10 minutes to three
hours. The average program length was just under 49 minutes. No
statistically significant relationships were observed between program
duration and visitor outcomes.
Program focus: One-hundred and seventy (63%) of the programs
focused primarily on cultural heritage; 70 (26%) had a primary focus on
the natural environment. Twenty-nine (11%) had a dual focus. Behavioral
intentions scores were statistically higher for nature-based programs
(means: 3.05 vs. 2.84, t = 2.2, p = 0.026; Cohen's d = 0.33). No
other statistically significant differences were noted in overall
outcomes. In interviews prior to the programs, interpreters were more
likely to express behavioral change as an intended outcome for
nature-focused programs as opposed to culturally focused programs ([chi
square] = 7.4; p = .007).
Program type: Programs included guided walks and tours (n = 161);
talks, slide shows, and multi-media presentations (n = 98);
demonstrations (n = 5); and activities (n = 8). Guided walks/tours and
stationary talks made up 95% of the programs we observed. No
statistically significant differences in outcomes between program types
were observed.
Urban vs. remote: Within our sample of programs with five or more
attendees, 91 (33%) programs took place in urban parks, 50 (18%) took
place in urban-proximate parks, and 131 (48%) took place in remote
parks. There were no significant differences in outcomes based upon
proximity to urban centers.
Indoors vs. outdoors: Seventy-two percent (n = 195) of programs took
place outdoors; 20% (n = 55) took place indoors; and 8% (n = 22) used
both indoor and outdoor settings. Visitor experience and appreciation
scores tended to be greater following programs that took place entirely
outdoors when compared to programs that took place entirely indoors
(means: 4.45 vs. 4.33; t = 2.6; p = 0.011; Cohen's d = 0.36) or
programs that had both indoor and outdoor components (means: 4.45 vs.
4.25; t = 2.1; p = 0.039; Cohen's d = 0.55). Indoor programs also
tended to have larger audiences than programs conducted outdoors (means:
171.79 vs. 24.87; t = 8.8; p < .001; Cohen's d = 0.95).
Resource quality: We rated the quality of the resource where the
program occurred. Forty-nine percent of program resources were rated as
iconic or grandiose; 38% were rated as pleasant but not iconic; and 13%
were rated as unimpressive or generic. The mean on the scale was 2.37
(s.d. = 0.69). The quality of the resource did not exhibit any
consistent relationships with program outcomes.
Exceptional events: Thirty-four programs (13%) experienced negative
events such as interruptions, technical difficulties, and accidents.
Nine (3%) of the programs experienced notably bad weather. Only five
programs (2%) experienced unexpected positive events, such as a rare
animal sighting. We combined bad weather and negative events and
conducted a means comparison between these programs and those without
negative circumstances. Programs with negative circumstances (n = 43)
exhibited significantly lower satisfaction (means: 8.70 vs. 8.99; t =
2.8; p = 0.006; Cohen's d = 0.33) and visitor experience and
appreciation scores (means: 4.25 vs. 4.44; t = 3.6; p < .001;
Cohen's d = 0.43) than programs without these distractions. The
small number of programs that experienced positive unexpected events
precluded further analysis.
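For readers less familiar with the statistics used throughout this section, the following minimal sketch illustrates the kind of means comparison reported above: an independent-samples t-test with Cohen's d as the effect size. The values are illustrative placeholders, not the study's data.

    # Minimal sketch of the means comparisons reported above: an independent-
    # samples t-test plus Cohen's d. The values below are illustrative only.
    import numpy as np
    from scipy import stats

    group_a = np.array([3.2, 2.9, 3.4, 3.0, 2.8])   # e.g., program-level scores, context A
    group_b = np.array([2.7, 2.9, 2.6, 3.0, 2.8])   # e.g., program-level scores, context B

    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    # Cohen's d using the pooled standard deviation.
    n1, n2 = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                         (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
    cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")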
Which programmatic practices and interpreter attributes appear to
work better in different contexts?
To examine whether different programmatic practices and interpreter
attributes influence outcomes better in particular contexts and
settings, we split the sample in the following ways: programs with
larger and smaller proportions of children in the audience, culturally
focused vs. environmentally focused programs, programs conducted in
remote vs. urban parks, and indoor vs. outdoor programs. To ensure
adequate sample sizes, we used the sample of programs with five or more
attendees for each analysis.
We examined the relationships between interpreter and program
characteristics and visitor outcomes within each context. We report only
characteristics that show at least one statistically significant
relationship with an outcome.
When a correlation coefficient for a particular program practice
was significant in one context and not in another, we used the Fisher
r-to-z transformation to assess the significance of these differences. The
Fisher r-to-z transformation compares correlation coefficients of different
groups, taking into account their respective sample sizes. The test
yields a z-score and associated p-value. These statistics provide a more
stringent criterion for distinguishing differences in correlation
coefficients across the subsamples and helped us avoid Type I errors
(concluding that a relationship exists when sufficient evidence
is lacking to support it). We have bolded and shaded these significant
differences (z-score at p < 0.05) in the subsequent correlation
tables. To further evaluate differences in binary variables'
relationships to outcomes, we only highlight instances where the mean
score in one subsample is significant at p < 0.01 and the other is
not statistically significant (p > 0.05). Our goal in these analyses
is to take a conservative approach to identifying practices that appear
to operate differently in different contexts. Because the sample sizes
shrink rapidly as we split the data into subsamples, we acknowledge that
the emergent patterns are speculative rather than definitive trends.
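A minimal sketch of the Fisher r-to-z comparison described above appears below; the correlations and sample sizes are illustrative placeholders, not results from the study.

    # Minimal sketch of comparing two independent correlation coefficients via
    # the Fisher r-to-z transformation. Inputs are illustrative placeholders.
    import numpy as np
    from scipy import stats

    def compare_correlations(r1: float, n1: int, r2: float, n2: int):
        """Return the z-score and two-tailed p-value for the difference r1 - r2."""
        z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher r-to-z transform
        se_diff = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        z = (z1 - z2) / se_diff
        p = 2 * stats.norm.sf(abs(z))                    # two-tailed p-value
        return z, p

    # e.g., a practice correlated r = .40 with an outcome in one subsample
    # (n = 107) and r = .10 in the other (n = 161).
    z, p = compare_correlations(0.40, 107, 0.10, 161)
    print(f"z = {z:.2f}, p = {p:.3f}")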
Adult audiences vs. audiences with children: Tables 3 and 4
summarize relationships between program and interpreter characteristics
and visitor outcomes in programs with different ratios of children to
adults in their audiences. The column labeled "adults"
represents programs in which adults made up a clear majority of the
audience (60% of programs). The column labeled "children"
represents programs with an equal or greater number of children compared
to adults (40% of programs). Only characteristics showing at least one
statistically significant relationship with an outcome are presented.
While several program practices and interpreter attributes were
consistently important irrespective of audience, several appeared to be
significant only for audiences with a large number of
children and differed significantly from the mostly-adult and all-adult
subsamples. To determine which of these differences might be the most
meaningful, we conducted Fisher r-to-z transformations to compare the
correlation coefficients of the different groups. We have bolded and shaded
the differences in Table 3 (and subsequent correlation tables) that
yielded a statistically significant z-score at p < 0.05.
These analyses reveal several characteristics that had stronger
relationships to outcomes in programs with more children than they did
in programs with few or no children. Confidence of the interpreter
was more strongly linked with positive changes in behavioral intentions in
programs with more children (z = 2.01; p = 0.01). Appropriate for the
audience was more strongly linked with behavioral intentions as well (z
= 2.72; p < 0.01). Appropriate logistics and audibility were more
strongly linked with satisfaction (z = 2.30; p = 0.01 and z = 1.71; p =
0.04, respectively) and visitor experience and appreciation (z = 2.88; p
< 0.01 and z = 2.32; p = 0.01, respectively) in programs with more
children. Humor quality (z = 2.40; p < 0.01) and humor quantity (z =
2.25; p = 0.01) were also more predictive of visitor experience and
appreciation in programs with more children. Differences noted in
t-tests did not meet our threshold.
In short, the results suggest that most of the key best practices
identified in Stern and Powell (article 1, this issue) cut across
contexts. However, certain program characteristics may be particularly
beneficial with audiences dominated by children. These include
exhibiting confidence, using humor, ensuring audibility, gearing program
content and delivery style to the specific audience, and paying careful
attention to appropriate logistics.
Natural vs. cultural focused programs: We ran a similar set of
analyses for nature-focused vs. culture/history-focused programs (Tables
5 and 6). For this analysis, we removed programs with equally balanced
nature-based and cultural-based content because of their small sample
size (n = 29). There were 70 nature-focused programs and 170
cultural/history-focused programs. The results suggest a consistent list
of program elements that are significant in both natural and cultural
programs. However, three interpreter characteristics appeared to have
different influences on outcomes according to our criteria. Humor
quantity was positively linked with satisfaction (z = 1.69; p = .04) and
visitor experience and appreciation (z = 2.03; p = .02) in cultural
programs but not in nature-based programs. Making a false assumption
about the audience was negatively related to visitor experience and
appreciation (z = -2.39; p < 0.01) in nature-based programs but not
in cultural programs. Sarcasm (z = -1.97; p = .02) was negatively
related to behavioral intentions in the nature-based programs but not
cultural programs. Differences noted in t-tests did not meet our
threshold.
In summary, it appears that making false assumptions about the
audience and sarcasm may be more damaging to visitor outcomes in
nature-focused programs than in cultural programs. Meanwhile, additional
attempts at humor may have more positive influences on visitor outcomes
in cultural programs as opposed to nature-based programs.
Urban vs. remote parks: Within our sample of programs with five or more attendees,
91 programs took place in urban parks, 50 took place in urban-proximate
parks, and 131 took place in remote parks. Because of the small number
of programs within the urban-proximate park subsample, we dropped this
group from the analysis. We thus explored only differences between
programs occurring in urban and remote park units. When examining the
relationship between location, outcomes and program and interpreter
characteristics, certain variables appeared more predictive of outcomes
in certain areas.
Tables 7 and 8 summarize relationships between program and
interpreter characteristics and outcomes in both urban and remote parks.
Again, most previously identified "best practices" (Stern
& Powell, this issue) cut across park types. However, four
interpreter delivery styles and two program characteristics displayed
potentially meaningful differences in their relationships to outcomes.
Sarcasm showed more positive relationships with satisfaction (z = 2.11;
p = 0.02) and visitor experience and appreciation (z = 2.44; p <
0.01) in urban parks and a negative relationship with changes in
behavioral intentions in remote parks (z = -1.94; p = 0.03). Surprise
exhibited more positive relationships with changes in behavioral
intentions in remote park units (z = 3.15; p < 0.01). Humor quantity
was more positively linked with satisfaction (z = 2.82; p < 0.01) and
visitor experience and appreciation (z = 3.26; p < 0.01) in urban
settings. Multisensory engagement was positively linked to satisfaction
in urban settings (z = 1.01; p = 0.04), and audibility was more
positively linked to visitor experience and appreciation in urban
settings (z = 3.15; p = 0.05). Moreover, t-tests revealed that
appropriate pace was more positively related to visitor experience and
appreciation in remote settings than in urban settings.
In summary, sarcasm appears to be significantly more effective with
audiences who visit urban parks than those who visit remote parks. In
fact, it actually exhibited positive relationships with attitudinal
outcomes (satisfaction and visitor experience and appreciation) in urban
settings and a negative relationship with behavioral intentions in
remote settings. Meanwhile, the element of surprise may be more
effective for audiences who visit remote parks. Maintaining an
appropriate pace may also be a more relevant concern for programs in
remote parks than in urban parks. Focusing more heavily on humor and
multisensory engagement may be more effective in urban settings.
Moreover, audibility may be more of a meaningful issue in urban settings
than in remote settings.
Indoor vs. outdoor programs: We also compared programs that took
place indoors vs. programs that took place outdoors (Tables 9 and 10).
For this analysis, we removed programs that took place both indoors and
outdoors because of the small sample size (n = 22). There were 55
programs that took place completely indoors and 195 programs that
occurred solely outdoors. Six program and interpreter characteristics
showed significantly different relationships with observed outcomes
across the two contexts. Confidence (z = 1.65; p = 0.05), consistency (z
= 2.76; p < 0.01), and organization (z = 2.59; p < 0.01) were each
more strongly related to more positive visitor experience and
appreciation in outdoor programs. Physical engagement exhibited a
significant positive relationship with visitor experience and
appreciation in outdoor programs and a significant negative relationship
in indoor programs (z = 2.86; p < 0.01). Multisensory engagement
showed a more positive relationship with behavioral intentions in
outdoor settings than in indoor settings (z = 1.84; p = 0.03). T-tests
revealed that appropriate pace was more positively related to both
satisfaction and visitor experience and appreciation in outdoor
settings.
In summary, confidence, consistency, organization, and pace may be
more important drivers of outcomes in outdoor settings than in indoor
settings, though confidence and organization appear to be clearly
important in both. Indoor audiences may be less comfortable
with higher degrees of physical engagement than outdoor
audiences. Multisensory engagement was also more positively linked with
changes in behavioral intentions for outdoor audiences than for indoor
audiences. Finally, maintaining an appropriate pace was a better
predictor of attitudinal outcomes (satisfaction and visitor experience
and appreciation) in outdoor programs than it was in indoor programs.
Discussion and Conclusion
This study sought to better understand 1) the extent to which
context influences outcomes for interpretive program attendees and 2)
which program practices and interpreter attributes may work best in
particular contexts. We first explored the potential influence of
context. We examined the size of the audience and its age makeup,
program characteristics such as duration, topic, and type, and
characteristics of the setting including proximity to urban centers,
program location (indoor vs. outdoor), and resource quality by testing
their relationships to three outcomes: satisfaction, visitor experience
and appreciation, and behavioral intentions. In these analyses, there
were several trends. First, we found that as group size increased,
intentions to perform stewardship behaviors also increased. One
explanation for this trend could be the exertion of normative pressure
from peers or other audience members to change behaviors (see Ajzen,
1992; Ham et al., 2007). However, we did not test this hypothesis.
Second, we found that as the number of children in an audience
increased, intentions to change behaviors increased. One explanation for
this trend may be that an audience with more children may foster
intergenerational learning (Ballantyne, Fien, & Packer, 2001; Duvall
& Zint, 2007). Also, programs that served audiences with more
children tended to be less fact-based and were more commonly
multisensory and novel. Theory and research on behavior change support
the notion that presenting facts, or attempting to increase knowledge,
has little to do with whether someone will change their behavior (e.g.,
Ham, 2013; Stern & Powell, this issue). We also found that programs
that occurred outdoors produced greater visitor experience and
appreciation in their audiences. This finding supports the notion that
outdoor settings may enhance more emotive and affective outcomes, such
as enjoyment and appreciation in participants (e.g., Kahn & Kellert,
2002; R. Kaplan et al., 1998; Kellert, 2005). These outdoor programs
also tended to have smaller audiences. This combination of a more
intimate social environment coupled with an outdoor setting may further
enhance outcomes.
To investigate and then develop hypotheses about whether certain
practices might work better or worse in particular contexts, we split
our sample of interpretive programs based on four contextual variables:
programs with greater vs. lesser proportions of children in the
audience; culturally focused vs. nature-focused programs; programs
conducted in remote vs. urban parks; and indoor vs. outdoor programs.
We compared relationships between program practices and interpreter
attributes and outcomes within each subsample. We then examined these
differences using more stringent thresholds to determine which might be
indicative of a potentially meaningful trend warranting the development
of a hypothesis. Several trends emerged across these four comparisons.
First, a consistent list of programmatic practices and interpreter
attributes appear important for achieving better visitor outcomes across
most contexts. These include confidence, authentic emotion and charisma,
organization, connection, verbal engagement, appropriate for audience,
clear message, responsiveness, and fact-based messaging (negative).
These findings largely corroborate the results of our analyses in Stern
and Powell, articles 1 and 4 this issue, and Powell and Stern, article 2
this issue. Despite the consistent performance of some program practices
across context, we did identify program characteristics that appeared to
perform differently in particular contexts (Table 11).
While most program and interpreter characteristics performed
similarly in programs containing different adult-to-child ratios,
certain characteristics appeared to be more beneficial with younger
audiences. These included confidence, using humor, ensuring audibility,
gearing program content and delivery style to the specific audience, and
paying careful attention to appropriate logistics. Similarly, few
potentially meaningful differences surfaced between nature-focused and
culturally focused programs in terms of the characteristics most
strongly associated with outcomes. Making false assumptions about the
audience met with less positive attitudinal visitor outcomes
(satisfaction and visitor experience and appreciation) and using sarcasm
exhibited a negative relationship with changes in behavioral intentions
in nature-focused programs. Meanwhile, humor met with more positive
attitudinal visitor outcomes in cultural programs.
We found similar trends with the relative influence of sarcasm and
humor when comparing urban vs. remote parks. Each exhibited stronger
positive links with attitudinal outcomes in urban parks, and sarcasm was
negatively related to behavioral outcomes in remote parks. Focusing more
heavily on humor and multisensory engagement may be more effective in
urban settings. Moreover, audibility may be more of a meaningful issue
in urban settings than in remote settings. Our analyses suggest that
maintaining an appropriate pace may be more important not only in remote
settings than in urban settings, but also in outdoor settings than
in indoor settings.
Confidence, consistency, organization, and pace may also be more
important drivers of outcomes in outdoor settings than in indoor
settings, though confidence and organization appeared to be clearly
important in both. Physical engagement was positively linked to
attitudinal outcomes in outdoor programs and negatively associated with
the same outcomes in indoor programs. This suggests that audiences of
indoor programs may have different expectations than audiences of
outdoor programs and may not be as comfortable with physical engagement.
Overall, our analyses suggest that most of the "best
practices" identified in the broader sample (Stern & Powell,
this issue) are important regardless of context. However, some program
and interpreter characteristics may operate differently in different
settings and across contexts. We submit that all of the
contextual differences described herein are speculative and would
require additional targeted investigation to validate. While we are
confident that our overall sample represents a reasonable approximation
of the diversity of interpretive programs across the NPS, we are less
confident in the representativeness of each subsample. As our sample
size is reduced, generalizability is weakened. As such, we suggest that
the results of these contextual analyses should be thought of as
hypotheses that could be further investigated to test their validity.
The results, however, suggest that we can be confident in saying:
context matters! Thus we urge researchers to design studies that can
refine our understanding of how context influences outcomes, and which
program practices and interpreter attributes work best in particular
contexts.
References
Ajzen, I. (1992). Persuasive Communication Theory in Social
Psychology: A Historical Perspective. In M. J. Manfredo (Ed.),
Influencing Human Behavior (pp. 1-28). Champaign, IL: Sagamore
Publishing, Inc.
Altman, I., & Rogoff, B. (1987). World view in psychology:
Trait, interactionist, organismic, and transactionalist approaches. In
D. Stokols & I. Altman (Eds.), Handbook of Environmental Psychology
(pp. 1-40). New York: John Wiley.
Archer, D., & Wearing, S. (2003). Self, space, and interpretive
experience: The interactionism of environmental interpretation. Journal
of Interpretation Research, 8(1), 7-23.
Arnould, E. J., & Price, L. (1993). River Magic: Extraordinary
Experience and the Extended Service Encounter. Journal of Consumer
Research, 20(1), 24-45.
Ballantyne, R., Fien, J., & Packer, J. (2001). Program
Effectiveness in Facilitating Intergenerational Influence in
Environmental Education: Lessons From the Field. The Journal of
Environmental Education, 32(4), 8-15. doi: 10.1080/00958960109598657
Boozer, M., & Rouse, C. (2001). Intraschool variation in class
size: Patterns and implications. Journal of Urban Economics, 50(1),
163-189.
Brochu, L., & Merriman, T. (2002). Personal interpretation:
Connecting your audience to heritage resources. Fort Collins, CO:
InterpPress.
Brown, G., & Raymond, C. (2007). The relationship between place
attachment and landscape values: Toward mapping place attachment.
Applied Geography, 27(2), 89-111. doi:
http://dx.doi.org/10.1016/j.apgeog.2006.11.002
Crompton, J. L., & Sellar, C. (1981). Do Outdoor Education
Experiences Contribute to Positive Development in the Affective Domain?
The Journal of Environmental Education, 12(4), 21-29. doi:
10.1080/00958964.1981.9942638
Duvall, J., & Zint, M. (2007). A Review of Research on the
Effectiveness of Environmental Education in Promoting Intergenerational
Learning. The Journal of Environmental Education, 38(4), 14-24. doi:
10.3200/joee.38.4.14-24
Falk, J. (2004). The director's cut: Toward an improved
understanding of learning from museums. Science Education, 88(S1),
S83-S96. doi: 10.1002/sce.20014
Falk, J., & Deirking, L. D. (2000). Learning from Museums.
Walnut Creek, CA: AltaMira Press.
Finn, J. D., & Achilles, C. M. (1999). Tennessee's class
size study: Findings, implications, misconceptions. Educational
Evaluation and Policy Analysis, 21(2), 97-109.
Forist, B. (2003). Visitor use and evaluation of interpretive
media. A report on visitors to the National Park System. National Park
Service Visitor Services Project.
Gallagher, W. (1993). The power of place. New York: Poseidon.
Glass, G. V. (1982). School class size: Research and policy.
Beverly Hills, CA: Sage Publication.
Ham, S. H., Brown, T. J., Curtis, J., Weiler, B., Hughes, M., &
Poll, M. (2007). Promoting persuasion in protected areas: A guide for
managers. Developing strategic communication to influence visitor
behavior. Southport, Queensland, Australia: Sustainable Tourism
Cooperative Research Centre.
Heintzman, P. (2009). Nature-Based Recreation and Spirituality: A
Complex Relationship. Leisure Sciences, 32(1), 72-89. doi:
10.1080/01490400903430897
Heintzman, P., & Mannell, R. C. (2003). Spiritual Functions of
Leisure and Spiritual Well-Being: Coping with Time Pressure. Leisure
Sciences, 25(2-3), 207-230. doi: 10.1080/01490400306563
Kahn, P. H., & Kellert, S. R. (Eds.). (2002). Children and
nature: Psychological, sociocultural, and evolutionary investigations.
Cambridge, MA: MIT Press.
Kaplan, R., & Kaplan, S. (1989). The experience of nature: A
psychological perspective. Cambridge: Cambridge University Press.
Kaplan, R., Kaplan, S., & Ryan, R. L. (1998). With people in
mind: Design and management of everyday nature. Washington DC: Island
Press.
Kaplan, S. (1993). The role of natural environment aesthetics in
the restorative experience. St. Paul, MN: U.S. Department of
Agriculture, U.S. Forest Service.
Kellert, S. R. (1996). The Value of Life. Washington, D.C.: Island
Press.
Kellert, S. R. (2005). Building for life: Designing and
understanding the human-nature connection. Washington DC: Island Press.
Kellert, S. R., & Farnham, T. (Eds.). (2002). The Good in
Nature and Humanity: Connecting Science, Religion, and Spirituality with
the Natural World. Washington, DC: Island Press.
Konečni, V. (2005). The aesthetic trinity: Awe, being moved,
thrills. Bulletin of Psychology and the Arts, 5, 27-44.
Lacome, B. (2003). The interpretation equation. In D. Larsen (Ed.),
Meaningful interpretation: How to connect hearts and minds to places,
objects, and other resources. Eastern National.
Larsen, D. (Ed.). (2003). Meaningful interpretation: How to connect
hearts and minds to places, objects, and other resources. Eastern
National.
Laski, M. (1961). Ecstasy: A study of some secular and religious
experiences. London: Cressett Press.
Mayer, C. C., & Wallace, G. (2008). The interpretive power of
setting: Identifying and protecting the interpretive potential of the
internal and external setting at Copan Archaeological Park, Honduras.
Journal of Interpretation Research, 13(2), 7-29.
McManus, P. M. (1987). It's the company you keep: The social
determination of learning-related behaviour in a science museum.
International Journal of Museum Management and Curatorship, 6(3),
263-270. doi: 10.1080/09647778709515076
McManus, P. M. (1988). Good companions: More on the social
determination of learning-related behaviour in a science museum.
International Journal of Museum Management and Curatorship, 7(1), 37-44.
doi: 10.1080/09647778809515102
Merriman, T., & Brochu, L. (2005). Management of interpretive
sites: Developing sustainable operations through effective management.
Fort Collins, CO: InterpPress.
Moscardo, G. (1999). Making Visitors Mindful: Principles for
Creating Quality Sustainable Visitor Experiences through Effective
Communication. Champaign, IL: Sagamore Publishing.
Otto, R. (1958). The Idea of the Holy (J. W. Harvey, Trans. second
ed.). New York: Oxford University Press.
Powell, R. B., Brownlee, M. T. J., Kellert, S. R., & Ham, S. H.
(2012). From awe to satisfaction: immediate affective responses to the
Antarctic tourism experience. Polar Record, 48(2), 145-156. doi:
10.1017/S0032247410000720
Powell, R. B., Kellert, S. R., & Ham, S. H. (2009).
Interactional theory and the sustainable nature-based tourism
experience. Society and Natural Resources, 28(8), 761-776.
Skibins, J. C., Powell, R. B., & Stern, M. J. (2012). Linking
interpretation best practices with outcomes: A review of literature.
Journal of Interpretation Research, 17(1), 25-44.
Stern, M. J., Powell, R. B., & Hill, D. Environmental education
program evaluation in the new millennium: What do we measure and what
have we learned? Environmental Education Research. doi:
10.1080/13504622.2013.838749
Stokols, D., & Altman, I. (Eds.). (1987). Handbook of
Environmental Psychology. New York: John Wiley & Sons.
Van Winkle, C. M. (2012). The effect of tour type on visitors'
perceived cognitive load and learning. Journal of Interpretation
Research, 17(1), 45-58.
Wearing, S., & Wearing, B. (2001). Conceptualizng the selves of
tourism. Leisure Studies, 20, 143-159.
Williams, K., & Harvey, D. (2001). Transcendent experience in
forest environments. Journal of Environmental Psychology, 21(3), 249-260.
doi: http://dx.doi.org/10.1006/jevp.2001.0204
Zelezny, L. C. (1999). Educational interventions that improve
environmental behaviors: a meta-analysis. The Journal of Environmental
Education, 31(1), 5-14. doi: 10.1080/00958969909598627
Zink, R., & Burrows, L. (2008). "Is what you see what you
get?" The production of knowledge in-between the indoors and the
outdoors in outdoor education. Physical Education and Sport Pedagogy,
13(3), 251-265. doi: 10.1080/17408980701345733
Robert B. Powell
Department of Parks, Recreation and Tourism Management and School
of Agricultural
and Forest Environmental Sciences, Clemson University
Marc J. Stern
Department of Forest Resources and Environmental Conservation,
Virginia Tech
Table 1. Description and mean score of outcomes.
Outcomes                                                      N     Mean   S.D.
Satisfaction                                                  272   8.94   0.64
Visitor experience and appreciation (Cronbach's α = .89)      272   4.41   0.32
  * Made my visit to this park more enjoyable                       4.55   0.30
  * Made my visit to this park more meaningful                      4.49   0.32
  * Enhanced my appreciation for this park                          4.36   0.37
  * Increased my knowledge about the program's topic                4.45   0.34
  * Enhanced my appreciation for the National Park Service          4.27   0.36
Behavioral intentions (Cronbach's α = .94)                    272   2.92   0.64
  * Changed the way I will behave while I'm in this park            2.92   0.67
  * Changed the way I will behave after I leave this park           2.92   0.61
Table 2. Description of context variables.
Audience: Group size * -- Number of total participants. Mean = 48; Median = 17.
Audience: Ratio of children to adults -- Categorized the ratio of children to adults in the audience using a 4-point scale: 1 = Mostly Children; 2 = Even Distribution; 3 = Mostly Adults; 4 = All Adults. Mostly Children = 25 (9%); Even Distribution = 82 (31%); Mostly Adults = 132 (49%); All Adults = 29 (11%).
Program: Duration -- Duration of the interpretive program in minutes. Mean = 49 minutes.
Program: Topic -- Nature-focused, culturally focused, or dual focus. Cultural = 170 (63%); Natural = 70 (26%); Dual focus = 29 (11%).
Program: Type -- Guided walk/tour, activity, demonstration, or talk/slideshow/presentation. Guided walk/tour = 161 (59%); Talk/slideshow/presentation = 98 (36%); Activity = 8 (3%); Demonstration = 5 (2%).
Setting: Urban-remote -- Parks were categorized as urban (within the limits of metropolitan areas with 50,000 or more residents), urban-proximate (outside an urban area, but within a 60-mile radius), or remote (60 miles or more from any metropolitan area). Urban = 91 programs (33%); Urban-proximate = 50 programs (18%); Remote = 131 programs (48%).
Setting: Location -- Indoors, outdoors, or both. Outdoors = 195 (72%); Indoors = 55 (20%); Both indoors and outdoors = 22 (8%).
Resource quality -- Degree to which the resource where the program took place is awe-inspiring or particularly iconic: 1 = Unimpressive/generic; 2 = Pleasant but not iconic; 3 = Contextually iconic or grandiose. Mean = 2.37; Iconic or grandiose = 134 (49%); Pleasant but not iconic = 104 (38%); Unimpressive/generic = 34 (13%).
Intervening variable: Unexpected negative event -- Any unexpected interruptions or emergencies during the program, such as a sudden change in weather, medical emergency, technical difficulties, or hazardous conditions that detracted from the quality of the program: 1 = Occurred; 0 = No issues. Bad weather = 9 (3%); Negative events = 34 (13%).
Intervening variable: Unexpected positive event -- An unexpected experience that occurred during the program, such as seeing charismatic wildlife or other unique phenomena that added significantly to the quality of the experience: 1 = Occurred; 0 = Did not occur. Positive events = 5 (2%).
* Analyses pertaining to group size used all 312 valid programs. Because we deemed programs with 5 or more attendees (n = 272) to be different phenomena from programs with fewer than 5 attendees (n = 40), all analyses pertaining to the other context variables used the sample of programs with 5 or more attendees.
Table 3. Correlation coefficients for programs with mostly adult
audiences (n = 161) vs. those containing an equal or larger
proportion of children (n = 107).
Visitor experience
Characteristic Satisfaction and appreciation
Children Adult Children Adult
Interpreter
characteristics
Audibility .317 ** .104 .290 ** .005
Authentic emotion and .450 ** .403 ** .410 ** .199 *
charisma
Confidence .523 ** .455 ** .386 ** .186 *
False assumption about -.167 -.184 * -.258 ** -.179 *
audience
Humor quality .313 ** .263 ** .382 ** .099
Humor quantity .184 .100 .236 * -.043
Personal sharing .097 -.001 .174 -.068
Responsiveness .302 ** .195 * .267 * .208 **
Program characteristics
Appropriate for audience .404 ** .267 ** .397 ** .313 **
Appropriate logistics .317 ** .038 .396 ** .055
Clear message .312 ** .229 ** .274 ** .101
Connection .403 ** .308 ** .350 ** .180 *
Consistency .374 ** .178 * .316 ** .223 **
Multisensory engagement .182 .240 ** .072 .169 *
Novelty .213 * .080 .090 -.042
Organization .380 ** .359 ** .278 ** .177 *
Physical engagement .075 .078 .214 * .029
Surprise .201 * .101 .193 * .116
Verbal engagement .230 * .227 ** .265 ** .192 *
Behavioral
Characteristic intentions
Children Adult
Interpreter
characteristics
Audibility .215 * .034
Authentic emotion and .203 * .192 *
charisma
Confidence .336 ** .096
False assumption about -.139 -.036
audience
Humor quality .199 * .135
Humor quantity .099 .044
Personal sharing .235 * .101
Responsiveness .000 .087
Program characteristics
Appropriate for audience .365 ** .039
Appropriate logistics .279 ** .104
Clear message .302 ** .167 *
Connection .153 .141
Consistency .028 .064
Multisensory engagement .107 .134
Novelty -.066 .085
Organization .122 .167 *
Physical engagement .187 -.001
Surprise .104 .142
Verbal engagement .162 .170 *
** Significant at p < 0.01
* Significant at p < 0.05
Table 4. T-tests for programs with mostly children vs. mostly
adult audiences.
Satisfaction
Children Adult
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.52 -2.8 ** -0.25 -2.3 *
Appropriate pace 0.73 4.2 ** 0.41 3.1 **
Visitor experience
and appreciation
Children Adult
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.24 -2.5 * -0.06 -1.1
Appropriate pace 0.25 2.6 * 0.18 2.9 **
Behavioral intentions
Children Adult
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.21 -1.6 -0.07 -0.6
Appropriate pace 0.35 2.2 * 0.19 1.3
** Significant at p ≤ 0.01
* Significant at p ≤ 0.05
Table 5. Correlation coefficients for natural (n = 70) vs. cultural
programs (n = 170).
Visitor experience
Satisfaction and appreciation
Characteristic Natural Cultural Natural Cultural
Interpreter
characteristics
Audibility .029 .221 ** .014 .190 *
Authentic emotion and charisma .440 ** .394 ** .294 * .316 **
Confidence .503 ** .437 ** .297 * .270 **
False assumption about -.368 ** -.040 -.273 * -.133
audience
Humor quality .202 .277 ** .150 .248 **
Humor quantity -.024 .217 ** -.093 .198 **
Responsiveness .207 .208 * .319 ** .213 *
Program characteristics
Appropriate for the .458 ** .355 ** .492 ** .351 **
audience
Appropriate logistics .286 * .115 .222 .247 **
Clear message .310 ** .243 ** .212 .201 **
Connection .335 ** .360 ** .311 ** .288 **
Consistency .302 * .271 ** .319 ** .253 **
Multisensory engagement .282 * .244 ** .245 * .109
Novelty .261 * .111 .147 -.069
Organization .266 * .431 ** .276 * .247 **
Sarcasm -.068 .128 -.083 .074
Surprise .174 .130 .161 .134
Verbal engagement .290 * .212 ** .457 ** .177 *
Behavioral intentions
Characteristic Natural Cultural
Interpreter
characteristics
Audibility .056 .120
Authentic emotion and charisma .291 * .070
Confidence .330 ** .112
False assumption about -.206 -.041
audience
Humor quality .204 .131
Humor quantity -.033 .039
Responsiveness .035 .015
Program characteristics
Appropriate for the .269 * .122
audience
Appropriate logistics .252 * .156 *
Clear message .186 .128
Connection .215 .090
Consistency .131 .045
Multisensory engagement .183 .031
Novelty -.029 -.009
Organization .190 .128
Sarcasm -.322 ** -.049
Surprise .261 * .041
Verbal engagement .247 * .089
** Significant at p < 0.01
* Significant at p < 0.05
Table 6. T-tests for cultural vs. natural programs.
Satisfaction
Cultural Natural
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.34 -2.6 * -0.31 -2.1 *
Appropriate pace 0.52 3.8 ** 0.46 2.4 *
Use of props 0.07 0.5 0.13 1.0
Visitor experience
and appreciation
Cultural Natural
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.11 -1.9 -0.11 -1.3
Appropriate pace 0.17 2.5* 0.11 2.2 *
Use of props 0.01 0.1 0.17 2.2 *
Behavioral intentions
Cultural Natural
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging 0.01 0.1 -0.30 -1.9
Appropriate pace 0.29 2.1 * 0.11 0.5
Use of props 0.02 0.2 -0.04 -0.2
** Significant at p < 0.01
* Significant at p < 0.05
Table 7. Correlation coefficients for programs that took place in urban
(n = 91) vs. remote parks (n = 131).
Visitor experience
Characteristic Satisfaction and appreciation
Urban Remote Urban Remote
Interpreter characteristics
Audibility .238 * .159 .267 * .043
Authentic emotion and .415 ** .432 ** -.352 ** .280 **
charisma
Confidence .453 ** .519 ** .264 * .294 **
False assumption about -.096 -.308 ** -.189 -.259 **
audience
Formality -.046 -.132 -.259 * -.086
Humor quality .373 ** .275 ** .355 ** .207 *
Humor quantity .355 ** -.019 .372 ** -.061
Personal sharing -.027 .060 .073 .044
Responsiveness .230 .235 ** .213 .304 **
Program characteristics
Appropriate for the .371 ** .366 ** .391 ** .344 **
audience
Appropriate logistics .186 .162 .307 ** .240 **
Clear message .285 ** .250 ** .267 * .201 *
Connection .394 ** .264 ** .270 * .285 **
Consistency .385 ** .300 ** .347 ** .353 **
Multisensory engagement .316 ** .076 .076 .066
Novelty .276 ** .084 .127 -.082
Organization .466 ** .307 ** .239 * .245 **
Sarcasm .290 ** .007 .259 * -.070
Surprise .109 .197 * .068 .190 *
Verbal engagement .285 ** .190 * .279 ** .199 *
Behavioral
Characteristic intentions
Urban Remote
Interpreter characteristics
Audibility .163 .000
Authentic emotion and .069 .262 **
charisma
Confidence .191 .265 **
False assumption about -.039 -.176 *
audience
Formality .100 -.039
Humor quality .198 .141
Humor quantity .163 -.027
Personal sharing -.024 .107
Responsiveness .123 .120
Program characteristics
Appropriate for the .165 .233 **
audience
Appropriate logistics .233 * .167
Clear message .107 .202 *
Connection .080 .154
Consistency .095 .022
Multisensory engagement .047 .194 *
Novelty -.025 -.077
Organization .178 .148
Sarcasm .051 -.214 *
Surprise -.150 .278 **
Verbal engagement .047 .147
** Significant at p < 0.01
* Significant at p < 0.05
Table 8. T-tests for programs that took place in urban vs. remote parks.
Satisfaction
Urban Remote
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.57 -3.5 ** -0.35 -3.0 **
Appropriate pace 0.46 2.2 * 0.43 3.4 **
Visitor experience
and appreciation
Urban Remote
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.23 -2.5 * -0.10 -1.5
Appropriate pace 0.19 1.8 0.23 3.2 **
Behavioral intentions
Urban Remote
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.06 -0.4 -0.21 -1.8
Appropriate pace 0.39 1.9 0.14 1.1
** Significant at p < 0.01
* Significant at p < 0.05
Table 9. Correlation coefficients for indoor (n = 55) vs. outdoor
(n = 195) programs.
Visitor experience
Characteristic Satisfaction and appreciation
Indoor Outdoor Indoor Outdoor
Interpreter characteristics
Audibility .052 .236 ** .254 .152 *
Authentic emotion and .284 * .442 ** .221 .266 **
charisma
Confidence .273 * .551 ** .093 .337 **
False assumption about -.278 * -.163 * -.302 * -.189 *
audience
Humor quality .145 .330 ** .092 .222 **
Responsiveness .284 .194 ** .183 .195 **
Program characteristics
Appropriate for the audience .330 * .375 ** .214 .368 **
Appropriate logistics .284 * .118 .427 ** .148 *
Clear message .345 * .217 ** .124 .116
Consistency .125 .290 ** -.080 .338 **
Connection .286 * .332 ** .117 .242 **
Multisensory engagement .145 .196 * -.188 .113
Novelty .045 .192 ** -.164 .068
Organization .273 * .385 ** -.098 .297 **
Physical engagement -.266 * .120 -.296 * .141 *
Sarcasm .068 .098 -.078 .043
Surprise .063 .174 * -.013 .179 *
Verbal engagement .025 .228 ** -.008 .182 *
Behavioral
Characteristic intentions
Indoor Outdoor
Interpreter characteristics
Audibility .134 .097
Authentic emotion and .119 .180 *
charisma
Confidence .017 .199 **
False assumption about -.049 -.103
audience
Humor quality .115 .132
Responsiveness .049 .037
Program characteristics
Appropriate for the audience .149 .112
Appropriate logistics .190 .126
Clear message .279 * .131
Consistency -.099 .041
Connection .248 .055
Multisensory engagement -.107 .178 *
Novelty -.054 .024
Organization .001 .142 *
Physical engagement -.125 .080
Sarcasm -.003 -.210 **
Surprise .047 .141 *
Verbal engagement .023 .139
** Significant at p < 0.01
* Significant at p < 0.05
Table 10. T-tests for indoor (n = 55) vs. outdoor (n = 195) programs.
Satisfaction
Indoor Outdoor
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.58 -2.5 * -0.18 -1.7
Appropriate pace 0.36 1.3 0.61 5.2 **
Visitor experience
and appreciation
Indoor Outdoor
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.20 -1.7 -0.01 -0.3
Appropriate pace 0.14 0.9 0.22 3.9 **
Behavioral intentions
Indoor Outdoor
Mean Mean
Program characteristics diff. t diff. t
Fact-based messaging -0.32 -1.6 -0.03 -0.3
Appropriate pace -0.1 -0.3 0.25 2.1 *
** Significant at p < 0.01
* Significant at p < 0.05
Table 11. Program and interpreter characteristics with different
relationships to outcomes in different contexts.
More children in the audience -- Satisfaction: Appropriate logistics (+); Audibility (+). Visitor experience and appreciation: Appropriate logistics (+); Audibility (+); Humor quality (+); Humor quantity (+). Behavioral intentions: Confidence (+); Appropriate for audience (+).
Nature-focused programs -- Satisfaction: False assumption about the audience (-). Visitor experience and appreciation: False assumption about the audience (-). Behavioral intentions: Sarcasm (-).
Culturally focused programs -- Satisfaction: Humor quantity (+). Visitor experience and appreciation: Humor quantity (+).
Urban parks -- Satisfaction: Audibility (+); Sarcasm (+); Humor quantity (+); Multisensory (+). Visitor experience and appreciation: Audibility (+); Sarcasm (+); Humor quantity (+).
Remote parks -- Visitor experience and appreciation: Appropriate pace (+). Behavioral intentions: Surprise (+); Sarcasm (-).
Indoor programs -- Satisfaction: Physical engagement (-). Visitor experience and appreciation: Physical engagement (-).
Outdoor programs -- Satisfaction: Physical engagement (+); Appropriate pace (+). Visitor experience and appreciation: Confidence (+); Consistency (+); Organization (+); Appropriate pace (+); Physical engagement (+). Behavioral intentions: Multisensory (+).