The ultimate goal of higher education internationalization efforts can be stated fairly simply: to equip students with an awareness of and respect for cultural differences and, ideally, an eagerness to engage and collaborate effectively with people from other parts of the world. ‘The call to integrate intercultural knowledge and competence into the heart of education,’ the authors of a recent AAC&U report similarly write, is ‘an imperative born of seeing ourselves as members of a world community, knowing that we share the future with others.’

 

But how does an institution of higher education assess whether its students have acquired the ‘global’ and ‘intercultural’ competence (ICC) alluded to in such statements? The present article describes some recent ways in which scholars have addressed this question, giving special attention to issues surrounding the establishment of an assessment regimen and the selection of testing instruments.

 

With regard to starting points, a review of the literature on these subjects shows the immense importance that scholars place on the fashioning of intended, and measurable, learning outcomes. As Betty Leask writes, this is the first, indispensable step of any assessment activity (Leask, 53). Furthermore, an assessment system may be envisioned as a series of outcome ‘alignments’ (Judd and Keith). That is to say, institutional-level outcomes, as articulated in mission statements and general education requirements, should find reinforcement in the student learning outcomes produced at the program level (e.g., those pertaining to a major, discipline, concentration, or specific courses). Finally, there are indications of growing interest in this area, as witnessed in the ACE’s latest Mapping Internationalization report: According to the editors, ‘nearly two-thirds’ of the US institutions surveyed ‘have specified international or global student learning outcomes for all students, or for students in some schools, departments, or programs’ (Helms et al., 14).

 

Samples of general ICC learning outcomes (with rubrics to match) that can be used to initiate an assessment regimen are available from organizations such as NAFSA and the AAC&U. In ‘Measuring and Assessing Internationalization’, NAFSA scholar Madeleine F. Green surveys, for example, the work done on this front at institutions around the country. Typical are the outcomes adopted by California State University, Stanislaus, which include the following: ‘Students will demonstrate the ability to perceive any given event from more than one cultural viewpoint’, and ‘Students will show how the behavior of individuals, groups, and nations affect others, in terms of human rights and economic well-being’ (Green, 15).

 

Similar aims are expressed in the AAC&U’s ‘Intercultural Knowledge and Competence VALUE Rubric’, another important assessment resource. The authors of this document begin by identifying a set of intercultural competence outcomes in the areas of knowledge (‘both of one’s own and other cultural frameworks’) as well as skills (‘empathy, verbal and written communication, curiosity and openness’). They then articulate the rubrics, or ‘fundamental criteria for each learning outcome, with performance descriptors demonstrating progressively more sophisticated levels of attainment’ (AAC&U, Intercultural Knowledge and Competence VALUE Rubric). The rubric for cultural self-awareness is reproduced below, from the most to the least sophisticated level of attainment:

Knowledge: Cultural self-awareness

Capstone (4): Articulates insights into own cultural rules and biases (e.g., seeking complexity; aware of how her/his experiences have shaped these rules, and how to recognize and respond to cultural biases, resulting in a shift in self-description).

Milestone (3): Recognizes new perspectives about own cultural rules and biases (e.g., not looking for sameness; comfortable with the complexities that new perspectives offer).

Milestone (2): Identifies own cultural rules and biases (e.g., with a strong preference for those rules shared with own cultural group and seeks the same in others).

Benchmark (1): Shows minimal awareness of own cultural rules and biases (even those shared with own cultural group(s)) (e.g., uncomfortable with identifying possible cultural differences with others).
Reprinted with permission from ‘VALUE: Valid Assessment of Learning in Undergraduate Education.’  Copyright 2018 by the Association of American Colleges and Universities. https://www.aacu.org/value.

The authors go on to make the important point that ‘the rubrics are intended for institutional-level use in evaluating and discussing student learning’. They add that ‘the core expectations articulated in all 15 of the VALUE rubrics’ should then be ‘translated into the language of individual campuses, disciplines, and even courses’ (AAC&U, Intercultural Knowledge and Competence VALUE Rubric). As these words imply, institutional outcomes should inform the assessment work done at the program level. Decisions would consequently need to be made regarding which majors, disciplines, and so on would be used to assess student performance against these outcomes, and how the outcomes would be incorporated into course syllabi.

 

Examples of how this might be accomplished are provided in the work of Darla Deardorff, a leading figure in the field of ICC assessment. She writes in one such piece that ‘measurable outcomes under the general goal of “understanding others’ perspectives”’ might be articulated in the learning outcomes of a given course or program of study as follows: ‘“By the end of the program, learners can articulate two different cultural perspectives on global warming” or “By the end of this class, learners can define what a world view is and three ways in which it impacts one’s behavior”’ (Deardorff 2009, 482). Deardorff adds that ‘writing specific outcomes statements (learning objectives) related to aspects of intercultural competence and developing indicators of the degree to which the statements can be assessed remains an area in need of further research and work, especially within specific fields such as engineering, policing, and health care’ (Deardorff 2009, 482).

 

With regard to the choice of assessment instruments, scholars consistently advocate multiple forms of measurement. Deardorff cites, for example, the results of a recent survey of mainly US faculty and administrators which found that ‘an average of five different ICC assessment methods [were] used per institution’ (Deardorff 2006, 250). These typically involved a combination of direct (e.g., based on a particular assignment) and indirect (e.g., based on surveys and focus groups) forms of assessment, both qualitative and quantitative in nature. In the case of the latter, data are often gathered through ‘Likert-type items’, which ‘ask the respondents to rate their agreement with a given statement on a scale that ranges from one extreme to another (e.g., strongly agree to strongly disagree)’ (Griffith et al., 15); the Intercultural Development Inventory (IDI) is a widely used instrument of this kind.

 

The preceding sections have discussed ways in which ICC learning outcomes and assessment procedures may be ‘aligned’ on an institution-wide basis. Such a venture necessarily involves significant commitment and collaboration on the part of administrators and faculty. Although internationalization advocates see such comprehensive ventures as the optimal way forward, readers will also find a large body of studies describing more limited, yet still valuable, ICC assessment initiatives, such as those intended to test the efficacy of specific programs or ventures.

 

A valuable illustration of efforts of this kind is provided by Adriana Medina-López-Portillo of the University of Maryland, Baltimore County, who describes in one recent article a study undertaken by her department to assess the effectiveness of its study abroad programming. Specifically, the author and her colleagues sought ‘to measure and describe changes in the intercultural sensitivity of University of Maryland students who would be studying abroad in two different language-based programs of differing lengths’ (Medina-López-Portillo, 182). Much in keeping with what has been written above, this study employed multiple assessment instruments, both quantitative, as in the use of the IDI, and qualitative, as in the case of pre- and post-program interviews and questionnaires (Medina-López-Portillo, 183).

 

In summary, higher education internationalization advocates contend that intercultural competence is something that can be measured. It is, furthermore, an area of activity that is making headway on American campuses, as indicated in the ACE’s latest Mapping report, among other sources. Suffice it to say, ICC assessment is likely to remain an important subject for the foreseeable future and a frequent topic of AHEA research updates.

 

 

__________________________

 

Works Cited

 

Deardorff, Darla K. ‘Identification and Assessment of Intercultural Competence as a Student Outcome of Internationalization’, Journal of Studies in International Education 10, 3 (2006), pp. 241-266.

Deardorff, Darla K. ‘Implementing Intercultural Competence Assessment’, in Darla K. Deardorff, ed., The SAGE Handbook of Intercultural Competence (London: Sage, 2009), pp. 477-491.

Green, Madeleine F. ‘Measuring and Assessing Internationalization’, NAFSA: Association of International Educators (2012), pp. 1-26.

Griffith, Richard L., Wolfeld, L., Armon, B. K., Rios, J., and Liu, O. L. ‘Assessing Intercultural Competence in Higher Education: Existing Research and Future Directions’, ETS Research Report Series 2016, 2, pp. 1-44.

Helms, Robin, Brajkovic, Lucia, and Struthers, B. Mapping Internationalization on US Campuses: 2017 edition (Washington, DC: American Council on Education, 2017).

Judd, Thomas, and Keith, Bruce. ‘Student Learning Outcomes Assessment at the Program and Institutional levels’, in Charles Secolsky and D. Brian Denison, eds., Handbook on Measurement, Assessment, and Evaluation in Higher Education (Abingdon: Routledge, 2011), pp. 31-46.

Leask, Betty. Internationalizing the Curriculum (New York: Routledge, 2015).

Medina-López-Portillo, Adriana. ‘Intercultural Learning Assessment: The Link Between Program Duration and the Development of Intercultural Sensitivity’, Frontiers: The Interdisciplinary Journal of Study Abroad 10 (2004), pp. 179-199.