
Open Education | Patrick Blessinger and TJ Bliss (eds.)

13. Open Assessment Resources for Deeper Learning

David Gibson, Dirk Ifenthaler and Davor Orlic

Abstract

This chapter outlines the design concepts for the creation of a global Open Assessment Resources (OAR) item bank with integrated automated feedback and scoring tools for Open Educational Resources (OER) that will support a wide range of assessment applications, from quizzes and tests to virtual performance assessments and game-based learning, focused on promoting deeper learning. The concept of “promoting deeper learning” captures the idea that authentic assessment is fundamental to educative activity and the concept of “item bank” captures the idea of reusability, modularity and automated assembly and presentation of assessment items. We discuss the different assessment structures, processes and quality measurements across various types of assessments and outline how a globally distributed technology infrastructure aligned with and linked to OER could help advance education worldwide. Six core operational services of higher education service delivery — content, interaction, assessment, credentialing, support and technology — are used as a foil for the discussion and analysis of the changes in brand differentiators in these services, which are emerging due to OER and can be enhanced with OAR.


Introduction

Imagine a tutor or sessional instructor anywhere in the world who wishes to know something about what students know and can do. Knowing about Open Assessment Resources (OAR), the instructor visits a repository that is linked to many sites frequented by instructors and instructional designers. The website links existing OER activities with open assessment resource activity-prompts for online student responses. Within the assessment component of a selected OER, the instructor finds a searchable data bank of concepts linked to core content and activities related to what is being taught. Assessment activity-prompt packages can be created, modified or found, and then used by the instructor and students, cross-linked with the OER. Alternatively, one can start by searching any OER for an assessment of a transferable skill (e.g. leadership, collaboration, problem-solving). As instructors in new contexts modify the OER over time, the associated open assessment resources developed in that context remain linked to and responsive to those changes.

Some of the assessment activity-prompts require short answers; others require the student to construct something. A few require several steps of a process and collaborative processes over a period of time. At the end of the search and curriculum construction process, a link is received which can be shared with students (e.g. on Twitter, other social media, or embedded in their online course or unit homepage). Students visit the link and interact with their tutor’s creation, which may take from a few minutes to several days. Their individualized interactions are automatically stored, analyzed, visualized and narrated in reports. Automated interventions and help suggestions guide students to explore, think, create, interact, solve and respond, and based on what they do, the products they create and the resources they use, ongoing and final reports are emailed or channeled to them and their tutor. The visualizations and text of the report diagnose current status compared to a variety of cohorts selected by the instructor and make recommendations for “next steps” and “additional activities” concerning the concepts selected by the tutors. This is our vision of a globally networked formative open assessment resource network that can mine the social and intellectual creativity of the world’s front line of teaching, and can learn from instructors as well as their students.

Formative assessment purposes such as these are typically low stakes (e.g. ungraded, advisory in nature) and are focused on helping the learner to perform and achieve (e.g. to aid in acquiring knowledge and skills). Summative assessment purposes, in contrast, are high stakes (e.g. success or failure of a unit or course by an individual, obtaining a credential or license) and focused on making a decision that classifies the learner (e.g. as a “B” student, as a licensed practitioner). The Open Assessment Resources (OAR) framework proposed here delivers these new capabilities to the instructor for formative, low stakes, rapid feedback while also providing a new global infrastructure for improving summative assessment.

Open Educational Resources, according to the William and Flora Hewlett Foundation, are:

[…] teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge.1

With an emphasis on free access, OER has taken “content” off the table as a brand differentiator for higher education institutions (Atkins, Brown, and Hammond, 2007; Conrad, Mackintosh, McGreal, Murphy and Witthaus, 2013). What does a typical higher education institution have to offer in the way of paid content that cannot be freely accessed from the top universities in the world or directly from the primary source of the information? While there might be some areas of unique content that are not yet in OER, increasing quantities of the general curriculum and a great many advanced courses are in the public domain in OER repositories (Robertson, 2010; Wilson, Schuwer, and McAndrew, 2010). The rush into and hype concerning Massive Open Online Courses (MOOCs) has helped to bring this fact to life and has shrunk the pool of differentiators further by bringing learning interactions, including assessment, into the open (Pappano, 2012). Are the remaining core operational services of higher education (credentialing, support and technology, according to Anderson and McGreal, 2012) reconfigurable into a new business model if content, interactions and assessment cease to be primary services?

Perhaps the answer to this question is one of the barriers to OER uptake in higher education, which has been slowed in no small measure by a lack of clarity concerning formative versus summative assessment, certification and accreditation.

Institutional participation in the development and use of OER has been low, with few institutions indicating that they either produce or use OER. Even fewer institutions have implemented open courses for assessment and accreditation. (Conrad et al., 2013)

Perhaps institutions are resisting OER because they focus on the problems of summative assessment, which prevents them from embracing its formative assessment possibilities.

In response to this context, this chapter focuses on formative feedback, which can play a critical role in formal assessment systems. Wagner and Wagner (1985) consider feedback to be any type of information provided to learners, and Schimmel (1983) found that feedback is most effective under conditions that encourage the learner’s conscious reception and engage the learner in reflecting on the response. Such feedback focuses on improvement information and usually implies multiple attempts at performance, because without a second chance to perform, feedback cannot be formative for improvement. Formative feedback is “low stakes” and remains at a distance from certification and accreditation, which rely almost exclusively on “high stakes” summative judgements of academic achievements that result in a determination of status (Harlen and James, 1997). The core idea proposed here is that an open assessment resources (OAR) approach has the potential to increase trust in and use of OER in formal educational systems by adding clarity about assessment purposes and targets in the open resources world. The OAR framework outlined here makes use of the full range from human-scored and human-produced feedback to semi- and fully-automated forms of feedback. Semi-automated feedback approaches include humans and machines working together to make complex judgments, systems that remain open to human shaping and correction after initial machine learning training, and gamification techniques where assessment feedback is embedded within the learning experience.
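As a minimal sketch of this semi-automated pattern, written in the same JavaScript register as the analytics snippet later in the chapter, the machine can propose a score and feedback, a human reviewer can accept or override the proposal, and overrides can be retained as future training examples. All function and field names below are invented for illustration and are not part of any existing OAR implementation.

// Minimal sketch of semi-automated formative feedback: the machine proposes a score,
// a human reviewer may override it, and overrides are kept as future training examples.
// All names are hypothetical; this is not a defined OAR interface.
var trainingSet = [];

function machineScore(response) {
  // Stand-in for a trained scoring model.
  return { score: response.text.length > 50 ? 3 : 1, feedback: 'Consider elaborating your reasoning.' };
}

function reviewResponse(response, humanDecision) {
  var proposal = machineScore(response);
  var overridden = humanDecision && humanDecision.override;
  if (overridden) {
    trainingSet.push({ response: response.text, label: humanDecision.score }); // future training example
  }
  return {
    score: overridden ? humanDecision.score : proposal.score,
    feedback: proposal.feedback,
    allowRetry: true // formative: the learner gets another chance to perform
  };
}

// Example: the reviewer accepts one machine proposal and overrides another.
console.log(reviewResponse({ text: 'Short answer.' }, null));
console.log(reviewResponse({ text: 'A longer, fully elaborated answer with justification.' }, { override: true, score: 4 }));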

OAR-supported generalized formative feedback is also distinguished from the highly personalized feedback approaches of adaptive assessments and adaptive curriculum, both of which are increasingly playing a role in institutional practices. Personalized adaptive curriculum and assessment approaches require a tight alignment and control of content to provide personal recommendations based on a learner profile and computational matching algorithms that trigger appropriately tagged alternative learning experiences and interactions (Ifenthaler, 2015). The personalized adaptive approaches are hard to federate across varied institutions (e.g. in the sense of a group of providers agreeing upon standards of operation in a collective fashion, as in information science), especially as they are integrated into locally unique educative experiences as part of the value propositions of the higher education institution. In addition, personalization involves several challenging ethical dimensions such as privacy of information, security of data and validation of achievement of individual students (Ifenthaler and Widanapathirana, 2014).

In contrast, OAR assessments with generalized formative feedback are aligned with a specific educative purpose expressed by some user of a specific OER towards the utility and expectations for using that OER to achieve an educational outcome. The generalization of feedback can follow anonymous crowd behavior (e.g. common misconceptions, common pathways of performance) in the OER rather than individualized behavior. The OAR framework does not add the complexity of a particular student and the availability of a bank of appropriately meta-tagged alternative learning experience options, leaving this challenge for other developers to address through the OAR application programming interface (API). Rather, the OAR approach is focused on a few high-level assessable outcomes (e.g. collaboration, problem-solving, communication, creativity) and the feedback (e.g. recommendations for improved performance, prompts for further elaboration of ideas, suggestions for alternatives) that pertains to supporting and achieving these outcomes within a specific OER, with fewer ethical challenges. The higher-level deeper learning outcomes are valued by many, are broadly agreed upon as worthy aims of education and, if appropriately supported and scaffolded by the proposed OAR technology, can be shared and federated. The mechanics of the OAR evidence model comprise federated algorithms that capture expert domain knowledge as well as crowd behavior and are then used to generate automated feedback, recommendations and decisions within the learning object world of the specific OER. See the Architecture of OAR section below for a detailed description of the instantiation of alignment, focus and agreement in the assessment outcomes.

We do not address here all of the challenges of assessing deep learning processes (e.g. collaborative problem-solving, creativity, analysis, self-regulation, metacognition), as distinguished from lower level objectives such as remembering, understanding and applying knowledge in some specific field (Anderson, Krathwohl, and Bloom, 2001). The stance taken here is that any area of authentic academic or professional performance can be appropriately documented and measured when there is professional agreement about what someone knows and can do when a level of performance is in evidence. We further assert that “the machine”, by which we mean the globally cloud-sourced distributed intelligence of humankind facilitated by network technologies and computational resources, can play an appropriate and increasingly sophisticated role in network-based educational assessment. These challenges are not insurmountable, but here we are focusing on the broad objective of the framework to create a globally relevant, emergent and continuously improving assessment activity item bank linked to specific OERs with integrated automated feedback and scoring tools.

The OAR system will support a wide range of assessment applications, from quizzes and tests to virtual performance assessments and game-based learning, focused on promoting deeper learning. The concept of “assessment activity” expresses the idea that authentic assessment is fundamental to educative activity, and the concept of “item bank” implies reusability, modularity and automated assembly and presentation of assessment items.

Background

Assessment in the context of Open Educational Resources has been discussed primarily as a matter of summative accreditation and credentials (Conrad et al., 2013). Here, we use that discussion as a context to introduce the social and cognitive benefits of rapid, scalable, formative feedback at a global level.

Future trends in global education predict a migration of the six core services provided by higher education, that is, content, interaction, assessment, credentialing, support and technology (Anderson and McGreal, 2012), into new configurations within as well as outside of higher education. Some services will divide into free offerings, some into globally shared resource spaces and some into sharper focus as specialized core competencies in basic research, the application of knowledge and excellence in teaching and learning, following the global trend toward unbundling the corporation’s three primary functions of finding customers, serving them with content and operating the organization (Hagel and Singer, 1999). We envision these migrations of service delivery options occurring in two complementary trends as higher education institutions strive for global reach and to differentiate themselves from others: one aimed at scale supported by lower interaction costs and the other aimed at uniqueness and brand differentiation driven by a complex system of history, reputation, outcomes and impacts (Table 1).

The trends of scale and uniqueness are not antithetical, but are instead integral to the role of higher education in society as one of the pillars of the advancement of knowledge and the economy. Developing higher educational experiences that are unique to one institution and yet can scale to the world implies a broad conception of quality because a higher education institution’s reputation rests on the quality of its offerings, interactions, and products, and includes the quality of its research productivity, excellence of teaching, the perceptions and ratings that impact world ranking, employer satisfaction ratings and the institution’s impacts on societal and cultural advancement (Sheehan and Stabell, 2013). An institution’s contributions to the world include advancing knowledge and helping to meet the global demand for accessible education, which ultimately demonstrates its considerable influence and power to improve living conditions and its social and economic impacts through sustainable development activities in all fields of knowledge (van Vught, 2008).

Within the six-services context, we propose the Open Assessment Resources (OAR) model of free automated formative assessments (and free support for semi-automated and fully human formative assessments) to advance the trend toward scale. To advance the trend toward uniqueness by creating a new common ground of deeper learning, which allows universities to focus on higher levels in terms of their specialities, we propose to focus the OAR on deep learning processes (e.g. collaborative problem-solving, creativity, self-regulation, metacognition) that transfer from specialized fields into broader contexts, which are to be distinguished from other objectives of assessment such as acquiring knowledge and applying it in a specific field of knowledge. Several organizations — Hewlett Foundation, Educause, Education Week, Alliance for Excellent Education and others — have used the term “deeper learning” as a way to highlight higher order learning skills. The Hewlett Foundation (2010) identifies deeper learning with five groups of abilities:

  • Mastering core academic content;
  • Critical thinking and problem solving;
  • Working collaboratively;
  • Communicating effectively; and
  • Learning how to learn independently.

In the next section, we discuss new mechanisms and leverage points for embedding these deeper learning abilities across the six-services model, while pointing out major strategies for utilizing the OAR technology to simultaneously achieve scale and uniqueness.

OAR and the Core Services of Higher Education

The intersection of the six core services of higher education with the two trends of scale and uniqueness provides a structure for OAR interactions that will be elaborated in this section. The costs to learners in the OAR model vary from free, low and medium to high across the six core services, depending on options within the trends of scale (an institution’s need to achieve sustainable scale) and uniqueness (an institution’s need to build and maintain brand differentiation).

In the next sections we present details of this broad outline. We will work backwards with “the end in mind” by starting with the concepts of automated and semi-automated formative assessments and the architecture of OAR. Then we will discuss each of the six core services and include the contexts of the trends toward scalability and uniqueness. Finally, we will conclude by bringing the OAR model back into the larger context of higher education worldwide, with implications for various next steps in research and development.

Table 1. Six dimensions of higher education services with two trends: scale and uniqueness


Automated and Semi-automated Formative Assessment

Automation and semi-automation (e.g. humans and machines working together) to provide feedback, observations, classifications and scoring are increasingly being used to serve both formative and summative purposes. For example, in teaching and testing writing skills, results from a comparison of automated essay scoring applications (Shermis and Hamner, 2012) demonstrated that “scoring was capable of producing scores similar to human scores for extended-response writing items with equal performance for both source-based and traditional writing genre” (p. 2). The report concluded that, “As a general scoring approach, automated essay scoring appears to have developed to the point where it can be reliably applied in both low-stake assessment (e.g. instructional evaluation of essays) and perhaps as a second scorer for high-stakes testing” (p. 27). The scalability of OER provides a great opportunity for large numbers of training samples and human judgment to be combined at a global level.

Extending beyond writing and other basic issues of human learning and performance, an international group of researchers has been developing the technology and tools for a highly integrated model-based assessment platform for assessing the acquisition and development of complex cognitive skills (Al-Diban and Ifenthaler, 2011; Ifenthaler, 2010, 2014; Pirnay-Dummer, Ifenthaler, and Spector, 2010). In addition, a global workgroup co-founded by UNESCO and a collaboration of national technology in education entities — EDUsummIT — has devoted its biannual summits since 2006 to a range of topics connected to assessment, deeper learning, and the use of emerging technologies to improve education throughout the world. One of the summit’s discussion groups has published analyses and evidence-based position papers on the role of technology in assessment (Gibson and Webb, 2013, 2015; Webb and Gibson, 2015; Webb and Gibson, 2011; Webb, 2011).

Architecture of OAR

In this section, we outline a model architecture for the OAR framework. The architecture supports the inclusion of assessment materials linked to specific OER learning materials and provides a high-level road map to instantiate, pilot and validate the system with large-scale providers of OER resources and services (Figure 1).

The overall concept of the solution is a bottom-up approach: simple scripts and snippets are added to OER sites and linked to a powerful server-side analytics platform. In this way the system provides cross-site functionality to every participating site, creating a network of interconnected OER sites.

Figure 1 shows the high-level architecture for such a solution. In the middle are various OER sites, each of which installs a few simple scripts to supply the server-side analytics platform with data for the analytics. Two streams of analytics services will be implemented there:

  • Server-side off-line content analytics (colored red in the figure); and
  • Server-side real-time user modelling (colored green in the figure).

Both services will provide back to the OER sites information about (i) the user and their learning model, which will be used for learning personalization across the sites; (ii) cross-recommended content related to the user’s current learning status and predicted learning paths; (iii) semantically structured information from automated and semi-automated processes that meta-tag the content, which will be used by OER repositories for additional content preparation; and (iv) validation feedback on the OAR assessment.
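By way of illustration only, the information returned to an OER site by these services could be packaged as a single JSON object along the lines of the sketch below; the field names are hypothetical and simply mirror items (i) to (iv) above rather than any defined OAR schema.

// Hypothetical shape of the payload the analytics services might return to an OER site,
// mirroring items (i)-(iv) above. Field names are illustrative, not a defined OAR schema.
var analyticsPayload = {
  learnerModel: {                       // (i) user and learning model for cross-site personalization
    anonymousId: 'u-12345',
    masteredConcepts: ['fractions', 'ratios'],
    predictedPath: ['proportional-reasoning']
  },
  recommendedContent: [                 // (ii) cross-recommended content for the current learning status
    { oerUrl: 'https://example.org/oer/ratios-intro', reason: 'predicted next step' }
  ],
  semanticTags: ['mathematics', 'ratio', 'problem-solving'],   // (iii) automated meta-tagging
  assessmentValidation: {               // (iv) validation feedback on the OAR assessment itself
    itemId: 'oar-987',
    agreementWithHumanScores: 0.86
  }
};

console.log(JSON.stringify(analyticsPayload, null, 2));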

Figure 1. Model Architecture for Open Assessment Resources Integration with Open Educational Resources. ASR = automatic speech recognition; MT = machine translation

Below is an example of a simple learning analytics script embedded on an OER site:

// Load the remote analytics script, then start the tracker shortly afterwards.
$.ajax({
  url: 'http://log2.quintelligence.com/qlog.js', type: 'get', dataType: 'script', cache: true,
  success: function () { setTimeout(function () { quintTracker(3); }, 100); }
});

The OAR offers innovative technology elements that will integrate the currently scattered use of many OER sites across the globe and make those sites act as an innovative learning environment. Current OER repositories have objects that utilize various kinds of interoperable frameworks.

The OAR solutions that will be offered to OER sites, and their elements, are listed below (a configuration sketch follows the list):

  • Cross-site: providing technologies to transparently accompany and analyze users across sites;
  • Cross-domain: providing technologies for cross-domain content analytics;
  • Cross-modal: providing technologies for multimodal content understanding;
  • Cross-language: providing technologies for cross-lingual content recommendation;
  • Cross-cultural: providing technologies for cross-cultural learning personalization;
  • Cross-social: providing technologies for social network activities; and
  • Cross-assessment: technologies for cross-site assessment of the impact of OER materials on learning (e.g. population performance metrics).
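As a sketch only, a participating OER site might declare which of these services it opts into when embedding the OAR script; the configuration keys below are hypothetical and simply mirror the list above.

// Hypothetical opt-in configuration an OER site might pass alongside the embed script.
// Keys mirror the cross-* services listed above; none of these names are a defined API.
var oarSiteConfig = {
  siteId: 'example-oer-repository',
  services: {
    crossSite: true,        // follow and analyze users across cooperating sites
    crossDomain: true,      // cross-domain content analytics
    crossModal: false,      // multimodal content understanding
    crossLanguage: true,    // cross-lingual content recommendation
    crossCultural: false,   // cross-cultural learning personalization
    crossSocial: true,      // social network activities
    crossAssessment: true   // cross-site impact metrics
  }
};

console.log('Enabled OAR services:', Object.keys(oarSiteConfig.services).filter(function (key) {
  return oarSiteConfig.services[key];
}));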

The development of the network will follow a waterfall model, with early versions concentrating on engaging users by providing them with information about different OERs that match their interests and learning needs, and by linking them with other learners who may be suitable discussants, either as equals, as advisors or as advisees. The project will track a user’s learning progress and use that to feed an analytics engine driven by state-of-the-art machine learning that can improve recommendations through better understanding of users, their progress and goals, and hence their match with knowledge resources of all types. The project will run a series of pilot case studies that enable the measurement of the broader goals of delivering a useful and enjoyable educational experience to learners in different domains, at different levels and from different cultures.

Six Dimensions of OAR Impacts on Higher Education Services

With the rationale and architecture of OAR in mind, the next section discusses the major impacts on the six core operational services of higher education institutions.

Dimension 1: Content

A major impact of OER is that content is free and widely available. Content therefore does not in and of itself constitute a point of differentiation among higher education institutions for a great many discipline areas. For example, one can study accounting anywhere in the world from any institution and be fairly well assured of acquiring a common foundation of knowledge with transferable skills and certifications. OER ideally extends that accessibility to many more fields of knowledge. The end point of the global accessibility of OER content, when fully implemented, is that a person can study and interact with learning materials on any subject in any field of knowledge from anywhere at any time.

OAR adds value to OER’s openness and accessibility by assuring that the learner has acquired or can demonstrate capability with new knowledge. What OER does for content, OAR does for the assurance of what a student knows and can do. OER includes learning resources such as lecture notes and videos of lectures, online learning materials, printed study guides produced by the institution or licensed third-party copyright materials. OAR creates an assessment context for a specific purpose of those specific OER learning resources — an OER-OAR pairing — by adding a prompt to the learner, a specific assessment task to use during or after interacting with the resources, and feedback based on the performance of the assessment task. The assessment purpose, task and feedback package is a specific kind of content uniquely tied to the OER resource for a specific context of use. Multiple OARs for any particular OER (many pairings) make it possible for the OER to serve different learning purposes and provide evidence to the student as well as instructor that the intended learning objectives of an OER-OAR pair were met to some standard of observable performance.
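To make the idea of an OER-OAR pairing concrete, the sketch below shows one hypothetical way such a package (a prompt, an assessment task and feedback rules tied to a specific OER) could be represented; the structure and names are illustrative only, not a proposed standard.

// Illustrative representation of one OER-OAR pairing: a prompt, an assessment task
// and feedback rules tied to a specific OER. Names and structure are hypothetical.
var oerOarPair = {
  oerUrl: 'https://example.org/oer/intro-to-accounting',
  purpose: 'collaborative problem-solving',   // targeted deeper learning outcome
  prompt: 'With a partner, reconcile the sample ledger and explain each adjustment.',
  task: { type: 'constructed-response', maxAttempts: 3, timeLimitMinutes: 45 },
  feedbackRules: [
    { when: 'missing-justification', say: 'Explain why each adjusting entry is needed.' },
    { when: 'complete', say: 'Well reasoned. Try the extension activity on accruals.' }
  ],
  license: 'CC BY 4.0'
};

console.log(oerOarPair.purpose + ' assessment for ' + oerOarPair.oerUrl);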

The trend of scalable content is supported by the use of OER in courses and units; for example by lowering the cost of production of content for online courses (Conrad et al., 2013). The OAR framework supports scalability of assessment as instructors re-use the OER-OAR package with or without modifications. Allowing local remixing and relicensing of OER-OAR by content producers of unique, locally validated research knowledge supports the trend of unique content.

Dimension 2: Interaction

Content is inert until a learner comes into contact with it, so interaction is key to engagement and learning, as implied by psychological theory (Carson, 1969; Chamorro-Premuzic and Furnham, 2004). In addition to learner-content interaction, experts such as instructors, mentors, researchers and tutors are typically part of a higher education class experience. Peer-based and social group learning can also play a role. From the viewpoint of the OAR framework, all of these approaches can be maintained and scaffolded but, perhaps most importantly, due to the unique affordances of eLearning, learner-content interactions can be highly interactive, providing choice and responsive content at higher levels than non-technological delivery in face-to-face contexts (see Benitez-Guerrero, 2013; Manninen, 2001). This is perhaps one of the reasons research has shown the superiority of blended learning over either all online or all face-to-face delivery (Bonk and Graham, 2006; Tayebinik and Puteh, 2012). Three examples of interactions supported by the OAR design follow: learner-expert, learner-content and learner-peer.

Learner-experts (e.g. Tutors, instructor-led discussions, feedback on assignments)

Supporting the trend of scalability, the availability of free and low-cost video experts accessible anywhere at any time is an example of providing semi-automated expert interactions with the masses. Learner-expert interactions have been typified along a continuum of one-to-many and, when combined with individual or small group exercises, extend into one-to-one support when peers act as experts. Using a peer crowd to source experts in small discussion groups is supported by many-to-many interaction designs in a MOOC. Finally, when advice from a group is channeled toward the individual, it can be characterized as a many-to-one design. An example discussion of this continuum can be found in a reflective blog about the MITx U.Lab course (Scharmer, 2015). Typically, this expanding range of learner-expert interactions has thus far been designed for human-to-human communication, but the possibility with OAR and its capability for globally crowd-sourced semi-automated feedback is to envision where and how the machine can play a role in initiating, promoting, supporting and interacting with learners within this continuum.

For example, in an adaptive curriculum the machine can automate some of the decision points of a curriculum or an instructional path, helping to support planning and preparation for learning, skills practice as seen in digital games, group experiences as in serious games, as well as reflective thinking and writing.

At a medium cost level, experts are trained and supported to provide semi-automated personalized guidance and instruction, for example, from teaching-focused scholars, sessional and adjunct faculty members who use the OAR infrastructure as one of the tools of teaching. At the highest cost level are traditional hands-on interactions in real physical laboratories, scholarly apprenticeships that evolve over long periods of time and all forms of face-to-face communications, which might make minimal use of the OAR for exercises, quizzes and tests in a blended course.

Learner-content (e.g. Class activities, Labs, Internships)

The Internet provides learners with direct access to and interaction with content in a range from read-only access to highly interactive engagement. For example, Google and Wiki MOOC-like content, media and interactions provide massive access to read-only content. Some OER content designed for user actions (e.g. widgets, simulations, interactive visualizations) inhabits the medium level of production costs with distribution costs approaching zero. At the unique end of the continuum, and with the highest cost of production and data maintenance, is highly interactive content with embedded analytics. The leading edges supporting uniqueness include learning experiences that utilize game-based (Gibson and Jakl, 2015; Ifenthaler, Eseryel and Ge, 2012) and transmedia engagement methods (Jenkins, Purushotma, Clinton, Weigel and Robison, 2006). The OAR design, with its capability for immediate feedback supported by crowd-sourced intelligence, supports the evolution of digital game-based and shared story-telling approaches to learning integrated with data analytics and allows learner-to-content interactions to become embedded with appropriate assessments as well as reusable at scale.

Learner-peer (e.g. Study and discussion groups)

There is considerable potential for self-organizing study groups to be supported by a globally distributed network of peers. OAR’s role in peer-based communication can support a social media market economy for education (e.g. an “eBay” of learning) where anyone with value to add to anyone else will be facilitated into and out of appropriate relationships as an expert, a learner and a peer when appropriate. Similar to how OER has taken some of the friction out of content development and access, OAR will be part of a system to take the friction out of educative relationships by facilitating feedback and allowing the machine to play an appropriate role supporting decentralized and distributed intelligence and communication concerning performance (formative) as well as comparative classification (summative) assessment.

Dimension 3: Assessment

Authentic assessment is fundamental to providing formative feedback and determining the extent of what someone knows and can do in terms of appropriate, meaningful, significant and worthwhile forms of human accomplishment (Newmann and Archibald, 1992). In the context of someone learning with an OER, central to OAR is a globally distributed and crowd-sourced common ground of understanding among teachers and mentors about what kinds of formative feedback are useful for developing the authentic expertise of a novice in a relevant field of knowledge and practice. The common ground does not have to be created a priori for every OER by pre-arranged agreements; the OAR layer can be grown and developed naturally and automatically by observing and recording the actual feedback given to novices in similar digital performance circumstances, which requires the OAR to support evolving ontologies. With appropriate feedback from all users, the evolving distributed ontologies can range from a folksonomy to an expert-validated ontology for the OER (Angeletou, 2008; Gruber, 2007; Sturtz, 2004; Xie et al., 2014).

The saved feedback can then be mined for automated formative assessment at scale. The infrastructure can also support the uniqueness of feedback needed to enable an adaptive curriculum by allowing for both private and public information layers to overlap and interact. For example, a new piece of private feedback could be compared to existing public feedback and then a decision could be made to edit the feedback, utilize the public resource, or continue with the new feedback as a new source for future machine learning training in either or both the private and public spheres.
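A minimal sketch of this private/public layering decision, assuming a simple similarity measure and a public feedback bank (both invented for illustration), might look like the following.

// Sketch of the private/public feedback-layer decision: compare a new piece of feedback
// with the public bank and either reuse a public entry or keep the new feedback as
// training material. The word-overlap similarity is a naive stand-in for a real model.
function similarity(a, b) {
  var wordsA = a.toLowerCase().split(/\s+/);
  var wordsB = b.toLowerCase().split(/\s+/);
  var shared = wordsA.filter(function (w) { return wordsB.indexOf(w) !== -1; });
  return shared.length / Math.max(wordsA.length, wordsB.length);
}

function routeFeedback(newFeedback, publicBank, threshold) {
  var best = null, bestScore = 0;
  publicBank.forEach(function (entry) {
    var s = similarity(newFeedback, entry);
    if (s > bestScore) { bestScore = s; best = entry; }
  });
  if (bestScore >= threshold) {
    return { action: 'reuse-public', feedback: best };          // utilize the public resource
  }
  return { action: 'add-to-training', feedback: newFeedback };  // new source for machine learning
}

var publicBank = ['Check your units before adding the quantities.'];
console.log(routeFeedback('Check the units before you add quantities.', publicBank, 0.5));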

Assessment also includes classification of a learner’s performance, also known as summative assessment (Bennett, 2010; Webb, Gibson and Forkosh-Baruch, 2013; Wiliam and Black, 1996), which has been traditionally associated with grades, course exams and challenge exams for awarding recognition and credit. The OAR can serve as a foundational layer for fee-for-services from higher education institutions that wish to support scalable adaptive assessment (Almond and Mislevy, 1999) through a publicly available API and appropriate Creative Commons licensing (Hietanen, 2008).
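For example, a request to such a public API for the next assessment item, with the endpoint and parameters invented purely for illustration, might look like this.

// Hypothetical call to a public OAR API requesting the next assessment item for an OER,
// informed by anonymous crowd-level performance data. Endpoint and fields are invented.
fetch('https://oar.example.org/api/v1/items/next', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    oerUrl: 'https://example.org/oer/intro-to-accounting',
    outcome: 'problem-solving',
    previousItemIds: ['oar-987']
  })
})
  .then(function (response) { return response.json(); })
  .then(function (item) { console.log('Next item:', item); })
  .catch(function (err) { console.error('OAR request failed:', err); });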

Dimension 4: Credentialing

One of the important products of a higher education program is the degree or credential supported by a transcript of grades or performance quality in the program’s courses. Recently, micro-credentialing and unbundling practices have also begun to appear due to evolving practices involving digital badges (Gibson, Ostashewski, Flintoff, Grant, and Knight, 2013; Grant, 2014). Credentialing is also involved in articulation agreements, which support credit-transfer processes among institutions, as well as in recognition of prior learning (RPL). We envision that OAR will support semi-automated RPL for diagnostics, study plans and microcredentials or badges because the infrastructure for credentialing maps closely to what is needed for summative assessment, where a current state of classification is the outcome sought from interactions with a learner. The infrastructure will also support semi-automated challenge exams for micro-certifications and assessment-based credentials, traditional study plans and graduation examinations.

Dimension 5: Support

Learning support services in higher education include, among other things, career guidance and counselling, library services and academic study skills support. Freely available shared service models utilizing OAR might include APIs for licensed service groups and globally shared student services. Utilizing the strategy of interacting private and public layers, uniqueness will be supported for personalized and semi-automated personalized services.

Dimension 6: Technology

The OAR design provides infrastructure and support for blended and technology-enabled learning, including online course delivery, through low-cost distributed and open resources integrated with private cloud-based services that support unique, value-added technology developed and delivered by higher education institutions.

OAR in Global Higher Education

The proposed OAR structure will require global collaboration and investment over time by a number of primary actors in educational technology. In addition, a number of research topics need to be investigated and can be supported by the data of the emerging system. Once data begins to flow, highly detailed event-level records of student performance will be available for data mining and a number of questions become immediately feasible to address and elaborate, including:

  • Assessment construct validity.
  • Predictive analytics for construct level feedback based on earlier test items.
  • Intervention strategies triggered during a formative assessment.
  • Algorithms of data discovery and evidence rule generation.
  • Human-computer interactions in an assessment ecosystem.
  • Ethics and effective processes of saving and sharing learner profile histories.
  • Exploration and validation of virtual performance assessment psychometric challenges.
  • Modification and adaptation of assessment modules.
  • Effects of teaching to authentic tests.
  • Equity of treatment for subgroups.

These questions are now addressable primarily with small, single-study research designs by a handful of researchers who have built systems with sufficient teams of experts to enable inquiry into the wide range of related topics. As OAR becomes a reality, these questions can begin to be addressed by a global community of like-minded educational researchers, and access can be given freely to all higher education institutions, forming a new floor for student performance that raises standards of practice, doing for assessment what OER and MOOCs have begun to do for content and learning interactions.

References

Al-Diban, S. and Ifenthaler, D. (2011), Comparison of Two Analysis Approaches for Measuring Externalized Mental Models, Educational Technology & Society, 14(2), pp. 16–30.

Almond, R. and Mislevy, R. (1999), Graphical Models and Computerized Adaptive Testing, Applied Psychological Measurement, 23(3), pp. 223–237, http://dx.doi.org/10.1177/01466219922031347

Anderson, L., Krathwohl, D. and Bloom, B. (2001), A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, New York: Longman.

Anderson, T. and McGreal, R. (2012), Disruptive Pedagogies and Technologies in Universities: Unbundling of Educational Services, Educational Technology & Society, 15(4), pp. 380–389.

Angeletou, S. (2008), Semantic Enrichment of Folksonomy Tagspaces, Lecture Notes in Computer Science, 5318, pp. 889–894, http://dx.doi.org/10.1007/978-3-540-88564-1_58

Atkins, D. E., Brown, J. S. and Hammond, A. L. (2007), A Review of the Open Educational Resources (OER) Movement: Achievements, Challenges, and New Opportunities, William and Flora Hewlett Foundation, http://www.oerderves.org

Benitez-Guerrero, E. I. (2013), Mining Data from Interactions with a Motivational-aware Tutoring System Using Data Visualization, Journal of Educational Data Mining, 5(1), pp. 72–103.

Bennett, R. (2010), Cognitively Based Assessment of, for, and as Learning (CBAL): A Preliminary Theory of Action for Summative and Formative Assessment, Measurement: Interdisciplinary Research and Perspective, 8(2–3), pp. 70–91, http://www.tandfonline.com/doi/abs/10.1080/15366367.2010.508686

Bonk, C. J. and Graham, C. R. (2006), The Handbook of Blended Learning, Global Perspectives, Local Designs, 1, http://www.lavoisier.fr/livre/notice.asp?ouvrage=1584996

Carson, R. C. (1969), Interaction Concepts of Personality, Chicago: Aldine.

Chamorro-Premuzic, T. and Furnham, A. (2004), A Possible Model for Understanding the Personality-intelligence Interface, British Journal of Psychology, 95, pp. 249–264, http://dx.doi.org/10.1348/000712604773952458

Conrad, D., Mackintosh, W., McGreal, R., Murphy, A. and Witthaus, G. (2013), Report on the Assessment and Accreditation of Learners Using OER. Commonwealth of Learning, http://hdl.handle.net/11599/232

Gibson, D. C. and Jakl, P. (2015), Theoretical Considerations for Game-Based e-Learning Analytics, in T. Reiners and L. Wood (Eds.), Gamification in Education and Business, pp. 403–416, New York: Springer.

Gibson, D. C., Ostashewski, N., Flintoff, K., Grant, S. and Knight, E. (2013), Digital Badges in Education, Education and Information Technologies, 20(2), pp. 403–410, http://dx.doi.org/10.1007/s10639-013-9291-7

Gibson, D. C. and Webb, M. (2013), Assessment as, for and of 21st Century Learning, International Summit on ICT in Education, Torun: EDUsummIT 2013, p. 17, http://www.edusummit.nl/fileadmin/contentelementen/kennisnet/EDUSummIT/Documenten/2013/6_WCCE_2013-_Educational_Assessment_supported_by_IT_1_.pdf

Gibson, D. C. and Webb, M. E. (2015), Data Science in Educational Assessment, Education and Information Technologies, 20(4), pp. 697–713, http://dx.doi.org/10.1007/s10639-015-9411-7

Grant, S. (2014), What Counts as Learning: Open Digital Badges for New Opportunities, Irvine, CA: Digital Media and Learning Research Hub https://www.academia.edu/8022569/What_Counts_As_Learning_Open_Digital_Badges_for_New_Opportunities

Gruber, T. (2007), Ontology of Folksonomy: A Mash-up of Apples and Oranges, International Journal on Semantic Web & Information Systems, 3(2), pp. 1–11, http://dx.doi.org/10.4018/jswis.2007010101

Hagel, J. and Singer, M. (1999), Unbundling the Corporation, Harvard Business Review, 77(March-April), pp. 133–141.

Harlen, W. and James, M. (1997), Assessment and Learning: Differences and Relationships between Formative and Summative Assessment, Assessment in Education: Principles, Policy & Practice, 4(3), pp. 365–379, http://dx.doi.org/10.1080/0969594970040304

Hietanen, H. A. (2008), Creative Commons’ Approach to Open Content, Intellectual Property, pp. 1–88, http://dx.doi.org/10.2139/ssrn.1162219

Ifenthaler, D. (2015), Learning Analytics, in J. M. Spector (Ed.), The SAGE Encyclopedia of Educational Technology, 2, pp. 447–451, Thousand Oaks: Sage.

Ifenthaler, D. (2014), AKOVIA: Automated Knowledge Visualization and Assessment, Technology, Knowledge and Learning, 19(1–2), pp. 241–248, http://dx.doi.org/10.1007/s10758-014-9224-6

Ifenthaler, D. (2010), Relational, Structural, and Semantic Analysis of Graphical Representations and Concept Maps, Educational Technology Research and Development, 58(1), pp. 81–97, http://dx.doi.org/10.1007/s11423-008-9087-4

Ifenthaler, D. and Widanapathirana, C. (2014), Development and Validation of a Learning Analytics Framework: Two Case Studies Using Support Vector Machines, Technology, Knowledge and Learning, 19(1–2), pp. 221–240, http://dx.doi.org/10.1007/s10758-014-9226-4

Ifenthaler, D., Eseryel, D. and Ge, X. (2012), Assessment for Game-Based Learning, in D. Ifenthaler, D. Eseryel and X. Ge (Eds.), Assessment in Game-Based Learning: Foundations, Innovations, and Perspectives, pp. 3–10, New York: Springer, http://dx.doi.org/10.1007/978-1-4614-3546-4

Jenkins, H., Purushotma, R., Clinton, K., Weigel, M. and Robison, A. (2006), Confronting the Challenges of Participatory Culture: Media Education for the 21st Century, New Media Literacies Project, Cambridge, MA: MIT, http://mitpress.mit.edu/sites/default/files/titles/free_download/9780262513623_Confronting_the_Challenges.pdf

Manninen, T. (2001), Rich Interaction in the Context of Networked Virtual Environments: Experiences Gained from the Multi-player Games Domain, in A. Blanford, J. Vanderdonckt and P. Gray (Eds.), Joint Proceedings of HCI 2001 and IHM 2001 Conference, pp. 383–398, London: Springer.

Newmann, F. and Archibald, D. (1992), The Nature of Authentic Academic Achievement, in H. Berlak, F. Newmann, E. Adams, D. Archbald, T. Burgess, J. Raven and T. Romberg (Eds.), Toward a New Science of Educational Testing and Assessment, Albany: SUNY Press.

Pappano, L. (2012), The Year of the MOOC, New York Times, http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-aremultiplying-at-a-rapid-pace.html?pagewanted=all&_r=0

Pirnay-Dummer, P., Ifenthaler, D. and Spector, M. (2009), Highly Integrated Model Assessment Technology and Tools, Educational Technology Research and Development, 58(1), pp. 3–18, http://dx.doi.org/10.1007/s11423-009-9119-8

Robertson, R. J. (2010), Repositories for OER. JISC CETIS, http://www.slideshare.net/RJohnRobertson/repositoriesforoer

Scharmer, O. (2015), MITx u.lab: Education As Activating Social Fields, Huffington Post, http://www.huffingtonpost.com/otto-scharmer/mitx-ulabeducation-as-ac_b_8863806.html?ir=Australia

Schimmel, B. J. (1983), A Meta-Analysis of Feedback to Learners in Computerized and Programmed Instruction, Paper presented at AERA 1983, Montreal.

Sheehan, N. T. and Stabell, C. B. (2013), Reputation as a Driver in Activity Level Analysis: Reputation and Competitive Advantage in Knowledge Intensive Firms, Corporate Reputation Review, 13(3), pp. 198–208, http://dx.doi.org/10.1057/crr.2010.19

Shermis, M. D. and Hamner, B. (2012), Contrasting State-of-the-art Automated Scoring of Essays, in M. D. Shermis and J. Burstein (Eds.), Handbook of Automated Essay Evaluation: Current Applications and New Directions, Routledge Handbooks Online, http://dx.doi.org/10.4324/9780203122761.ch19

Sturtz, D. N. (2004), Communal Categorization: The Folksonomy, INFO622 Content Representation, 16(29.03.2007), http://davidsturtz.com/drexel/622/communalcategorization-the-folksonomy.html

Tayebinik, M. and Puteh, M. (2012), Blended Learning or E-learning?, IMACST, 3(1), pp. 103–110.

van Vught, F. (2008), Mission Diversity and Reputation in Higher Education, Higher Education Policy, 21(2), pp. 151–174, http://dx.doi.org/10.1057/hep.2008.5

Wagner, W. and Wagner, S. U. (1985), Presenting Questions, Processing Responses, and Providing Feedback in CAI, Journal of Instructional Development, 8(4), pp. 2–8, http://dx.doi.org/10.1007/bf02906047

Webb, M. (2011), Feedback Enabled by New Technologies as a Key Component of Pedagogy, in M. Koehler and P. Mishra (Eds.), Proceedings of the Society for Information Technology & Teacher Education International Conference 2011, pp. 3382–3389, Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).

Webb, M. and Gibson, D. C. (2015), Technology Enhanced Assessment in Complex Collaborative Settings, Education and Information Technologies, 20(4), pp. 675–695, http://dx.doi.org/10.1007/s10639-015-9413-5

Webb, M. and Gibson, D. C. (2011), Assessment to Move Education into the Digital Age: Brief Report from Thematic Working Group TWG 5 on Assessment, EDUsummIT 2011: Building a Global Community of Policy-Makers, Educators and Researchers to Move Education into the Digital Age, Paris: UNESCO.

Webb, M., Gibson, D. C. and Forkosh-Baruch, A. (2013), Challenges for Information Technology Supporting Educational Assessment, Journal of Computer Assisted Learning, 29(5), pp. 451–462, http://dx.doi.org/10.1111/jcal.12033

Wiliam, D. and Black, P. (1996), Meanings and Consequences: A Basis for Distinguishing Formative and Summative Functions of Assessment?, British Educational Research Journal, 22(5), pp. 537–548, http://dx.doi.org/10.1080/0141192960220502

Wilson, T., Schuwer, R. and McAndrew, P. (2010), Collating Global Evidence of the Design, Use, Reuse and Redesign of Open Educational Content, Paper presented at the OER10 Conference, 22–24 March 2010, Cambridge, UK.

Xie, H., Li, Q., Mao, X., Li, X., Cai, Y. and Rao, Y. (2014), Community-aware User Profile Enrichment in Folksonomy, Neural Networks, 58, pp. 111–121, http://dx.doi.org/10.1016/j.neunet.2014.05.009

Notes

1 http://www.hewlett.org/programs/education-program/open-educational-resources


Author(s)

David Gibson is Director of Learning Futures at Curtin University in Perth, Australia and Chair of the education arm of the Curtin Institute for Computation. Gibson’s research focuses on games and simulations in education, learning analytics, complex systems analysis and the use of technology to personalize learning via cognitive modeling, design and implementation, and he has over ninety publications on these topics. He is the creator of simSchool (http://www.simschool.org), a classroom flight simulator for preparing educators, and eFolio (http://www.my-efolio.com), an online performance-based assessment system, and he provides vision and sponsorship for Curtin University’s Challenge, a mobile, game-based learning platform (https://challenge.curtin.edu.au).

Dirk Ifenthaler’s research focuses on the intersection of cognitive psychology, educational technology, learning science, data analytics, and computer science. He developed automated and computer-based methodologies for the assessment, analysis, and feedback of graphical and natural language representations, as well as simulation and game environments for teacher education. His research outcomes include numerous co-authored books, book series, book chapters, journal articles, and international conference papers, as well as successful grant funding in Australia, Germany, and the US (see Dirk’s website for a full list of scholarly outcomes at www.ifenthaler.info). Dirk is the Editor-in-Chief of the Springer journal Technology, Knowledge and Learning.

Davor Orlic co-founded videolectures.net with 20,000 educational videos, created the Opening up Slovenia national education case study, established the UNESCO Chair on Open Technologies for OER and Open Learning and conceptualised the Internet of Education paradigm. He is now managing the Knowledge 4 All Foundation with sixty global members in machine learning. He is active in artificial intelligence research, open education, policies and business innovation in education and has international professional experience in project management — connecting research, technology and business — in the Ed Tech landscape. Davor will be curator of the second UNESCO OER World Congress in 2017.

CC-BY-4.0

The text only may be used under licence CC BY 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
