
Open Education
Edited by Patrick Blessinger and TJ Bliss
15. Credentials for Open Learning: Scalability and Validity

Mika Hoffman and Ruth Olmsted

Abstract

In this contribution we advocate separating credentialing from the learning process as a path to greater scalability and better measurement of what independent learners learn from OER. We address the challenge of aligning OER offerings with standardized exams as a way for independent learners to access academic credit, and explore ways to achieve consensus among educational institutions about what academic credit means and which types of evidence to accept for learning that occurred outside a particular institution. We begin the chapter with an overview of credit by examination, contrasting the standardized testing approach with the classroom teaching approach to academic credit. We briefly describe our process for creating exams and the accompanying materials that make clear to potential test-takers what the learning objectives are. We then define a method for building the bridge between OER and the exam. Finally, we discuss the policy issues of accepting exams for credit and envision a future in which learners can receive transferable credentials in a cost-effective, efficient and valid manner.

Full Text

OER and Academic Credit

The growth of Open Educational Resources (OER) has sparked an interesting and productive discussion about how OER might be used to expand learners’ options for earning academic credit without traditional instruction (see, for example, Conrad and McGreal, 2012; Camilleri and Tannhäuser, 2012). The discussions tend to begin with OER and examine how best to grant credit for learning based on that OER. This chapter examines the issue from the other direction: for learners planning to sit for an existing examination for credit, how can those learners best find OER that covers the material they need to master the subject of the examination? As a corollary, how can higher education institutions (HEIs) encourage the validation of independent learning through scalable examinations to take advantage of the scalability of OER? What are known in the US as standardized exams (that is, exams produced for use across multiple institutions) have long served as vehicles for academic credit in the US. They are scalable, flexibly scheduled and cost-effective — but they exist outside of any context of formal classroom instruction and are not tied to a specific HEI, so learners are left to choose their methods of attaining knowledge independently, and may sometimes fail to recognize that their studies have been incomplete. In addition, debate continues in many US HEIs, and in other organizations that look for a university credential, over how and whether to accept particular types of evidence of learning that occurred outside a particular institution. The authors come from the perspective of a US institution that has been at the forefront of prior learning assessment and adult degree completion for more than 40 years. We address three main issues: the concept of what academic credit means, the mechanisms by which OER-based independent learning can fit into a system of large-scale examinations and the need for a common understanding and standard guidelines for accepting and awarding credit by examination in recognition of independent learning.

The Meaning of Academic Credit

Credit by examination as practiced in the US has grown in a different direction from the assessment practices of the UK and many European countries, where sitting for a comprehensive exam represents a milestone in one’s degree program. Academic credit in the US has bifurcated into two primary approaches: one focused on testing, detached from specific HEIs, and one focused on teaching, which is predominant on traditional campuses.

The testing approach seeks to make rigorous examinations more scalable and reliable than individually-rated program-specific exams can be. Robust standardized exams are built to measure the desired outcomes (usually in chunks corresponding to what would normally be expected in a one-semester course), regardless of how the student learned the material. All candidates for a similar qualification sit for the same examination, so that their learning of, for example, a term’s worth of calculus can be compared on some objective basis. Although many in the US decry the current (over)use of standardized tests in primary and secondary education, standardized subject tests are rooted in American traditions of accessibility, equality and mass production, and evolved in the mid-nineteenth century as a way to promote equality and fairness in compulsory education (US Congress, Office of Technology Assessment, 1992). US examples of the use of standardized exams in higher education date to the middle of the twentieth century, and include the College Level Examination Program (CLEP), the UExcel and Excelsior College Examinations programs, and the DANTES Subject Standardized Tests (DSST). All these exams are designed to be used for academic credit in lieu of participation in a university course, have undergone review by national agencies similar to, but separate from, the regional accrediting bodies that certify colleges and universities, and are widely used for that purpose in the US. Note that these are not the same as exams designed by one institution’s faculty for use in determining course placement at that institution; the standardized exams are designed by testing specialists and psychometricians along with subject-matter experts for use at any institution. Hundreds of thousands of students in the US earn at least some of the credit they need for a degree using such exams every year, saving money on tuition fees and earning credit on their own schedule (Council for Adult and Experiential Learning, 2010).

The teaching approach relies on ensuring that the academic content is well taught, with measurement of what is learned relying on multiple measures in the context of a course, sometimes but not always culminating in a comprehensive high-stakes final exam. The emphasis in this approach is on instruction; measures of quality in academia rely heavily on how engaging the learning process is and how well-aligned the learning materials are with outcomes, with less attention paid to whether the individual assessments provide good measurement of the outcomes. This approach certainly involves testing, but testing is typically treated as secondary in importance to quality of instruction (see, for example, MarylandOnline, Inc., 2014). In the US, the backbone of the system is the Carnegie credit hour, which defines one transcripted credit-hour as representing a course that met once a week for a 15-week semester and required two hours per week of outside homework or lab work. Also contributing to this trend is the requirement from many US accreditors and government bodies that there be a certain amount of instructor-student interaction.
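To make the arithmetic behind the Carnegie convention concrete, the short sketch below computes the nominal time commitment implied by a credit value. It is purely illustrative (Python); the constants encode one common reading of the definition above, one weekly class hour plus two weekly hours of outside work per credit over a 15-week semester, not an official formula.

# Illustrative only: nominal time implied by the Carnegie credit-hour convention,
# assuming one in-class hour and two hours of outside work per credit per week
# over a 15-week semester. These constants are a common reading, not an official rule.

WEEKS_PER_SEMESTER = 15
IN_CLASS_HOURS_PER_CREDIT_PER_WEEK = 1
OUTSIDE_HOURS_PER_CREDIT_PER_WEEK = 2

def nominal_hours(credits: int) -> dict:
    """Return the seat time and outside work a credit value nominally represents."""
    in_class = credits * IN_CLASS_HOURS_PER_CREDIT_PER_WEEK * WEEKS_PER_SEMESTER
    outside = credits * OUTSIDE_HOURS_PER_CREDIT_PER_WEEK * WEEKS_PER_SEMESTER
    return {"in_class": in_class, "outside": outside, "total": in_class + outside}

# A typical 3-credit course: 45 in-class hours plus 90 outside hours, 135 hours in all.
print(nominal_hours(3))

The calculation counts hours only; it says nothing about which outcomes were actually achieved, which is precisely the limitation discussed next.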

Consensus is growing, however, that defining the amount of learning in terms of time spent in class, as the Carnegie hour does, is inaccurate at best, because it does not directly take into account which outcomes are actually assessed. There is a growing desire among accreditors in American higher education to treat learning outcomes as a more accurate measure than seat time (Laitinen, 2012). This has resulted in the rise of competency-based degree programs that typically assess specific competencies rather than aggregate grades from various assignments (Klein-Collins, 2012). Some programs still maintain a link to the Carnegie hour and many also still emphasize the quality of the learning experience, while others focus on the assessment, usually designing assessments specifically for the program (McClarty and Gaertner, 2015).

Independent and Open Learning as Preparation for Assessment

We turn now from the concept of credit to the learning students do to earn credit. The OER movement has sought to change what it means for information to be freely accessible, at least for those with internet access. Instead of educators or institutions making limited numbers of copies of material for specific groups of students, information can now be put on the web for anyone to find and access. The universe of independent learning opportunities and content curators has expanded rapidly. This universe includes both truly open resources, which anyone may share and modify, and resources that are free for anyone to access, but that are not open for modification and sharing, such as many Massive Open Online Courses (MOOCs). With this caveat, we will include MOOCs in our discussion of independent learning, as our emphasis is on free access for learners.

Even with more resources for study, however, there are two major challenges for independent learners: they still need exceptional self-direction and motivation, and earning academic credit for what they have learned is still not an easy process. Most learners would acknowledge that interacting with static material and having to interpret it alone is more difficult than learning with someone who can answer questions or guide learners to appropriate additional resources. But even for those who succeed in learning independently, the opportunity to demonstrate that learning and be evaluated for academic credit remains limited. One limiting factor is the availability of credit-worthy assessments for whatever the learner learned; there are far more academic subjects than there are high-quality and scalable assessments, and even if an assessment is available in the general subject area, it may be hard to know whether the resources the learner used actually match the exam content. The other limiting factor is the prevalence of the teaching approach to credit: in this model, assessment and instruction are so closely tied that many institutions find it hard to imagine unbundling them. Processes for awarding transfer credit, accepting standardized examinations for credit and using other methods of awarding credit for prior learning are all permeated by the idea that the credit-granting institution “owns” the definition of credit at that institution and by the accompanying assumption that no one else can assess learning just the way that institution would (see Ferrari and Traina, 2013; Conrad and McGreal, 2012; Camilleri and Tannhäuser, 2012; European Commission, 2015; FitzGibbon, 2014). We will discuss these two barriers in turn, beginning with the problem of linking OER to assessment.

Linking OER to Assessment

As mentioned above, the assessment challenge is that assessments tend to be either individualized, and thus unlikely to be reliable, or large-scale, and thus not tailored for the particular individual’s learning. We leave aside the topic of individualized assessments here; our focus is on assessments that can be scaled to the broad needs of many independent learners. The particular challenge of batteries of exams such as those in the College Board’s College Level Examination Program, Prometric’s DANTES Subject Standardized Tests and Excelsior College’s UExcel exams is that although the purposes for each examination are clear and the learning outcomes to be measured are defined, it may not be clear to a learner, during the preparation phase, whether a given learning resource is going to be sufficient. Coming at the question from the other side, it may not be clear to exam developers exactly what a given learning resource contains without going through the entire resource. Given the number and depth of resources available, this is impractical, particularly for resources that are entire courses, such as MOOCs. If HEIs are to recommend the pairing of OER with credit by exam, they need some efficient way to help learners identify appropriate resources. For example, learners may come with different areas of strength. Those with practical experience may need to learn or review the theoretical basis of the subject in order to do well on a college-level exam, whereas others may need a complete open online course, not just a refresher in specific topics.

The ideal approach to selecting appropriate open learning for credit-by-exam preparation, then, is to consider both the learner and the content. As a practical matter, no educator can prepare a complete list of the various configurations of material that would be ideal for each learner. Rather, we can be transparent about the learning objectives of the material and provide information about how the material is accessed and presented, to help learners make informed decisions.

We speak from the perspective of Excelsior College, an American institution with a global student body that has been a pioneer in prior learning assessment (PLA, also known as recognition of prior learning, or RPL), especially credit by examination. Excelsior College helps learners and HEIs understand its exams by publishing the knowledge and skills that are assessed and a detailed content outline that serves the same purpose as a syllabus in a traditional course. Over the years, while speaking encouragingly of “being your own teacher”, we have experimented with learner support offerings such as workbooks and learning packages, and we have fought a constant battle with “test prep” providers who typically stay at the lowest cognitive level with materials such as flashcards and drills rather than providing materials that truly enable learners to meet the learning objectives. The advent of Open Educational Resources has provided a welcome opportunity to recommend free resources to our students, who tend to be lower income and working full time, and who do not always feel they can afford textbooks.

Excelsior developed a standard system for reviewing the match between OER and the learning outcomes of its exams: we ask a subject-matter expert (SME) to go through each content area in our test specifications document and to comment on whether the OER covers the entire content of the exam or a defined portion of it.

Our SMEs, usually two or three of them, examine the match from several different perspectives: the degree to which specific learning outcomes match, the degree to which individual elements mentioned in the detailed content outline of the exam are covered in the open course, and — much more difficult to assess — the weighting and cognitive level match between the expectations of the exam blueprint and the learning provided. We provide a rubric for rating the match on each aspect as excellent, acceptable or deficient. We provide space for comments, and acknowledge the possibility that some element of content will be in the course but not in the exam content outline, as well as the other way around. An example of a partial rubric is represented in Figure 1.

Figure 1. Sample Segment of a Completed Match Review Rubric
Source: Mika Hoffman and Ruth Olmsted; Excelsior College
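For readers who want a more concrete picture of the review, the following sketch shows one way a completed match review could be represented in code. It is a hypothetical Python illustration: the field names, rating roll-up and gap rule are our own assumptions for this chapter, not Excelsior’s actual tooling, which is a document-based rubric completed by SMEs.

# Illustrative sketch of a match-review record, loosely mirroring the rubric
# described above. Field names and the roll-up rule are hypothetical; the
# actual review is performed by subject-matter experts using a document rubric.
from dataclasses import dataclass, field
from enum import Enum

class Rating(Enum):
    EXCELLENT = "excellent"
    ACCEPTABLE = "acceptable"
    DEFICIENT = "deficient"

@dataclass
class ContentAreaReview:
    content_area: str            # entry from the exam's detailed content outline
    outcome_match: Rating        # do the stated learning outcomes line up?
    coverage_match: Rating       # are the outline's elements actually covered?
    level_and_weight: Rating     # cognitive level and weighting vs. the exam blueprint
    comments: str = ""           # e.g. content present in the OER but not on the exam

@dataclass
class MatchReview:
    exam: str
    resource: str                # the OER or open course being reviewed
    areas: list[ContentAreaReview] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Content areas a learner would need to supplement elsewhere."""
        return [a.content_area for a in self.areas
                if Rating.DEFICIENT in (a.outcome_match, a.coverage_match, a.level_and_weight)]

A learner or advisor could then read gaps() as the list of topics to shore up with additional modules before sitting for the exam.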

This granular approach, identifying matches at the outcome and content-area level, provides opportunities for learners to tailor the level of “instruction” to their individual needs: they can use both full courses and a variety of modules to craft a program that will build or bolster the competencies they will demonstrate on the exam. With the variety of OER available, however, a list considering content alone will not capture the full extent of the possibilities that OER offers. So we take several dimensions other than content into account as we evaluate OER’s suitability as support for learning for particular exams: Is the material presented at a college level? How user-friendly is the OER? Are there specific access concerns?

We have found that the project has led to interesting collaboration opportunities: OER developers can revise their work to fill a gap we found upon initial review, or proactively consult our content outlines and learning outcomes as they develop their resources. We also solicit feedback from exam takers about the quality of the match, so that learners, resource developers and exam developers all put the learning at the center and participate in the bold endeavor of assisting post-traditional learners in their quest to better themselves through higher education qualifications.

Independent Learning within the Content-Centered Approach

Our approach separates the assessment of learning from the learning process and considers both to be essential, but different, facets of a process in which learners achieve learning outcomes. This content-centered — rather than teacher-centered or learner-centered — approach is different from most modern, organized education and is the foundation of the testing approach to credit. It arises from the idea that learning materials and exams both start with the question of what we think learners need to learn — the learning outcomes. Exam developers then proceed to spend a great deal of time and effort on assessing those outcomes, not worrying about how learners acquire the knowledge, while resource developers tend to focus attention on enabling learners to achieve the outcomes and only touch lightly on assessment. In our view, this is as it should be: experts in inculcating knowledge and experts in assessment focus on their areas of expertise, with the common thread of the learning outcomes linking their efforts to provide a positive outcome for the learners. Although this sort of disaggregation is not widespread, it is potentially very useful as a means of promoting the use of OER. The Open Educational Resources Universitas (OERu), an international higher-education consortium, spells out the elements of education in the context of how partner institutions can maximize efficiency (Conrad, Mackintosh, McGreal, Murphy and Witthaus, 2013, p. 13):

Figure 2. How to maximize efficiency. Image from Conrad, Mackintosh, McGreal, Murphy and Witthaus, 2013, CC BY-SA

A major benefit to learners is that they can choose learning materials that are appropriate for them, knowing that those materials will help them toward a credential. This is a matter of input and output. The existence of a robust measurement of the output — what is actually learned — can make the selection of inputs more personalized, provided appropriate boundaries are in place between the provider of prep material and the developers of the secure examinations. The disentangling of inputs and outputs allows us to accommodate different students’ needs and quirks, because what they actually learned (maybe from experience, leisure reading, passionate pursuit of a topic or even just reading a more diligent classmate’s notes and papers) is tested in a single, comprehensive measure that is carefully designed to reflect all the desired learning outcomes.

Accepting and Awarding Credit for Independent Learning

This dissociation of instruction from assessment runs counter to deeply ingrained views of what constitutes good education, which gives rise to the second of the barriers facing learners using OER to prepare for independent exams: credentialing. We have matched OER to several dozen exams, all of which bear credit at Excelsior College. But in keeping with the spirit of OER and the scalability of standardized exams, it should be possible for learners anywhere to use the exams for credit at their local institutions. Here, however, there are several challenges.

First, there is the practical issue of credit transfer generally. There is great diversity around the world in how credit is counted, how outcomes are stated, and how program requirements are built. From an international perspective, identifying equivalencies in level and amount of credit is a challenge for any sort of transfer (Commonwealth of Learning, 2010; European Commission, 2015; FitzGibbon, 2014). Even within a given country, identifying equivalencies can be difficult. At our institution, we have made the decision to standardize our exam development on the American 3-credit course (see the definition of the Carnegie hour above), so we are both constrained and challenged to provide sufficient learning output definitions to convincingly test mastery of three credits’ worth of knowledge. But many competency-based programs do not work within that framework, and universities outside North America also use different systems for which there may not be easy equivalents. Here again, working toward clear and publicized outcomes can help. Just as a course developer can stack up learning outcomes to make a 3-credit course, it is possible to stack up the same learning outcomes to define and demonstrate competencies, or to move from course-level goals, to the major level, to the entire degree level. This idea is not so different from the kind of degree map or status report students build with academic advisors to make sure all requirements are being fulfilled. This stacking or mapping is not usually shown explicitly on an institution’s transcript, but some competency-based programs are moving in this direction, and quite a few institutions’ competency-based degrees define their competencies not so differently from typical general education distribution requirements. Schools all over the United States have databases full of cross-listings of which course meets which requirement. All this indicates that the day may be coming when we are able to map exam outcomes to a set of competencies rather than just “equivalent to a three-credit course in X”.
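As a toy illustration of that kind of mapping, the sketch below groups the same set of exam learning outcomes first as a 3-credit course equivalence and then as competencies. Everything here is invented for illustration (the outcome codes, course title and competency labels); it simply shows that once outcomes are the unit of record, the same passed exam can satisfy differently shaped requirements.

# Hypothetical illustration of "stacking" the same learning outcomes into
# different credentialing units: a 3-credit course equivalence and a set of
# competency definitions. Outcome codes and names are invented examples.

EXAM_OUTCOMES = {
    "CALC-1": "Compute limits and derivatives of elementary functions",
    "CALC-2": "Apply derivatives to optimization and related-rates problems",
    "CALC-3": "Evaluate definite integrals and apply the Fundamental Theorem",
}

# Grouped one way, the outcomes describe a 3-credit course...
COURSE_EQUIVALENCE = {
    "MAT 101 Calculus I (3 credits)": ["CALC-1", "CALC-2", "CALC-3"],
}

# ...and grouped another way, they describe competencies in a competency-based program.
COMPETENCIES = {
    "Quantitative reasoning: rates of change": ["CALC-1", "CALC-2"],
    "Quantitative reasoning: accumulation": ["CALC-3"],
}

def covered(requirement_map: dict[str, list[str]], passed_outcomes: set[str]) -> list[str]:
    """Return the requirements fully demonstrated by a passed exam's outcomes."""
    return [name for name, codes in requirement_map.items()
            if set(codes) <= passed_outcomes]

passed = set(EXAM_OUTCOMES)  # the learner passes the exam, demonstrating all outcomes
print(covered(COURSE_EQUIVALENCE, passed))
print(covered(COMPETENCIES, passed))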

Even with a mechanism for credit transfer in place, however, the larger barrier arises from the teacher-centered model: most institutions assume that assessment is the responsibility of the institution granting the credential, indicating a lack of trust that any other organization could adequately determine whether that institution’s students meet standards. This assumption gives rise to many barriers to the acceptance of credit-by-exam as evidence of learning. For example, Camilleri, Haywood and Nouira (2012) outlined a number of possible scenarios for a student wishing to use OER for credit (see Figure 3).

Figure 3. OER Scenarios. Image from Camilleri, Haywood and Nouira (2012), CC BY-SA 3.0

The scenarios with stars are the favored ones: all involve assessment by the credit-granting institution or a trusted partner. Camilleri and Tannhäuser (2012) expanded on this model, noting: “The necessary conditions for all the scenarios to be viable are that the self-study materials are placed online for general access, and that those materials are sufficient in scope and quality of content, and required associated activities, to enable a learner to acquire the competences defined in the expected learning outcomes, and that a university is able to use them to guide the assessment of those learner competences” (p. 31, our emphasis). Note that the assumption is that the university is responsible for the assessment. Even though the scenarios include one in which assessment, learning, and credit are at separate universities, this situation is noted as problematic, and recognition of prior learning (PLA/RPL) is put forth as a more useful model.

Indeed, PLA/RPL is a very common model for granting credit for learning gained through OER, as it allows credit-granting institutions to tailor the assessment to their own requirements. The problem is that this model essentially returns to the days of individualized exams, which are not scalable or reliable. Conrad and McGreal (2012) summed up the problem: “Existing RPL practices are usually deeply embedded within individual institutional policy and practice. In some cases, such practices are labor-intensive and not particularly cost-effective or scalable. The definition of RPL practices and the relationship of various types of assessments to each other are also often unique to institutions and are understood to be disparate and even a source of contention within the field” (pp. 2–3). As Camilleri and Tannhäuser (2012) and Ferrari and Traina (2013) have pointed out, the lack of scalability negates the cost savings of OER, since individualized RPL assessment may cost nearly as much as a full traditional course. Ferrari and Traina (2013) concluded: “Thus, this scenario [of independent assessment of OER-based learning] will remain marginal unless automated/systemized testing procedures are implemented, which will allow for economies of scale to be generated” (p. 30). Conrad et al. (2013) wrote positively about the potential scalability and usefulness of challenge exams (institutionally developed exams that a student can pass to validate knowledge and bypass an actual course) but noted that the practice of awarding credit this way is not widespread; again, the teacher-centered model, assuming that assessment and instruction are inseparable, leads to discomfort with the idea of granting credit based solely on any external exam, even the institution’s own.

A further consequence of the teacher-centered model is that institutions of higher learning vary widely with respect to the amount of credit from any outside sources that can be transferred in, and what kind of evidence of “prior learning” they accept. It is understandable that when colleges and universities are competing with each other for students, they want to differentiate themselves and their programs, and directly granting credit based on anyone else’s evaluation, even if the HEI granting the credit is accredited by the same body, undermines the uniqueness of those programs. Many may feel that accepting outcome-based assessments, or even requiring certain outcomes, impinges on academic freedom (FitzGibbon, 2014). On a more practical level, in the absence of national standards for the content of specific courses, institutions are legitimately concerned that a student who transfers in, for example, three credits of first-semester calculus, will not have learned the same thing that students at that institution learn in the course, and thus may not be prepared for that institution’s second-semester calculus course. It is unrealistic to expect that every institution anywhere in the world would accept any specific exam-based validation of OER learning. However, there are opportunities for institutions to do more than they are currently doing; particularly as more adult students seek to complete a degree, institutions that welcome prior learning assessment, including credit-by-exam, may attract more students and improve persistence rates (Council for Adult and Experiential Learning, 2010).

In summary, credentialing learning via independent assessment faces both the practical challenges of translating what the assessment is measuring and philosophical challenges arising from a view that instruction, assessment and credentialing should belong together. Although we believe that independent assessment disaggregated from OER-based learning is a powerful and legitimate way for learners to earn credentials, we need to examine the root of the problem a little more closely.

Credit-worthy Assessment

The desire to link instruction and assessment and the lack of trust in other people’s assessments or other institutions’ credit both arise from a widespread lack of understanding about how to evaluate the quality of assessments. It is certainly reasonable that if an institution cannot determine whether an assessment truly measures the important outcomes to the standards the institution requires, it is not going to want to use the results of that assessment. Test-taker authentication is one issue that institutions point to (it can be set up to allow valid assessments to be associated with OER, although such measures almost inevitably render the OER no longer free), but it is not the only measure of assessment quality. It is also important for the assessment to provide tasks that actually measure the outcomes at the right level. Many people assume that certain types of tasks, notably multiple-choice, cannot possibly measure college-level outcomes (Camilleri and Tannhäuser, 2012; Ferrari and Traina, 2013; Witthaus et al., 2015). This is an oversimplified view of assessment; in the hands of skilled assessment professionals, virtually any type of task, even multiple-choice, can provide useful information about learning outcomes, even at higher cognitive levels, and machine-scored task types can provide excellent reliability and fairness compared to subjectively rated types. Witthaus et al. (2015) provide an overview of assessment “robustness” that conflates task type and security: although their main point is that assessment robustness correlates with formality of recognition, which is an important point to make, it is unfortunate that scholars working to understand the relationship between assessment and credentialing are not diving deeper into understanding assessment quality. Returning to our content-centered model, assessment is a specialization of its own, and many in the field of education do not understand how to build and justify high-quality assessments. This is why high-stakes, standalone verification of learning is better done by experts in assessment rather than by experts in instruction who may not know how to build assessments to the standards needed for those stakes. Note that this does not mean that instructors are incompetent at assessment: validity in assessment is determined in the context of the use of the scores (American Educational Research Association, American Psychological Association and National Council on Measurement in Education, 2014), and what may work perfectly well for a unit test or even a final exam in the context of other input may not provide valid evidence of learning for a standalone credit-bearing exam. Professionally run testing programs publish validity arguments or evidence to support the use of their test scores.

Putting the Pieces Together

Credit for MOOCs?

Given this context-driven view of validity, consider the assessments currently existing as part of OER. In many cases, the assessments are the same as those created for the “traditional” version of the course, or modeled after similar assessments. While there may be nothing wrong with these assessments in their original contexts, their validity needs to be determined afresh in the new context. Without identity verification, for example, the assessments are not valid, for they do not link the mastery of the learning outcomes with any particular student. And even when attempts are made to insert identity verification, such as by having a proctored final exam, if this final exam has only one form, so that the content quickly becomes known, or if it does not cover all the outcomes, the results will still be inadequate as the only evidence for granting credit.

One of the big questions swirling around any discussion of open education has been “Should credit be granted for MOOCs?” This is the wrong question. “Should credit be granted for what students learn in MOOCs?” provides better focus. No one asks whether credit should be granted for library books, or for Wikipedia. People ask whether MOOCs should be treated just like traditional courses because MOOCs look so similar to traditional courses, and thus people think they could be similar in other respects as well.

In response to the credit-for-MOOCs question, two US organizations, the American Council on Education (ACE) and the National College Credit Recommendation Service (NCCRS), have evaluated selected MOOCs using the same standards by which they have reviewed corporate and military training and other courses offered by “non-collegiate” organizations. For both agencies, their historic standards for evaluating “courses” are different from their standards for evaluating “exams”. For courses, they rely on the qualifications of the instructors and the existence of systems for tracking individual participation and performance. These evaluators have treated MOOCs and course-like OER, such as the offerings of the Saylor Foundation, as courses, because they provide instruction and at some level “look like courses”. To address the online, anonymous format of such offerings, ACE and NCCRS have required the course providers to add identity verification, discussion facilitators, and challenge questions, as well as a proctored final exam, to provide assurance that the student being awarded the credit is the one who actually learned something. But since ACE and NCCRS do not hold assessments within courses to the standards required for validity in the context of massive numbers of learners, we believe such course credit is still less reflective of mastery of learning outcomes than an exam with multiple forms that is expressly designed to assess the outcomes of an entire college course.

Consider what it is about traditional courses that provides the credible assurance of learning. US regional accrediting agencies will look for three things: the course has learning outcomes; students in traditional courses are known to the instructor; and the instructor provides assessments of learning linked to those individual students and to the learning outcomes: essays, performance in labs, class participation, projects, and/or proctored quizzes and exams. MOOCs typically have learning outcomes, but lack the other two requirements. It is not currently possible for a MOOC to provide individually verified assessment performance and still remain free. Identity verification must be done on an individual basis, and establishing the validity of assessments at a large scale is typically too costly for a free course to provide. So “granting credit for MOOC learning” directly is not feasible, and in fact, trying to modify MOOCs or other OER so that credit can be associated with them directly undermines the very openness and accessibility of the learning resources. All this is a result of thinking that instruction and assessment are inseparable.

But once instruction and assessment are separated, granting credit for the learning obtained through MOOCs or OER becomes feasible. Assessment and identity verification need not be bound up in the instructional elements of a learning experience. Indeed, the argument can be made, and is made by advocates of competency-based programs, that assessment of knowledge not tied to the idiosyncrasies of any individual’s instruction is superior to assessment that may depend too much on assumptions that students were paying attention in class.

Solutions and Recommendations

Good assessments are available: established credit-by-examination programs are run by professionals who have spent a career learning the principles of assessment validity and how to apply them to build, administer, and score a test. What is needed is for institutions to understand the content and outcomes covered and to trust the evidence of validity supplied by the exam programs.

We believe that large-scale, professionally produced exams are a good fit for a considerable amount of the learning from OER and MOOCs. For relatively common subjects such as statistics, learners who access their learning at no cost can gain credentials at low cost. The activity of learning needs to be conceptually disaggregated from the activity of assessing what has been learned; the credential is granted through the linking of the learning outcomes to the assessment and the learner. It is up to any given credit-granting institution, of course, to determine whether to grant credit, so it is incumbent upon assessment professionals to provide institutions with the information they need. It is also incumbent upon credit-granting institutions to have clear, rational, coherent, and transparent policies for acceptance of standardized credit by exam. The match of independent assessments with learning that can be gained through OER will enable massive numbers of people who previously might not have had access to higher education to gain not only learning, but credentials that they can use to further their careers and better their lives.

As institutions consider transfer and articulation policies, the issue of how learning was attained continues to be given a great deal of importance, even when there is credible assessment linked to learning outcomes (Camilleri and Tannhäuser, 2012). Higher education institutions need to consider learning outcomes as a basic building block, and determine their transfer and PLA/RPL policies based on which learning outcomes can be appropriately represented by credentials from elsewhere, and which ones they feel need to be assessed internally. Further, adoption by institutions everywhere of something like Indicator 2 of Chapter B6 of the UK Quality Code, regarding transparency of assessment policies, would assist students in making informed decisions about their own educational paths. In turn, makers of exams need to be transparent about the learning outcomes and content areas addressed by the exam, and to make their validity arguments clear and transparent as well. And institutions need to understand the basics of what makes good validity evidence: an excellent resource is McClarty and Gaertner (2015), which, although focused on competency-based education, provides a good explanation of assessment concepts as they apply to disaggregated assessment. Together, these elements will provide institutions with the tools they need to readily evaluate external assessment evidence that students may bring for credit.

References

American Educational Research Association, American Psychological Association and National Council on Measurement in Education (2014), Standards for Educational and Psychological Testing, Washington: American Psychological Association.

Camilleri, A., Haywood, J. and Nouira, C. (2012), Giving Credit for OER-based Learning, Paper presented at UNESCO World Open Educational Resources Congress, Paris, http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/themes/nouira.pdf

Camilleri, A. and Tannhäuser, C. (Eds.) (2012), Open Learning Recognition: Taking Open Educational Resources a Step Further, European Foundation for Quality in e-Learning, http://efquel.org/wp-content/uploads/2012/12/Open-Learning-Recognition.pdf

Commonwealth of Learning (2010), Transnational Qualifications Framework for the Virtual University for Small States of the Commonwealth, Vancouver: Commonwealth of Learning, http://www.colfinder.org/vussc/VUSSC_TQF_document_procedures_and_guidelines_Final_April2010.pdf

Conrad, D., Mackintosh, W., McGreal, R., Murphy, A. and Witthaus, G. (2013), Report on the Assessment and Accreditation of Learners Using OER, Vancouver: Commonwealth of Learning, http://oasis.col.org/bitstream/handle/11599/232/Assess-Accred-OER_2013.pdf?sequence=1&isAllowed=y

Conrad, D. and McGreal, R. (2012), Flexible Paths to Assessment for OER Learners, Journal of Interactive Media in Education, 2, Art. 12, http://jime.open.ac.uk/articles/10.5334/2012-12

Council for Adult and Experiential Learning (2010), Fueling the Race to Postsecondary Success: A 48-institution Study of Prior Learning Assessment and Adult Student Outcomes, http://www.cael.org/pdfs/PLA_Fueling-the-Race

European Commission (2015), ECTS Users’ Guide 2015, http://ec.europa.eu/education/library/publications/2015/ects-users-guide_en.pdf

Ferrari, L. and Traina, I. (2013), The OERTest Project: Creating Political Conditions for Effective Exchange of OER in Higher Education, Journal of e-Learning and Knowledge Society, 9(1), pp. 23–35, https://www.learntechlib.org/j/JELKS/v/9/n/1

FitzGibbon, J. (2014), Learning Outcomes and Credit Transfer: Examples, Issues, and Possibilities, Vancouver: British Columbia Council on Admissions and Transfer, http://www.bccat.ca/pubs/Learning_Outcomes_and_Credit_Transfer_Feb2014.pdf

Klein-Collins, R. (2012), Competency-Based Degree Programs in the U.S., Chicago: Council for Adult and Experiential Learning, http://www.cbenetwork.org/sites/457/uploaded/files/2012_CompetencyBasedPrograms.pdf

Laitinen, A. (2012), Cracking the Credit Hour, New America Foundation and Education Sector, http://www.cbenetwork.org/sites/457/uploaded/files/Cracking_the_Credit_Hour_Sept5_0.pdf

MarylandOnline, Inc. (2014), Quality Matters Higher Education Rubric, 5th edn., https://www.qualitymatters.org/rubric

McClarty, K. N. and Gaertner, M. N. (2015), Measuring Mastery: Best Practices for Assessment in Competency-based Education, American Enterprise Institute, Center on Higher Education Reform, https://www.aei.org/wp-content/uploads/2015/04/Measuring-Mastery.pdf

US Congress, Office of Technology Assessment (1992), Testing in American Schools: Asking the Right Questions, Washington: US Government Printing Office, http://govinfo.library.unt.edu/ota/Ota_1/DATA/1992/9236.PDF

Witthaus, G., Childs, M., Nkuyubwatsi, B., Conole, G., Inamorato dos Santos, A. and Punie, Y. (2015), An Assessment-recognition Matrix for Analysing Institutional Practices in the Recognition of Open Learning, Open Education Europa, eLearning Papers, 40, http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_1


Authors

Mika Hoffman is Executive Director of Test Development Services at the Center for Educational Measurement for Excelsior College in Albany, New York. She has over twenty years of professional experience in test design, quality control, integration of psychometric analyses, and assessment development and production processes for higher education and government. Prior to coming to Excelsior, she managed the high-stakes Defense Language Proficiency Test program at the Defense Language Institute Foreign Language Center. She began her career at Educational Testing Service, working on the Graduate Record Examination (GRE) as it transitioned to a computer-adaptive format.

Ruth Olmsted is Faculty Program Director in the School of Liberal Arts at Excelsior College, Albany, New York, where she has specific oversight of the BA/BS in Liberal Arts degree programs. These degree programs are the College’s most flexible offerings, affording students many opportunities to use credit-by-exam and other forms of prior learning assessment, as well as transfer credit, to meet distribution, depth, and level requirements. Previously, Ruth spent twenty years in what is now the Center for Educational Measurement, overseeing the editorial and test development functions and both electronic and paper-based portfolio assessment. She also has many years of teaching experience, both face-to-face and online.

CC-BY-4.0

The text alone may be used under the CC BY 4.0 license. Other elements (illustrations, imported supplementary files) are “All rights reserved” unless otherwise stated.
