
Higher Education and the American Dream

Marvin Lazerson

Part III. The Teaching and Learning Conundrum

Chapter 5. Academic disciplines, research imperatives, and undergraduate learning


I would really like to teach one of the new interdisciplinary courses in the general education program, but my colleagues in my department would accuse me of betraying my academic discipline.
(Summary of numerous conversations with faculty colleagues)

For America’s professors, the great triumph of the post-World War II era lay in the dominance of the academic disciplines. The disciplines gave faculty intellectual authority as they searched for new knowledge, trained graduate students, and shaped the undergraduate curriculum. Organizationally, the disciplines were centered in academic departments, which overwhelmingly controlled their own hiring, promotion, and the awarding of tenure, and which became the most influential entities in the governance of individual colleges and universities. As if all of this were insufficient, the academic disciplines lay at the heart of the research enterprise.

5.1 Purposes and tensions

After World War II, debates about the purposes of higher education came to the fore, with three themes receiving primary attention. The first grew out of the immediate success of theoretical and applied research, as scientists who had been active during the war made an effective case for continuing federal investments in research on university campuses. Ultimately they were successful in creating the National Science Foundation and in securing substantially increased foundation support for graduate education and advanced research, preparing the next generation of scholars to expand the boundaries of knowledge. Indeed, it is safe to say that university-based research took on an importance in almost every sphere of American life—what Roger Geiger calls “research and relevant knowledge”—that had been barely imaginable before 1940 (Geiger, 1993, 1986; Graham, 2005).

  • 1 Derek Bok (2003) makes a version of this argument with regard to intercollegiate athletics and exp (...)

The amount of money that became available for research was mind-boggling, and with the dollars came a dramatic shift in the distribution of power as a corrupt bargain—my label—emerged. Individual professors able to gain substantial amounts of research funds achieved independent status within the university. The American system of distributing research money was usually based upon a peer review process that evaluated the worth of the research proposal. While the money was channeled through a particular university, in practice it was being awarded to the primary researchers, with the university serving as little more than a distributing agency, essentially delivering the checks to the researchers. In return, the university received two things it desperately wanted—money and prestige—each of which carried considerable importance. Money paid professors’ and staff salaries, allowed for graduate student fellowships, and bought equipment, but it also often came in the form of “overhead”—central administrative support, university libraries, heating and electrical costs—often amounting to an additional 50% of the original grant. Money bought prestige, and prestige in turn made it easier to attract still more money. Because of these gains, universities treated funded research professors as treasures who could, if they so chose, sell themselves to competitors. The power of money and prestige was simply too much for university officials, who usually chose not to look too closely at such annoyances as how the money was actually being spent, the actual conduct of the research, ethical questions involved in the research, the treatment of graduate students, the quality of teaching, or even whether the research professor was regularly on the university campus. Research funding created a free-agency world, like the free agency of professional athletes, in which individual professors had enormous negotiating power—over salaries, working conditions, extra travel and summer funds—creating a two-tier faculty system, akin to George Orwell’s Animal Farm, in which all professors are equal, but some are more equal than others.1

The second theme that emerged greatly expanded higher education’s emphasis on vocational purposes, as more and more occupations sought to become professions through formal schooling, as existing professions extended the length of time necessary to enter them, and as new professions appeared. The increasing tendency to define the importance of higher education in terms of professional preparation had a remarkable effect on access: the route to higher income and status narrowed to a single-minded emphasis on going to college and beyond—the road to the American dream. No wonder that young people scrambled to get in; no wonder communities put pressure on political representatives to build branches of the state university in their area, to convert state teachers colleges into full-fledged universities with graduate programs, or to establish community colleges. No wonder that as the civil rights and women’s movements took shape, they began to focus on access to higher education, laying the basis for the politically controversial affirmative action programs of the 1970s and 1980s. And, with entry into the growing number of professions becoming the dominant concern of students, came a revolution in the curriculum, with more and more courses and majors being established that claimed to provide practical returns.

The third theme in the debate about the purposes of higher education centered on general education. The movement for general education stressed a common core of learning, a turn against the tide of curricular fragmentation and disciplinary specialization that had already become apparent. General education drew upon 19th century traditions of the liberal arts and the belief that knowledge had moral and humane ends. World War II reinforced these views as general education took on the mantle of teaching common social values. General education’s aim after the war was nothing less than preparing intelligent citizens capable of making wise and moral judgments in a world that had become increasingly dangerous. If America’s colleges and universities, the argument went, could not recognize the importance of commitment to and responsibility for maintaining a democratic society, what value did they really have (Sloan, 1980)?

5.2 The neglect of learning

The purposes of higher education that emerged after World War II—research, vocationalism, and civic education—were neither new nor easily compatible. Seeking to achieve them created innumerable tensions. Yet each of the purposes expanded, although in different ways. General education found a home at probably half of America’s colleges in the 1950s (Sloan, 1980), and continued to be a feature of higher educational institutions around the country, regularly debated and resurfacing in various forms. Research and vocationalism became the essential features of American higher education. As different as the two goals seemed, they had something in common, for each pointed higher education toward a greatly expanded curriculum to accommodate the desires of the faculty and the goals of the students. Debates about purposes inevitably drove faculty into questions of curriculum. What is the curricular content of an education for citizenship? What should be the relationship between the liberal arts and vocational or professional preparation? How should research and graduate education connect to the undergraduate curriculum? What should be the balance between required courses and electives? What defines a major?

Often faculty questions about purposes became questions about curricular modifications and departmental power rather than about knowledge and learning. For instance, what courses should students take? When should they take them? How many courses can students choose, and from what menu? What should students read? Do interdisciplinary courses water down the curriculum? Ironically, what was supposed to be an effort to connect the purposes of higher education to what students should learn, understand, and make meaningful converted into decisions about what each department would require of its students, into negotiations over how much each department had to “service” (a commonly used phrase) general education courses, and into a fixation by individual faculty members on the courses they had to teach and their course reading lists (Cuban, 1999).

During these debates, little attention was paid to learning itself: how students learned, what kinds of knowledge they acquired and how long they retained it, how applicable the knowledge was for students’ lives, or whether the methods of teaching and of assessing student learning were the best ones available. Curriculum was of central importance to professors, since most spent their time teaching and put substantial efforts into reading lists and testing students’ course knowledge. Yet learning itself and the appropriate pedagogy were rarely addressed, beyond students fulfilling course requirements and professors preparing lectures and seminar notes. The few studies undertaken to assess the impact of teaching on student learning had little effect on how professors went about their business.

Not until the widely publicized decline in SAT scores in the late 1970s did the question of learning begin to occupy a noticeable place in higher education, and even then the initial reaction was to blame forces external to postsecondary education—low academic standards in elementary and high school, too much time watching television or, later, playing computer games, even the breakdown of the family—or to complain that open and low college admission requirements had reduced student incentives to learn. Academia itself did not take seriously questions about the relationship between what was taught and how it was taught, on the one hand, and student learning, on the other. Only with the challenges to higher education in the late 1980s over the price-returns squeeze—soaring tuition increases and growing costs—did student learning become a serious agenda item, especially as public and political criticisms of the amount and quality of teaching mounted.

The faculty’s concerns with learning and teaching have inevitably been translated into questions about what to teach and when to teach it, questions answered primarily within the academic departments and in terms of each academic discipline. What constituted the discipline’s most important scholarly questions? What were the discipline’s most appropriate methodologies? What were the cutting-edge specializations within the discipline, and could the faculty teach them? These were useful and relevant questions, but they were not about issues of student learning.

It is not hard to explain the relative absence within higher education of discourse over student learning, of sustained discussion of the effectiveness of courses, or of attention to how well students comprehended what they had been taught (Association of American Colleges, 1985; Wagener, 1989). Little incentive existed for faculty and administrators or, for that matter, parents and students to worry about what students learned. As long as the system grew in numbers and wealth and everyone presumed that professional success and income returns were tied to college graduation, the breadth, depth, and content of classroom learning took a distant second place. And with faculty focused on their own disciplines—on their capacity to understand and teach the primary questions of their discipline—they saw little need to ask questions about the relationship of student learning to citizenship or vocational responsibilities.

There was a second reason why student learning was so little addressed. Classroom teaching became associated with academic freedom. What professors did inside the classroom had to be defended against external threats—from McCarthyism, conservatives and religious fundamentalists, leftist radicals, politicians, administrators, and ultimately from the students themselves. The defense of academic freedom had the effect of making the classroom a “private” domain—as the widespread faculty disregard of student evaluations often made clear. Any questions about what happened in the classroom, even whether students were learning anything, were viewed as threats to the faculty member’s liberty. The transactions of the classroom, teaching and learning, needed to be excluded from serious observation and evaluation.

Instead of concentrating on learning, American higher education focused on organizing academic content and delivering it. Colleges experiment with technology and new approaches to teaching even less than elementary and secondary schools do. Sad to say, the recent versions of this, like the use of PowerPoint presentations, come across as pathetic, usually little more than lectures with the lights dimmed so that the students can read the slides and enjoy the occasional animated features. Lectures and seminars dominate the presentation of knowledge, with the former often directed at large numbers of students. After the 1960s, greater informality developed between faculty and students, with professors and students similarly dressed and referring to each other by first names. Informality may have enhanced collegial feelings between professors and students, but it led neither to students learning more nor to any substantial change in the delivery of information to students.

The lack of interest in pedagogical experiments reflected the dominance of substantive academic content over instructional values. The ascendant model of academic knowledge derived from the research universities. It was regularly contested, as evidenced by the various efforts to introduce specially constructed general education courses, to involve students in hands-on clinical or practical experiences, or, most recently, to engage students in community-based service learning. Yet the dominant notion of higher education’s knowledge base has remained: students should learn what the professors know, and the most important kind of knowledge professors know is conditioned by the research community and its disciplines. Whether through departmental structures, the organization of course catalogues, reading lists, or the requirements for majors, the patterns set by the research universities have become standard for most of higher education, especially as research universities became the source of the overwhelming number of professors in higher education. Alternatives continued to exist, but they were precisely that, alternatives to the dominant pattern of belief and practice.

This argument, of course, risks generalizing developments at the research universities into what is characteristic of all of higher education, reducing the distinctions between liberal arts colleges, comprehensive universities, and community colleges to mere caveats. That is not my intention. Sectors of higher education and individual campuses differ, often in substantial ways. Nonetheless, I am persuaded that in the organization of knowledge and its relationship to student learning, the research universities have come to dominate the discourse and remain the most influential model. To quote Richard Freeland (1992, p. 118), “The central tenet of this model was that the university whose faculty was most productive in research, as measured by publications in important scholarly outlets and… by success in attracting outside funding, was the best university. The model incorporated a clear hierarchy of values: it celebrated modern, scientifically oriented research above traditional forms of interpretive or synthetic scholarship; investigation of basic problems above applied work and therefore the arts and sciences above professional fields; research over teaching; and graduate-level training above undergraduate education. It also retained more traditional indicators of academic prestige: selective admission policies, residential facilities, and strength in the liberal arts and the elite professions… by becoming research universities, leading institutions altered the terms in which other campuses, occupying positions of lesser prestige, understood the requirements of upward academic mobility.” Freeland’s summary statement, written in the early 1990s, needs revision; contract and therefore applied research, professional education, and fundraising as distinct from research grants have all assumed substantially greater influence throughout higher education. The main thrust of Freeland’s analysis nonetheless remains accurate.

Higher education’s curriculum underwent broad, substantial change after 1970. Its size exploded, and it became chaotic. Even small colleges produced course catalogues that made any notion of a focused curriculum anachronistic. Large schools had course catalogues the size of city telephone books; as early as 1975, Cornell University needed 700 pages to present its undergraduate course offerings (Rudolph, 1977, p. 1). It was not anarchy, since some requirements continued at almost every institution, but the orderly progression of courses from freshman to senior level that had previously constrained choices and demanded that majors in a discipline go through a set of hierarchically ordered courses, from introductory surveys to more specialized advanced seminars, gave way. By the 1980s the range of what a student could choose to satisfy degree requirements, the sheer quantity of courses offered, and the difficulties in distinguishing between elementary and advanced courses were frequently overwhelming. And, as increasing numbers of students began taking courses at more than one institution or began adding internet courses to their dossiers, the idea of an even partially coherent curriculum simply disappeared.

At the same time higher education accepted and even exaggerated the growth of the parallel curriculum in which student life flourished. Building upon a tradition of student interests separate from the academic interests of the faculty, colleges and universities increased the number and intensity of student services, built student centers, expanded residential facilities, provided health care and career counseling, supported an increasing number of clubs, and, in many cases, created a mega-sized intercollegiate athletic juggernaut that frequently defined the image of what a university was about. Much of the justification for student life rested on the view that such activities help students learn the teamwork, cooperation, leadership, and responsibility that will make them more effective professionals and citizens, implying that academic learning contributes a modest amount, at best, to these goals. Its value has also gotten a huge boost from the recognition that many students need academic, social, and psychological supports that either cannot or should not be met by professors. As competition for students grew and as student life itself became fragmented into highly specialized clubs and activities, student services became a place where almost anything was justifiable. So powerful had the extra-curriculum become that by the last decades of the 20th century, it was routinely being referred to as the co-curriculum, an extraordinary recognition of its role. Although efforts have emerged to bring the co-curriculum into closer connection with the academic curriculum—e.g., through living and learning experiments—the co-curriculum remains a parallel and separate domain, by and large predicated on the absence of faculty, to the mutual satisfaction of students and professors.

These developments were tied both to the triumph of the faculty as the principals in higher education and to the power of students to achieve their demands. For the faculty, the dramatic explosion in courses opened the way for professors to teach their specializations, to make what students should study congruent with what the faculty wanted to teach, which was their academic disciplines and their methodologies. The students gained expansive choice in what courses they took and a robust student life in which they could engage. While Americans committed themselves to the extraordinary growth of higher education for all sorts of reasons—national defense and economic development, an educated citizenry, local and regional pride, personal income and vocational gain, the expansion of educational opportunity—the professoriate made the academic disciplines the organizational center and intellectual heart of universities and colleges. The students, with active institutional support, created a parallel and largely independent world.

5.3 The separation of science and morality

  • 2 The section that follows depends heavily on Reuben (1996), which shows how colleges and universiti (...)

The triumph of separate tracks—professors as masters of their domain, students with wide choices within the curriculum and free to pursue student life without academic oversight—was not de novo. It did not just happen after 1945. It had been evolving since the late 19th century, as the disciplines made scientific research and the methodology of science their raison d’être, creating a world that was by and large separate from that of students.2

For most of the 19th century, American higher education assumed that the unity of truth combined science and religion in the service of one another and that religion was the basis of morality. Higher education’s purpose was to reinforce this unity, training simultaneously the intellect and moral character. The curriculum exemplified this, making the connection explicit in a culminating course in moral philosophy, in which students explored the literature of philosophy and theology to confirm their obligations to family, community, nation, and God and to reconcile religion and secular studies. As universities and colleges became connected to national and regional interests and to economic development—marked at the federal level by the Morrill Acts of 1862 and 1890, which explicitly articulated the utilitarian aims of higher education—criticism mounted over the curriculum’s neglect of modern and practical subjects, its failure to offer advanced instruction, and the limitations that theology placed on scientific research. Colleges developed a much broader set of purposes than the traditional one of preparing for the learned professions and public life. New private universities—like Johns Hopkins, Cornell, and Chicago—and older ones—Harvard, Columbia, and Pennsylvania—as well as state universities like Wisconsin, Michigan, and California at Berkeley capitalized on the intensified interest in utilitarian and vocational outcomes, advanced research, and science to become the dominant players in higher education, even as the liberal arts colleges both adjusted to the new climate and justified their more traditional ethical and community responsibilities (Geiger, 1986, 1995; Leslie, 1992).

Initial strategies to reform the curriculum and to advance research tried to reaffirm the traditional connection between religion and science and thus between higher education and morality. The generation of “great university presidents”—Charles William Eliot (Harvard), Daniel Coit Gilman (Johns Hopkins), Andrew White (Cornell), William Rainey Harper (Chicago), and Nicholas Murray Butler (Columbia)—assumed that scientific research would continue to support religion. They hoped to show this by making religion a focus of scientific study. They failed.

By the first decades of the 20th century, efforts to put universities at the service of the moral goals of the classical college while advancing knowledge were in retreat; “the ideal of the unity of truth did not seem plausible to younger intellectuals trained in the new universities” (Reuben, 1996, pp. 3–4). Over the next decades academics came to embrace the separation of facts and values. Facts were what natural and social scientists discovered. Teaching values and having them implemented behaviorally was neither the responsibility of scholars nor the goal of classroom instruction. While liberal arts colleges continued to hold to the validity of morally based instruction and responsibilities, a new generation of university faculty severed the connection between their search for knowledge and moral behavior, between their roles as professors and the institution’s responsibility for student values and behavior. It was not simply the making of the modern university; it was a revolution that worked a fundamental change in American higher education.

By the 1930s, the dominant view of knowledge centered on research in the academic disciplines, structurally organized within academic departments. The advancement of knowledge occurred most effectively when it was specialized, experimental (controlled as much as possible), quantitative, and based on replicable methodologies, and when it sharply distinguished between “pure” and “applied” research (with the former accorded higher status than the latter). Most powerfully, knowledge was best acquired and most trustworthy when scholars removed ethical concerns from their research, achieving ethical neutrality or ethical detachment. Only then could scientific credibility be achieved; only then could research achieve stature and social influence.

These views took hold in differing degrees and forms. They were held and implemented most insistently in the natural sciences, which took seriously the need to separate research from religion and morality. But the new ideology came to dominate the social sciences as well. Sociology, for example, saw the rise of a new scientism, in which facts had to be measured. The direct application of social science to social reform, and social scientists’ direct involvement in it, was rejected, a phenomenon that at the University of Chicago pitted women faculty in a losing battle with their male counterparts for control over the social science disciplines (Bannister, 1987; Fitzpatrick, 1990). The humanities initially proved highly resistant to the separation of morality and science. The New Humanists critiqued the methodologies of science and its assumptions of progressive modernity. Potentially overwhelmed by the success of their social science and natural science colleagues, faculty in the humanities challenged the value and validity of morally neutral research and teaching. Among college and university administrators, few, if any, were willing to dispense with the view that an undergraduate’s character and morality were an institutional responsibility. In the liberal arts colleges, the separation of fact and value, of science and morality, was especially contested (Leslie, 1992).

By World War II universities had made a substantial shift to viewing science as a value-free enterprise engaged in by ethically neutral researchers. If teaching needed to acknowledge moral and normative values, if the institution had to provide for the character-building of undergraduates, so be it. These were not the faculty’s problem; except insofar as moral issues were themselves a subject of scientific research, they were best taken up outside the realm of scholarship. In opposition, many liberal arts colleges, and those in universities who wanted their undergraduate program to be more traditional, sought to overcome the disjuncture between the faculty as scholars and the moral responsibilities of teaching by urging required courses in the humanities and social sciences—as in Columbia’s required Contemporary Civilization courses—in order to promote citizenship education. Administrators, unwilling to challenge directly the faculty’s freedom to specialize and to engage in research, stressed the importance of teaching and faculty advising, as well as the faculty members’ moral character—the professor as personal model. They stressed the importance of the humanities in keeping open the dialogue between scholarship and morality (not incidentally making the English department the academic home of such concerns). And, wherever they could, universities and colleges expanded on-campus housing and gained institutional oversight of the extra-curriculum. Ultimately, however, higher education settled upon a dual-track educational program.

One track involved the formal organization of knowledge—the curriculum—controlled and delivered by an increasingly powerful faculty. The second track—the extra-curriculum—was the students’ domain, loosely coordinated by student life professionals. While college and university administrators regularly stressed the complementary and overlapping nature of the curriculum and the extra-curriculum, leading the latter to be renamed the co-curriculum, the stress was more rhetorical than real. On university campuses the two curricula existed as independent and non-collaborative enterprises. In the liberal arts colleges, professors were asked to and often did breach the divide, although the trend was always toward separation. After World War II, the divide would achieve its apotheosis.

5.4 Triumph of methodology

The enlargement of faculty authority within the research and teaching domains was of extraordinary importance in the history of American higher education. Its story after World War II tends to be told as if it were a natural extension of the knowledge explosion and the potential contribution of knowledge to the national interest, stimulated by federal investment and institutional competition for prestige and dollars. But it is also useful to think of what happened in terms of the triumph of disciplinary methodology. Teaching a discipline to undergraduates meant training them in the methodologies relevant to that discipline. What this meant varied by discipline, but in every field the pressure was toward a model of greater scientific methodological precision, a trend that had the effect of inhibiting the conversation between disciplinary scholars and their undergraduate students, who were interested in many things, methodology not among them.

The evolution of research methodology as the driver of teaching and learning had been underway for some time, but the dramatic acceleration of efforts to achieve methodological precision after World War II was not entirely predictable. The early postwar years, after all, witnessed a tremendous outpouring of rhetoric about higher education and democracy, the importance of general education for an informed citizenry, equality of opportunity, and the utilitarian and practical purposes of postsecondary schooling—a sufficiently broad set of aims to have tolerated substantial diversity in the organization of knowledge. In retrospect, however, there was an eerie duality about the aspirations of the academy and the rhetoric of democracy. On the one hand, democratic and utilitarian purposes gave an enormous boost to higher education’s postwar growth. The combination of knowledgeable and productive citizens and the application of science to economic growth and national defense was irresistible. It was precisely this engagement with the outside world, with successes that were palpable, which brought millions of students and dollars into the industry. On the other hand, the faculty sought to sharpen the academic disciplines’ foci and to create methodological forms that separated their work from the citizens they were educating. The faculty, which in fundamental ways depended upon the postwar expansion of enrollments, was disinclined to make much accommodation to the calls for civic-minded education and the reality of greater student diversity. Even as the enterprise of higher education expanded, and even as higher education claimed utilitarian responsibilities—justifying investments in it and expanding enrollments—the knowledge taught within the academic disciplines became narrower and narrower, more and more based on methodologies, and more and more disconnected from the everyday world of the students (Bender, 1997).

The way of the faculty had considerable merit. Given the organization of knowledge into academic departments based on the disciplines and the incentives to contribute to new knowledge, scholars were wise to construct technically grounded methodologies, with which they earned a distinctive status among their colleagues. The bind professors faced would have been difficult to resolve in the best of circumstances, for they were asked to speak to communities outside the academy as part of their civic and utilitarian responsibilities, yet simultaneously were expected to create a distinctive community of discipline-based colleagues whose language gave them exclusionary status. Mostly, they opted for the latter. In the historian Thomas Bender’s words: “In retrospect it appears that the disciplines were redefined over the course of the half-century following the war: from the means to an end [civic responsibilities] they increasingly became an end in themselves, the possession of the scholars who constituted them. To a greater or lesser degree, academics sought some distance from civics. The increasingly professionalized disciplines were embarrassed by moralism and sentiment; they were openly or implicitly drawn to the model of science as a vision of professional maturity. The proper work of academics became disciplinary development and the training of students for the discipline” (Bender, 1997, p. 6).

Put differently, when faculty in the 1940s debated the curriculum and its relationship to society, they were engaged in discussions about an educated citizenry and the best forms of knowledge to connect their students to their post-college lives. This was the essence of the debates over general education and vocationalism. By the early and mid-1960s, curriculum discussions among the faculty—even with growing controversies about “relevance”—were much more likely to be about how to provide a structured introduction to each academic discipline. Undergraduate education was less about faculty concern for knowledgeable citizens and more about the specializations of each faculty member or department.

The extraordinary growth of higher education produced an enormously expansive industry built upon a shaky foundation, shaky because it was held together by two critical conditions. The first assumed that economic returns to students would grow, that opportunity costs would continue to go down, and that students (and their parents) would always feel satisfied that each year of college was an excellent investment. As long as these conditions held, the actual classroom enterprise made only a modest difference. What happened when professors and students met in the classroom was not all that consequential as long as there was substantial profit in acquiring the degree. The second condition was related to the first: higher education depended upon the success of the extra- or co-curriculum to provide students with the learning they considered most relevant to their success—social skills, leadership, networking, knowledge of the world around them, community and civic participation. As long as the co-curriculum was well supported and thriving, classroom learning was just not that important. When questions were raised in the 1980s and 1990s about high expenditures and high tuition costs and about how much students were really learning—essentially questions about value added—controversy over faculty responsibilities immediately flared into the open. Professors were unclear and confused about why they and the institutions of higher education were being singled out when, after all, they had built such a powerful industry.

5.5 Economics: queen of the sciences

At least part of the discontent with higher education involves the ways the academic disciplines evolved by divorcing themselves from the experiences and concerns of undergraduate students. Two disciplines—economics and philosophy—serve as examples.

No social science or humanities discipline achieved higher acclaim and stature than economics after World War II. Once referred to as the dismal science, economics quickly became a beacon of American higher education, simultaneously able to assert itself as a science and to claim utilitarian value. During the first decades after the war, economics laid plausible claim “to the belief that economists had learned how to manage (if not plan) an economy; that the business cycle was largely obsolete… that full employment was a possibility; economic growth could be maintained; and that the ‘Keynesian revolution’ had given economists the theoretical and practical tools to achieve all these goals” (Bell, 1982, p. 30). Economics’ great transformation lay in the application of mathematical model building and statistical analysis to a broad range of economic problems. In David Kreps’ words, “mathematical modeling, a small piece of the subject until the 1940s and 1950s, became the all-encompassing (some would say suffocating) language of the discipline” (Kreps, 1997, p. 62). Economists parlayed their claims of utilitarian value and their methodological rigor into making economics the queen of the sciences.

The ability and desire of academic economists to transform economic knowledge into an analytic toolbox and harness the power of mathematical model-building was truly revolutionary, for it substantially broadened economists’ ability to make their discipline a science and to understand and resolve complex economic and social problems. Model-building transformed the ways we understand all sorts of activities and behaviors. But it also subordinated economic history, ethics and normative judgments, and the direct observation of the real and messy world to theoretical mathematical models. For undergraduate education, these developments meant that the study of economics was, on the one hand, attractive because of its potential utility, and, on the other, focused on exposure to analytic tools and model-building, which in many cases was more about technical skills than substantive economic issues. Economics for undergraduates became a version of the requirements for first-year graduate students. The undergraduate’s responsibility was preparing to do economics, learning the analytic toolbox rather than studying and understanding economic problems directly (Solow, 1997; Bell, 1982, pp. 23–30, 46–52).

The discipline of economics thus successfully defined itself in the postwar period as a field of study under little obligation to engage in conversations with undergraduate students about economic institutions or about the economic issues that concerned students. The economic literacy necessary for an educated citizenry was not the responsibility of the academic discipline of economics. Undergraduates were required less to study such topics as international trade, labor markets, the historical development of economic conditions, or the relationship between politics and economics than they were to understand the language of mathematical modeling and the use of statistical techniques.

These conditions were not uniform. The day-to-day teaching in college and university classrooms; the need to mount a full range of courses to satisfy teaching responsibilities and, not so coincidentally, to justify the appointment of more economics professors; and the academic limitations of students meant that exposure to the methodological toolbox was not the only agenda. Economists at liberal arts colleges occasionally found themselves at odds with the emphasis on pre-graduate training within the undergraduate curriculum, since the teaching tradition at their colleges required a more comprehensive approach (Barber, 1997). Nonetheless, the heart of the discipline, the path by which economics gained promotion and prestige, lay in an approach which was resistant and even hostile to what undergraduates expected economics would be about. Little wonder, then, that when given the opportunity, undergraduates flocked to economics-like courses in other disciplines and interdisciplinary programs, in business schools, and in other professional schools. Indeed, it is plausible to argue that for undergraduates the most interesting economics was being taught outside economics departments.

There are a number of caveats with which one could counter my argument about the absence of conversation between the discipline of economics and undergraduates. One, commonly proffered by economists, focuses on the students and other faculty rather than on the discipline’s development: the decline in academic skills among undergraduates and their disinclination to take academic work seriously after the 1960s made it difficult for students to learn the tools necessary to study economics. Often implicit in this view is that there was a decline in standards among the other academic disciplines and that the growth of economics courses for non-economists in professional schools and in other departments further lessened the enthusiasm of undergraduates for serious learning. These arguments may have some measure of truth, but they are partial at best and tend to deny that economists themselves played a role in the process.

A second caveat suggests that my description of an absence of conversation is romantic in its implied view that there ever was a conversation between economists and undergraduates prior to the dominance of mathematical modeling, and that it neglects the substantial widening of the field of economics since the 1970s. The former point is probably correct, and I do not mean to portray an idealized notion of economics professors in conversation with their students before the 1950s. We know enough about the historical evolution of the disciplines and of student cultures to disavow a golden age of universally curious and academically committed undergraduate learners (Horowitz, 1987). But I do think that the idea of conversation has to be understood within the context of the enormous growth in stature of economics and what I believe was a genuine thirst for knowledge about economic issues among students. The case is about foregone opportunities to improve learning.

Economics did broaden its methodological focus and substantive concerns in the decades after 1970. The initial impact of mathematical modeling between 1950 and the mid-1970s had been the elimination or narrowing of the range of topics addressed in teaching and research (Kreps, 1997, pp. 65–74). That narrowing receded, in part under pressure from the 1960s to treat non-rational behaviors, uncertain goals, and disequilibrium with the same regard as the trinity of assumptions—rationality, goal orientation, and systemic equilibrium—that had dominated the previous twenty years, and in part through the growth of “professional school” economists who focused on what they considered real-world problems.

A concrete example involving the concept of educational choice might help clarify the argument. Econometric models stress the common and shared knowledge held by decision-makers, the application of rational self-interest to decision-making, purposeful action in pursuit of well-defined goals, and a resulting equilibrium as educational providers and educational seekers adjust to one another. This model of rational decision-making, with equally available information and clarity of purpose, fails in the face of reality: when parents and their children make educational choices, they rarely have the same knowledge as everyone else; they often hold racial and religious preferences and prejudices; they may be poor or wealthy; and they decide under various kinds of peer and familial pressures, as well as sibling rivalries. These factors can be put into econometric models, but they can also be examined in ways that invite conversation about the messiness of choices about education. Students are more than able to recognize this messiness, for they encounter it in myriad ways. The more the messiness is acknowledged, the more “real” the discussion becomes to students, who observe and experience unpredictability and irrationality all around them. To the economist, however, the messier the analysis, the less satisfying the approach. That, I think, separates undergraduates, who are willing to engage and may even delight in messiness, from the academician’s desire for methodological tidiness.
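To make the contrast concrete in code, the following is a minimal sketch, assuming two hypothetical schools with made-up payoffs and costs; nothing in it comes from the text or from any actual econometric model. It shows how the standard assumptions (shared information, well-defined goals, rational self-interest) reduce educational choice to a mechanical calculation, and its closing comments mark the messiness the objective function leaves out.

    from dataclasses import dataclass

    @dataclass
    class School:
        name: str
        expected_payoff: float  # assumed known exactly, and identically, by every family
        cost: float             # likewise treated as common knowledge

    def rational_choice(schools):
        # The model's core move: every decision-maker applies the same
        # well-defined goal (maximize payoff minus cost) to the same information.
        return max(schools, key=lambda s: s.expected_payoff - s.cost)

    options = [
        School("State College", expected_payoff=50_000, cost=8_000),
        School("Private University", expected_payoff=60_000, cost=25_000),
    ]

    # Identically informed and identically motivated, every family makes the
    # same predictable choice, and aggregate behavior settles into equilibrium.
    print(rational_choice(options).name)  # -> State College

    # What the sketch, like the model, omits is the messiness named above:
    # unequal information, prejudice, wealth differences, peer and sibling
    # pressure. None of it appears anywhere in the objective function.

The point of the toy is not the arithmetic but the shape of the reasoning: everything the paragraph calls messy would have to be forced into the single payoff-minus-cost line, or left out of the model altogether.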

The development of economics as a discipline suggests how disciplines within the academy could become methodologically more sophisticated and precise and grow in stature while at the same time becoming less and less available to students. That is not, of course, a remarkable insight. More revealing, as with the absence of concern about what students are actually learning, is that there was little to prevent, and little protest against, the widening gap between the faculty in the discipline and the undergraduate students who are ostensibly the faculty’s responsibility. Had it occurred simply in economics, such developments would have mattered little. But they occurred in other disciplines with much the same impact: as a discipline became more technical in its methodology, it lost its connection to undergraduate students. The end result was the creation of a powerful discipline-based academic organizational structure ostensibly designed to expand student learning but which failed to engender a conversation between faculty and undergraduate students on the serious issues that bound them as citizens.

5.6 Philosophy: the analytic (non) conversation

The discipline of philosophy took a path quite similar to that of economics, but with even more devastating consequences for the conversation between itself and undergraduates. From outside the discipline, it appeared that philosophy was poised for a substantial burst of enthusiasm and interest among students as World War II ended. Such issues as the nature of evil, social purposes, civic responsibility, and the role of the individual and the state—all of which had historically fallen within the domain of philosophy—looked ready to find a substantial student audience. This did not happen. Instead, philosophy opted for a narrowing of subject matter and a methodological purity designed to separate itself from other, less rigorous disciplines and from philosophy’s own history. The dominance of analytic philosophy was first and foremost a triumph of methodology, with its stress on precision and clarity, on tidiness in observing, understanding, and explaining the scientific enterprise and the meaning of language. Its model was scientific precision and mathematical logic, and it depended heavily on the “formal language of logical calculi,” a language “that combined clear structures of logic, mathematics, and the empirical sciences” (Nehamas, 1997, pp. 212–214).

As had occurred with economics, the outcome built upon prewar developments, but it was not inevitable. In the half-century before the war, philosophers engaged in a ferocious debate over how to respond to the growing authority of science and the trends toward specialization and professionalization within the academy. The struggle, as Daniel Wilson notes, “set the stage for the rise of technical, professional philosophy, later embodied in logical positivism and analytic philosophy” and in the process, philosophers “unintentionally created the basis for philosophy’s growing marginalization in 20th-century American culture, as the community of philosophic discourse contracted to a relatively small professional circle” (Wilson, 1990, pp. 1–2).

That outcome appeared self-evident in the years following the war, but the prior struggle had been a genuine one, and the minority view kept latent the questions about community and civics that connected John Dewey and other pragmatists to the social concerns of the late 20th century. Yet the victors were clear: logical positivism and analytic and linguistic philosophy gave “substantive coherence” to the discipline, providing it with legitimate questions and methodologies. This approach to philosophy was self-conscious in rejecting the primacy of values, emotions, and normative judgments. Nor were philosophers to think of themselves as part of the same enterprise as historians and literary scholars, with whom they had once been linked. Rather, philosophers in postwar America came to think of themselves “as participants in the enterprise of science” (Nehamas, 1997, p. 212).

There is a lot to be said for this emphasis on the precise, the logical, and the verifiable, for it brought to the unfocused, the vague, and the irrational ways of thinking that potentially allowed for the clarification and resolution of differences. But, as with mathematical model-building among economists, analytic philosophy had a way of driving alternative methodologies to the side and seeking to deny the messiness of ordinary life. In doing so, the discipline of philosophy curbed its capacity to speak to wider audiences and, in the context of higher education, its conversation with undergraduates. Academic philosophy retreated from the public domain; it observed the world but refused to engage in it. The irony of this was patent: philosophy had the potential to (and often did) address issues of interest to diverse audiences, but it did so in extremely technical terms that excluded rather than invited participation. Analytic philosophers themselves showed little inclination to open the wider conversation. As Stanley Cavell put it in 1964, “For any of the philosophers who could be called analytical, popular discussion would be irrelevant... for the analyst, philosophy has become a profession, its problems technical; a non-professional audience is of no more relevance to him than it is to the scientist” (quoted in Daedalus, Winter 1997, p. 224).

In the decades after 1970, philosophy broadened both the issues it addressed and its methodological approaches, in ways that parallel economics. To a substantial degree, Thomas Kuhn’s The Structure of Scientific Revolutions (1962) initiated the process of rethinking philosophy by reintroducing history into its daily work. A turning point came with the publication of John Rawls’ A Theory of Justice (1971), which had an impact across the social sciences and humanities, followed by an expression of interest by philosophers in issues of public policy, civic and ethical judgments, and feminist ideologies, and by the resurgence of John Dewey and Deweyean concerns with public life and problem solving. These developments have affected the teaching of philosophy within philosophy departments, but, as has also been the case with economics, the greatest influences have been felt in the teaching of applied philosophy in other arts and sciences departments and in medical, business, education, and law schools, where ethical issues and European continental philosophers like Habermas, Foucault, and Derrida have found homes (Nehamas, 1997, pp. 217–218).

For all their substantial differences, then, philosophy and economics have traveled parallel paths. The promise of discourse between the two disciplines and the growing numbers and diversity of undergraduates was short-circuited and left unfulfilled as the disciplines focused on methodologies that stressed mathematical models and mathematics-like logic, showing little inclination to take into account the messy world that students experienced and the questions they posed about their lives and society. About the two disciplines, historian Carl Schorske writes, “The intellectual quest for scientific objectivity and the professional advantages of a value-free neutrality reinforced each other in the establishment of a new methodological consensus as the basis of the discipline [of economics]… the analytic philosophers purged or marginalized traditional areas of concern where values and feelings played a decisive role. Ethics, aesthetics, metaphysics, and politics were all for a time equally excluded as the source of pseudo-problems that could not be formulated or addressed with the rigorous canons of epistemological reliability by and out of science” (Schorske, 1997, pp. 296–299).

Recently, efforts to broaden the conversation across disciplines by expanding topics and acknowledging alternative forms of knowledge and ways of knowing have occurred, but these have affected teaching and learning more outside philosophy and economics departments than within them. For the most part, the failure of conversation between the two disciplines and undergraduates has been viewed as unimportant by those within each discipline and, in any event, has often been ascribed to the failure of the students. Indeed, a kind of “we are not to blame” defense has set in, claiming, as Alexander Nehamas has written, that the public has “no patience for any position that is not virtually self-explanatory, refusing to take seriously any view that requires careful thought and that cannot receive practical application without serious and sometimes relatively long preparation” (Nehamas, 1997, p. 220). Such a view might make sense if university and college faculty were not so dependent upon the public and students to pay the bill.

5.7 Generating a learning conversation

A real and perhaps inevitable tension exists between questions about students’ learning—how much do they know, how do they learn, how do their experiences connect (or not) to their learning, what issues might challenge their minds or transform their ways of thinking and doing—and the questions faculty ask about the academic disciplines—what is known, what are the disciplinary (or interdisciplinary) questions, how should the discipline generate its questions, what are its methodologies. The different questions point in different directions. Pursuing one set rather than the other leads to quite a different understanding of what is important in the learning process.

Many professors weigh these differences seriously and, at their best, synthesize the varied strands into a creative tension. But higher education as an entity, colleges and universities as institutions, and academic departments, including many interdisciplinary programs as collections of discipline-trained professors, have not historically made the relationship between the questions posed by how students learn and the questions posed by the disciplines a center of attention. Professors tend to think about transferring knowledge based on the kinds of questions they might ask as disciplinary scholars. Students, in contrast, tend to think of knowledge that helps them understand and act in the world around them. Even in the best of circumstances this makes it difficult and, too often, seemingly impossible to sustain a conversation about learning between professors and students. The absence of such a conversation has made the academy itself vulnerable, for too few students believe that the faculty or academic learning is the soul of higher education.

Do not misunderstand me. The evolution of the disciplines brought tremendous advances to our understanding of the world, substantively and methodologically. The disciplines have shown us that there are rigorous ways to ask questions, probe for answers, and summarize findings. In a relativistic world, they remind us that one person’s opinion is not necessarily as correct as another’s. The most important lesson we teach undergraduates is that some ways of analysis comprehend the universe more effectively than others. And, as research itself has become more interdisciplinary, so too has teaching. That said, the evolution of the academic disciplines has tended toward a rather narrow definition of what Lindblom and Cohen (1979) call “usable knowledge.” The language and methods with which the disciplines work make it difficult to appreciate that other lenses and methods are also valid ways of knowing. The disciplines in this way have worked to exclude a broader public—in Thomas Bender’s phrase, they have engaged in “academic enclosure” (Bender, 1997, p. 7)—thus denying access to their knowledge and dismissing what the public knows and experiences as not being worth very much.

This combines with the tendency of the academic disciplines to misunderstand the discrepancy, in Charles Lindblom’s words, “between widely accepted scientific ideals and actual feasible practice, a discrepancy that was not faced and intelligently dealt with but rather swept under the rug” (Lindblom, 1997, p. 233). Lindblom is referring specifically to the tensions within political science in the 1940s and 1950s between developing a science of political analysis and matching that science to actual real-world accomplishments. Similarly, Rogers Smith (1997) has argued that political science has historically wanted to be a pure science and to contribute to buttressing democracy without recognizing that the desire has led to ideological blinders and has been impossible to accomplish in any event. Although their disciplinary reference point is political science, Lindblom’s and Smith’s arguments apply more generally. The academic disciplines sought scientific and methodological purity while neglecting to understand that subject matter itself became constricted and that ethical neutrality brings its own ideological baggage (Schorske, 1997).

53The irony is hard to overstate: higher education entered the last half of the 20th century with an optimism never before seen in its history. A critical premise was that it could educate large numbers of people. And yet, even as students flocked to universities and colleges in droves, as governments expended vast sums in its support, and as local communities battled for the establishment of new campuses, scholars defined their fields in ways that made it difficult for people to understand them and that proclaimed the lack of communication did not matter. Not surprisingly, when faced with skepticism from both outside and inside higher education, disciplinary scholars have rarely been able to convince their critics. Even more telling, they have often viewed the skeptics and critics as irrelevant or so threatening as to require united defenses against the barbarians at the gates, leading to the view that the best defense is to convince outsiders that the subject matter is too complex for them to understand and that they should, in effect, leave it alone.

54Some of this has shifted. It was impossible for higher education to ignore the civil rights movement and racial conflict, the discovery of poverty and inequality, the protests over Vietnam, and the counterculture, especially when students were bringing the issues onto campuses and extending them to include the ways they were treated and taught. With the scandal of Watergate tarnishing the presidency, the shock of stagflation during the 1970s, and fears of a declining economy, the notion that scholarship and teaching should be immune from examination and revision was hard to sustain. Repeatedly, events outside higher education—most recently, the worldwide financial collapse—forced reexaminations, demanding that colleges and universities look again at what students are actually learning. Rogers Smith's conclusion about the impact of the 1960s and 1970s on political science is broader and takes on even more power today: "In that conflict-ridden era, political science could persuasively be accused of offering models that failed to reveal and challenge unjust inequalities; to produce any behavioral laws; or to predict, explain, or provide effective social guidance concerning the startling events then occurring. And most damning of all, to an embarrassing extent, the political science literature failed even to discuss these topics" (Smith, 1997, p. 260). Such a view applies just as forcefully today to the thousands of professors and scholars in professional schools whose work failed to focus on the realities underlying the economic, social, and political institutions of the early 21st century.

55That said, many scholars have changed the way they go about their business, and genuine debates over knowledge, its relationship to culture and values, and its presentation to students have occurred. Perhaps the most dramatic of these changes is the assertion of normative claims and the explicit discussion of values in scholarship, challenging the neutrality of method that the disciplines held so dear. New topics have been invented, in part as a result of "normative claims" around inequality, justice, discrimination, the influence of gender, ethical behavior, and the study of the previously unnoticed (Schorske, 1997; Kimball, 1988). One manifestation is the willingness with which philosophers contend with one another over public issues of morality and justice, as in the 1997 brief to the Supreme Court over the right to assisted suicide (The New York Review of Books, March 27, 1997). Another is the effort by educational researchers to bring their scholarly understanding of the effects of race-based financial aid to the Supreme Court (Linn and Welner, 2007). This shift to a more value-laden scholarship and to new topics that reflect normative concerns has provoked greater interest in the historical evolution of issues and of the disciplines themselves, asking in particular how things came about and why we study them in the ways we do, and thereby opening up still newer approaches to fields of study (see also Walzer, 2006; Ackerman, 1991, 1998).

56Real-world experiences and direct observation have become fashionable. Research on "natural experiments" has grown in importance. The most remarkable methodological development is the immense popularity that ethnographic research has achieved, and it is there that some of the most interesting methodological debates occur—about the immersion of the scholar in the life of the community being studied, about the relationship between those being studied and those doing the studying, and about how replicable the findings are. These debates attest to a methodological shift toward qualitative research that seemed unlikely only a few decades ago. Undertaking scholarly research, quantitative and qualitative, on problems drawn from the experiences and dilemmas that people and institutions face has also increased the emphasis on the interaction between actors and structure, making indeterminacy and uncertainty a more prevalent conclusion than was previously thought appropriate, wise, or scholarly (Lindblom, 1990). Disciplinary boundaries for researchers have blurred, and many scholarly questions are now generated by the dilemmas that people and institutions face, leading researchers to pursue whatever disciplinary approaches seem useful. Often this has had teaching consequences, as more faculty than ever before teach in explicitly interdisciplinary undergraduate programs. More faculty who were trained within a discipline are thus doing research and teaching across disciplines; more undergraduates are enrolling in interdisciplinary majors; and more colleges and universities are establishing interdisciplinary teaching and research programs.

57The growth of interdisciplinary research and teaching leaves higher education in an awkward organizational dilemma. Large numbers of faculty and students are engaged in interdisciplinary studies, but discipline-based departments remain the dominant organizational basis for decision-making, with the departments often acting as if each discipline were an isolated and autonomous entity. With reference to literary studies and English departments, Catherine Gallagher writes: "[We] have applied ourselves to the building of interdepartmental, rather than departmental, institutions: humanities institutes, interdisciplinary journals, women's studies programs, ethnic studies programs, film studies, team-teaching programs, and the like. While we attended to these institutional tasks, we avoided translating our ideas into coherent graduate programs… This fact may indicate that we are in the midst of an enormous institutional shift away from the traditional departments even though we continue to locate our professional training inside those [departmental] structures" (Gallagher, 1997, p. 152). That graduate doctoral programs have been so slow to acknowledge these shifts is especially disturbing, since the shift toward interdisciplinarity is so congruent with how many researchers actually go about their business.3

58The rise of the professional schools and professional programs to prominence and the consequent diminution of the arts and sciences—phenomena that evolved rapidly in the 1990s under pressure to produce more "real world" and vocationally oriented programs—suggest that the traditional arts and sciences disciplines have had a difficult time engaging students in conversations about their work. There has also been a growing orientation toward theoretical concerns, with contradictory results. Current theories almost always bring issues of race, gender, social class, ethnicity, and culture into the classroom. They tend to emphasize the historical moment, power and authority, the interaction between actors and structure, and the relative nature of values. These theoretical interests have thus had the effect of making scholarly questions seem both immediate and controversial, a scholar's dream and a student's delight. And yet the fascination with theory has many of the same ingredients as the economist's mathematical model-building and the philosopher's insistence that only logical analysis matters: it communicates a view that only those who understand the theory and the language, who have, in effect, the right theoretical toolbox, can engage in the debate.

59The developments described above have created tremendous uncertainty in scholarship and in teaching. What is the core of each discipline? Should there be a core? What do students need to know? Not all the disciplines have been equally affected by the debates. English departments are engulfed by them. History departments have diversified their understanding of what students need to know without necessarily tearing themselves apart. Economics and philosophy departments have often stood their ground, although the financial crisis that began toward the end of the first decade of the 21st century may have changed that. Certainly economists and philosophers outside of economics and philosophy departments have been active in taking up new methods and topics. Yet for all the differences among the disciplines, questions in higher education about what is taught, what should be taught, and how much is being learned have begun to have influence. Often initiated by external agencies expressing critical doubt about the amount and quality of learning occurring among undergraduates, and sometimes taking shape as arguments over political values or between new and old scholars, debates about the quality of what college students are learning have moved to the fore. For some within higher education, the debates are treated with scorn, as an intrusion into their academic freedom to teach what and how they wish. Among others a kind of mournfulness appears, as if an orderly world of the past had been shattered, a time when history, not women's history or African-American history, was taught and learned. But sometimes there is enthusiasm about addressing questions of teaching and learning, an enthusiasm generated by the possibilities of change.

60Clearly, questions about the disciplines and their relationship to undergraduate learning are not easily answered. Students now have few required courses and many choices, and the size of the curriculum remains unwieldy, testimony that faculty specialization remains dominant. It is almost impossible to tell the difference between elementary and advanced courses, except perhaps by the number of students enrolled in them. While it is fashionable to argue that the dismantling of a once orderly curriculum was due to a failure of nerve and the collapse of faculty authority in the face of external conditions, the curricular disorder of the last decades is part of a disciplinary revision that began at least as early as the 19th century and was rooted in the dismantling of what had once been the core of each discipline. The canon may have been challenged from outside higher education, but its breaking occurred from within, as discipline-trained faculty looked for new problems and alternative ways to resolve them.

61One should not underestimate the complexity of generating a conversation about the disciplines and their relationship to student learning. It is not easy to determine what really matters within a discipline when almost anything can be studied and a variety of methodologies are appropriate to its study. We know incredibly little about the relationship of knowledge to how students learn. Nor is it easy even to hold on to the notion that any discipline is a unique entity when so many of the same or similar issues are studied in multiple disciplines and in similar ways, whatever the professional training of the scholar. Add to these genuinely complex dilemmas the tendencies to view all potential changes through their marketability (whether they can be sold to students and funders), to phrase them in politically charged terms, or to treat them as a cover for fiscal cutbacks, and the enormity of the problems is apparent.

62What we do know is that students are badly under-learning and that colleges and universities seem neither capable of reversing the situation nor even willing to try. As Derek Bok (2007) persuasively argues, many students show little improvement in writing, moral reasoning, critical thinking, and quantitative skills. Most students do not learn a foreign language, seem to develop few new cultural and aesthetic interests, and do not acquire the skills one might consider necessary to participate as informed and active citizens in a democracy. This would seem to rise to the level of scandal, yet professors pay almost no attention to this evidence when they teach or discuss teaching, the latter an all too infrequent occurrence.

63In fact, when looked at from the perspective of undergraduate students, the current situation offers marvelous opportunities, for it suggests ways of looking at scholarly dilemmas that can and ought to be appealing to students, especially as the undergraduate student population itself now runs the age and experiential gamut. Realizing the possibility of a genuine and vigorous conversation between students and faculty, however, will require both a commitment on the part of the faculty to that end and a willingness to acknowledge that conversation between students and the disciplines requires a shared sense of participation and worth. And that is not easy to come by.

Notes

1 Derek Bok (2003) makes a version of this argument with regard to intercollegiate athletics and expresses worry that the same thing is happening with regard to contract research. My view is that the research enterprise has already achieved the power that now resides in intercollegiate athletic programs.

2 The section that follows depends heavily on Reuben (1996), which shows how colleges and universities redefined their traditional responsibilities for the moral and character building of students to accommodate the new expectation that the faculty’s primary role was to become research scholars. Reuben also argues, as I do, that the growth of the extra-curriculum was directly connected to changes in the organization of knowledge and the emphasis on research.

3 On calls for graduate research training based on interdisciplinarity and real-world concerns, see Yehuda Elkana (2005).

© Central European University Press, 2010

Terms of use: http://www.openedition.org/6540