
3. NSF and DARPA as Models for Research Funding

An Institutional Analysis1




The Federal government expends roughly $33 billion annually on scientific research and development in academic institutions, or 60 percent of total academic R&D funding. The former figure represents roughly one percent of U.S. GDP. These funds are allocated through a number of different government agencies and organizations, each operating in a somewhat different way. This study is designed to identify the different organizational models through which these funds are allocated to academic research and to make a very preliminary assessment of how these models affect the way researchers behave and the products their work produces. This has important implications for national science policy and for the emergent field of “the science of science policy”.

The study grew out of a much narrower project focused on the attempt to create an agency within the Department of Energy designed to foster radical innovation in energy technologies. The new agency, the Advanced Research Projects Agency-Energy (ARPA-E), was modeled on the Defense Advanced Research Projects Agency (DARPA), an agency in the Department of Defense (DOD) credited with having generated a variety of new, discontinuous technologies. DARPA was generally contrasted with other agencies in the DOD, but more particularly with the National Science Foundation (NSF) and the National Institutes of Health (NIH), which were considered more cautious and conservative, and which fostered more continuous or incremental technological developments.

It rapidly became apparent, however, that the critical characteristics of the DARPA model—if indeed there was such a model—were not obvious. The project was consequently restructured to focus on DARPA as an organization and, subsequently, on the attempt to identify what was peculiar about DARPA relative to NSF. Material on NIH and on other funding provided by the Defense Department was also collected, but it is more limited in scope.

From the very start, the project has been conceived in the context of the broader debate about the effectiveness of government, i.e., public sector, initiatives. DARPA attracted our attention in no small measure because of its reputation as a great success in a period when government has been generally disparaged and government initiatives, especially in the promotion of particular industries, enterprises or technologies, have been viewed with great skepticism. In recent years, there has been a revival of interest in active government. The NSF and DARPA have garnered new interest as countries—particularly developing countries—look to the United States for models for the promotion of economic growth via what has become the new mantra of economic development: “innovation and entrepreneurship in the knowledge economy”.

DARPA attracted our attention for a third reason too: the central role the program managers play in its organization and operation, and the power and discretion lodged in the hands of these agents at the base of the organizational pyramid. In this respect, it constitutes a “street-level” bureaucracy, a class of governmental organizations that we have been studying in other contexts and which appears to offer a model for public sector management that is an alternative both to the classic Weberian bureaucracy, widely viewed as rule-bound and rigid, and to the new public management, which uses the profit-maximizing firm in a competitive market as a template for constructing a more flexible alternative.2

This chapter is divided into sections as follows: the first section discusses the methodology and research approach. The second section presents the basic findings. It is divided into three subsections, focusing first on DARPA, then on the National Science Foundation (including some background material on NIH), and finally on the orientation and motivation of the faculty researchers whose work these Federal organizations fund. The third section of the chapter then turns to an interpretation of the results. I conclude with a discussion of some of the broader implications of the study and the further research toward which they point.

I. Methodology and Research Approach

Our study is centered on MIT. It is based primarily upon data gathered at MIT itself and from outsiders with whom our contacts at MIT had worked directly or whom they recommended as particularly good informants. The MIT focus creates a relatively well-defined universe, but obviously limits the generalizability of the results. We discuss those limits in the body of the text.

The focus was dictated by challenges of access. We talked early on with some of the top officials at DARPA, but the agency would not provide us with the data or the names of personnel that would have been required to draw a random sample of researchers or agency personnel, or even to select our informants in a more systematic way.

The study has both a quantitative and a qualitative dimension. The qualitative dimension is based on interviews with key informants. We sought out MIT faculty members who had previously worked on DARPA projects and were knowledgeable about the agency. All of them had received funding from other sources as well, and hence were able to compare their experiences across Federal agencies and, to a limited extent, with non-Federal funding sources. Virtually all of them had experience with the NSF. Some had also received funding, or considered applying for funding, directly from one or more of the military services, from NIH, and from private organizations (e.g., companies, foundations, and the like). We tried to interview the DARPA program managers of the projects on which our MIT respondents had worked, but we were limited to program managers who had left the agency. In total, we held formal, but open-ended, interviews with twenty-two MIT faculty members and twelve current or former program managers and agency officials. Fourteen of these came from DARPA, eight from NSF, and five from NIH.

For the quantitative dimension of the study, we started with a data set of all research projects which received outside funding at MIT in the years 1997–2008. We then linked these data to data on patents, licenses, commercial ventures (startups) and citations in scholarly journals. The bulk of these data was provided directly by various offices at MIT, to which we are greatly indebted for their cooperation. The citations, however, we collected ourselves with the help of a team of MIT undergraduate research assistants.
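To make the linkage step concrete, the sketch below shows one way such a merge could be performed. It is a minimal illustration only: the file names, column names, and the project_id key are hypothetical assumptions for the sketch, not the actual MIT data, which were provided to us privately.

```python
import pandas as pd

# Hypothetical inputs: one row per funded project, and one row per outcome
# record carrying the project identifier it traces back to.
grants = pd.read_csv("mit_projects_1997_2008.csv")   # project_id, agency, ...
patents = pd.read_csv("patents.csv")                 # project_id, patent_no, ...
licenses = pd.read_csv("licenses.csv")               # project_id, license_no, ...
startups = pd.read_csv("startups.csv")               # project_id, company, ...

def count_by_project(df: pd.DataFrame, name: str) -> pd.Series:
    """Collapse outcome records to a per-project count."""
    return df.groupby("project_id").size().rename(name)

linked = grants.set_index("project_id")
for outcomes, name in [(patents, "n_patents"),
                       (licenses, "n_licenses"),
                       (startups, "n_startups")]:
    # Left join keeps projects with no outcomes, recorded as zero.
    linked = linked.join(count_by_project(outcomes, name)).fillna({name: 0})

# Compare mean outcomes per project across funding agencies.
print(linked.groupby("agency")[["n_patents", "n_licenses", "n_startups"]].mean())
```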

We focus here on the qualitative dimension of the study, but report preliminary results of the quantitative dimension as background in the next section below.

II. Basic Findings

DARPA

Background

To appreciate the nature of this agency and its role in the debates surrounding Federal research policy, it is important to understand its history and the nature of its success, particularly in a period of widespread skepticism about and disparagement of government and its ability to create and maintain dynamic, innovative programs.

DARPA was created in 1958 in reaction to the launching of the Soviet space satellite Sputnik, and the universal surprise with which it was greeted by the U.S. military, the country’s scientific establishment and the political class. That surprise was widely attributed to the conservative bias of scientific and engineering research, particularly at the National Science Foundation, which provided the major component of Federal research support and was the principal vector of research policy. The conservative bias was in turn attributed to the peer review process through which funding was allocated and the research effort more generally evaluated. A second component of military research was financed by the Offices of Research of the various branches of the armed services, through grants but also through their own laboratories. The obligation of these offices to support the existing infrastructure was a second conservative force in the existing structure. A new agency was then conceived in large measure in reaction to these other organizations. Thus, DARPA was effectively given carte blanche to develop its research projects on its own, unconstrained by the existing research establishment. The institution that we set out to study was the result: partly the product of a mission and ethos defined in opposition to these other agencies, and partly of organizational characteristics created to escape the constraints under which they operated. In this study, we use the NSF as a foil against which to define and understand the DARPA model, since for academic research it is by far the most important of the various institutions against which DARPA was conceived.

Evaluation of Success

The organization that has emerged over time is, as we shall see, distinctive and poses a challenge to the principles of organization that guide these other agencies. But it has proven to be very resistant to systematic evaluation. The resistance is in part conceptual—it is hard to know how the agency ought to be evaluated. But it is also institutional: DARPA has refused quite explicitly to support an effort at evaluation, at least in connection with the present study. It rejected our request for data which would have enabled us to define a list of projects, trace the participants drawn into the agency’s orbit, and assess the impact upon conventional measures of scientific output such as patents and citations in scholarly journals. The agency’s claim is that it has to be evaluated in terms of its contribution to the mission of the armed forces, a mission that is notoriously difficult to define.

The most extensive evaluation effort of which we are aware is a three-volume study by Van Atta et al.3 The study reviews approximately forty projects and develops a narrative account both of DARPA’s contribution to the projects and of the contribution of the technology which emerged in the process to the military mission and to civilian uses. A great strength of the study is that it includes most of the projects upon which the agency’s reputation in the general public or the science policy community rests, and in that sense it both reflects and sustains the esteem in which the agency is held. But the projects were selected largely on the basis of the data available to evaluate them in this way, and there is no effort to map them onto the larger universe of projects in which DARPA has been engaged, or might have been engaged, in the period. Indeed, insofar as the study purports to evaluate the agency’s success, the projects studied are selected on the dependent variable. The study does not include projects that were considered and never undertaken, or undertaken but abandoned or, as apparently is frequently the practice, folded into other very different projects. It is, moreover, difficult on the basis of this study to compare DARPA to other funding agencies with a different organizational structure and approach.

On the other hand, it is not clear how one would evaluate an agency of this kind. Conventionally, programs are evaluated in terms of benefits and costs. But in the case of research on new technologies, the costs are the opportunity costs of research in domains that were never actually explored, whose pay-offs are therefore impossible to know. The benefits, for their part, accrue not only in military preparedness, which even when it is not classified is ill-defined; some of the projects—the Internet, for example—have so fundamentally altered the texture of everyday existence and have such widespread commercial ramifications that the benefits seem virtually infinite. The agency is certainly right: its mission cannot be reduced to the patents and citations in terms of which research results are conventionally measured in academic studies.

Nonetheless, in order to make any systematic comparison, it would be helpful to have some of these conventional measures of success. And for this study, we have constructed such measures starting from data provided by our own institution: MIT maintains a roster of grants and contracts obtained by its faculty and research staff. We have linked that individual contract data to several outcomes which are conventionally used as indicators of success. The granting agencies include DARPA, NSF, and NIH, as well as the various military research offices and a number of nongovernmental funding sources (private companies, foundations).

The outcomes which we looked at are threefold: patents, citations, and technology licenses. In addition, we linked the technology licenses to data on new business ventures. The results of this project will be reported in a separate paper. Preliminary findings with respect to patents, technology licenses and new business ventures are contained in Tables 3-1 and 3-2. As can be seen there, DARPA performs better than any of the other agencies on all of these measures, notwithstanding the fact that the agency explicitly rejects them as measures of its performance. Finally, our own work has been particularly influenced by the research of our colleague Erica R. H. Fuchs, who originally called our attention to the significance of DARPA as a possible model of government organization. Fuchs focuses specifically on the role of DARPA in one particular technology, the technology of computing, and places emphasis on the role of the program manager in creating and maintaining networks of researchers or research communities. We follow Fuchs in this last respect, but the broader range of projects which we examine (albeit much more superficially) and the contrast with the NSF complicate this picture.4

Table 3-1 Patents supported by sponsored research at MIT, 1997–2008. (Table prepared by the authors; reproduced as an image in the original.)

Table 3-2 Startups supported by sponsored research at MIT, 1997–2008. (Table prepared by the authors; reproduced as an image in the original.)

Qualitative Findings

Our findings are best understood against the backdrop of a standard peer-review model, which our respondents seemed to carry in the backs of their heads. Central to this model is an academic or scholarly discipline. The financing agency issues a call for proposals from such a discipline. Researchers from that discipline are invited to submit proposals. A panel from within the discipline is then recruited to review these submissions. The panel ranks the proposals, and the agency awards its funds in order of rank, progressing from the highest-ranked proposal down the list until the funds are exhausted. The funds are typically awarded in the form of a grant, generally with reporting requirements but with minimal review of the research results and no effort to ensure adherence to the original proposal. The model is actually very close to the way in which research funding is organized at the NSF and NIH, albeit, as we shall see, with important qualifications. But the DARPA model is very different. Which of the differences is important for the research outcomes is, of course, an open question, and given the number of dimensions along which practice departs from the standard model, not an easy question to answer.
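The allocation rule at the heart of this standard model is a simple greedy pass down the panel’s ranking. The sketch below is our own schematic rendering of that rule, not any agency’s actual system; the Proposal data structure and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    requested: float   # funds requested
    panel_rank: int    # 1 = highest ranked by the peer-review panel

def fund_in_rank_order(proposals: list[Proposal], budget: float) -> list[Proposal]:
    """Standard peer-review model: walk down the panel ranking, funding
    each proposal in full, until the budget is exhausted."""
    funded = []
    for p in sorted(proposals, key=lambda p: p.panel_rank):
        if p.requested > budget:
            break   # funds exhausted; everything below the line is rejected
        funded.append(p)
        budget -= p.requested
    return funded
```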

The DARPA Model

The central figure in the DARPA model is the program manager (PM). PMs typically come into the agency with a very specific technological idea which they want to develop. They then spend some period of time—often a year or more—researching that technology and the domain (or domains) in which it lies: through their own reading; by visiting and talking to key figures who are thought to have something to contribute to the technology or to its development; and through colloquia, conferences, small group meetings and other encounters, which they typically organize, in which the technology is discussed and various approaches to its development are debated. After this initial exploratory period, the PM works up a plan for development of the technology and writes and issues RFPs soliciting proposals for the various components of that plan. At DARPA, these are known as Broad Agency Announcements (BAAs). The proposals are sent out for review to experts whom the PM selects, within the government (particularly the military) and outside. But the ultimate decision as to which proposals to fund rests with the PM alone. Proposals that are accepted then serve as the fulcrum for a research contract which is negotiated with would-be contractors. Contracts typically include specific performance requirements. Contractors are required to submit frequent progress reports, and progress is continually monitored through these reports and through site visits. Contracts are subject to revision or cancellation in the light of research experience. In addition to the review process, the organization holds regular seminars and conferences, comparable to those out of which the project initially emerged: contractors (who at DARPA are called performers) are required to attend these meetings, where they are expected to report their own progress and to listen and comment on the reports of others.

Given the central role of the PMs, the way the organization operates depends a great deal on the way in which the PMs are recruited and managed. Hence key to the organizational model is the fact that the PMs come from the research community outside the organization, have relatively short tenure in the agency itself (an average of four to five years), and then leave the organization to pursue their careers elsewhere. We have not been able to follow these careers systematically, but it is significant that no obvious pattern emerged in the interviews. Most of the PMs whom we interviewed came from an academic or military background, and afterwards returned to their home institutions, often as research administrators but sometimes as rank-and-file professors and researchers, or, alternatively, joined the supporting consulting firms which surround DARPA (to which we will return shortly). Significantly, all of the PMs to whom we talked thought of their DARPA experience as a high point in their careers, one of the most exciting and stimulating periods in their professional lives (this point is stressed particularly by Fuchs).

The agency operates outside of the civil service recruitment and hiring regulations and salary structure; and although it seems unable to pay exactly what the PMs would earn in the private sector, it is able to negotiate pay scales and contract terms significantly better than those that other government agencies can offer.

Emphasis was placed in virtually all of our interviews upon the fact that the PMs come to the agency with their own project, an idea which they essentially originate and to which they have a personal commitment (respondents talked of that commitment, in fact, as if it were an obsession—although that was not the term they actually used). In turn, it is obvious that the environment in which the agency operates and its structure determine who brings proposals to the agency and which of those proposals, i.e., which potential PMs, are actually recruited and hired.

DARPA is a flat organization, a hierarchy with three levels: a director, a series of office managers, and the program managers. The director has an associate director who works with him or her but does not constitute a separate level in the hierarchy. The director sets the broad outlines of the research agenda. The research itself is grouped into program areas, largely on the basis of technology and mission, and the office managers flesh out the agenda in their own areas. The PMs, coming to the agency with their own ideas, present them to the director and/or the office managers. DARPA cultivates a reputation for being open to new, radical ideas originating outside the organization (indeed, listening to people talk, one is led to believe that the ideas always originate from outside the organization), whether or not they fit the defined program. But the office managers and the director play an active role in recruiting ideas that fit into the program and in screening proposals to ensure that the program has some coherence and direction.

While the program itself originates with the director and is fleshed out by the office managers and the PMs whom they hire, it is conceived in consultation with the military services, with Congress and with the Administration. And it is clear in discussions with the agency that careful attention is paid to cultivating support within the political and administrative environment in which it operates. In virtually all discussions of the program, particular emphasis is placed upon the military mission of the agency and the way in which that mission operates to shape the programs.

Another significant factor shaping the programs is the agency’s mission of supporting radical, discontinuous technological change. That mission, as we have already mentioned, is rooted in DARPA’s origins in 1958 as a response to the Soviet launching of Sputnik and the way in which Sputnik caught the U.S. military and scientific establishments by surprise.

These two factors—the military mission, and the focus on discontinuous technological development—surface repeatedly in interviews. The agency is always asking whether, on the one hand, the research would be undertaken elsewhere in the government or the society, and, on the other hand, whether there is a constituency—already existing or one which could be cultivated—in the military services which would adopt the new technologies and actually deploy them. To the outside observer, the role of the military mission in the operation of the agency—and particularly its importance for the ability of the organizational model to operate in other contexts—is difficult to understand. This is because the technologies under development are often so distant from actual military application that it is hard to imagine a technology for which no military application could be found, and much of what the agency does seems to have no obvious constituency within the military establishment. Nonetheless, the critical role played by the military mission in the success of DARPA was stressed so repeatedly and by so many different informants, especially in discussions of transferring the DARPA model to the Department of Energy in the form of ARPA-E, that one has to believe it is indeed central to the organizational model.

In sum, the characteristics which distinguish DARPA as a funding organization are:5

  1. The discretion and authority lodged in the PMs;

  2. Awards in the form of contracts with specific deliverables and specified performance measures that are periodically monitored. Typically, the performance measures specified in contracts are set unrealistically high—targets which stimulate and focus debate about the characteristics of the technology;

  3. PMs recruited and compensated outside of the regular civil service regulations;

  4. A flat organization consisting of only three levels—the PMs, the office managers, and the Director (with an associate director);

  5. Very short tenure for the organization’s direct employees—three to five years for the PMs, and even less for many of the Directors (with the major exception of Tony Tether, who held the position for the full eight years of the Bush Administration, through 2009).

In addition, two characteristics, which have received little attention in the literature and which we have not discussed so far, stand out:

6) The very extensive use of support personnel hired from outside subcontractors, typically consulting firms rather than independent contractors. These consulting firms—and often the particular personnel assigned by the firm to work with DARPA as well—have a long-term relationship with the agency. The tasks which they assume and the roles they play range from clerical and administrative support to high-level professional functions. The latter include scientific and engineering research, but also key administrative, training and supervisory tasks. Contractors are used, for example, to “orient” (and in effect to train) new PMs and also to advise them in the development and execution of their programs throughout their careers in the agency. Given the short tenure of DARPA’s own personnel, the contractors provide the organizational continuity. And many of the subcontractors who work with DARPA have a long history with the agency, some having actually served as PMs or as performers.

This role of the outside contractors, and particularly the consulting firms, is a complete reversal of the usual relationship between temporary and permanent employees and, from the point of view of organizational studies, is probably the most interesting aspect of DARPA as an institution. Temporary employees typically have short tenure with the organization and are used to smooth out variations in personnel requirements, a buffer against flux and uncertainty. Here, by contrast, it is the short-tenured PMs who form the core of the organization, while the long-standing contractors provide its continuity. The role of these outsiders suggests that a great deal of the much-vaunted flexibility (or malleability) of the organization, and the adaptability which it is supposed to confer on the agency’s program relative to other federal research agencies such as the National Laboratories or NSF, is illusory.

Parallel to the use of consultants, but somewhat different, is the way the agency draws on outsiders to audit and police its contracts with researchers. The outsiders in this case, however, are experienced government employees who are certified to perform this function. The agency looks for the most qualified auditors within the military services, people who are able to use government contracting regulations in a creative way to accommodate the needs of the performers the PMs want to recruit—although the specific examples which were cited in the interviews related to the requirements of private industry, not academics. The academics, however, reported that the auditors were surprisingly knowledgeable about the technical dimensions of the projects and helpful as the researchers tried to explain why they were unable to meet contract requirements—explanations that could then be used by the PM in defending his or her program within the agency.

7) The interaction which occurs in the process of contract administration should be understood as part of a final characteristic of the DARPA organizational model: the continual review and discussion which surrounds a program from its very inception until it is completed or phased out. That discussion takes place through a variety of vehicles: small group meetings; larger and more formal seminars and conferences; formal meetings between the PMs, the office managers and the DARPA director when seeking funding for new program proposals or continuing or expanded funding for ongoing programs; and reviews and auditing of contracts with outside auditors and with the PM. It involves continual questioning of both the means and the ends of the program (Why do we want to have this research in the first place? Why is DARPA, and not the private sector or some other government agency, financing it? How do you assess its success in doing so? What are the proper metrics? Etc.). We will come back to the significance of this review process shortly.

The NSF

The central thrust of NSF research support—and the focus in the present study—is its grant awards for discipline-based scientific research and education. The agency also has a series of ancillary programs and activities which are organized around specific scientific and policy problems, and/or are explicitly interdisciplinary in character (among which is the program which supports our own research project). Other special programs support research institutions as opposed to individuals and sponsor special conferences.

In its disciplinary programs, NSF presents a sharp contrast to DARPA. Its organization and mode of operation resemble the model which faculty members carry in the backs of their minds, as we noted initially. It is basically organized around scholarly disciplines and is designed to support and sustain them. Funds are awarded in the form of grants through a competitive process organized and administered by a program manager. Competitions take place on a regular basis, on a schedule announced and publicized in advance. The NSF does not actively solicit proposals. Applicants select the division to which they wish to apply, almost invariably the division corresponding to the discipline in which they were trained. Submissions are evaluated in a peer review process by a panel drawn from members of the discipline. The panel ranks the proposals relative to each other. Funds are allocated to the various divisions at higher levels of the organization (through a process which we did not investigate for the study). Within each division, funds are then generally awarded to proposals in the order in which they have been ranked by the review panel until the funds are exhausted.

The role of the PM is, however, not as limited as this conventional picture seems to suggest. Program managers at the NSF certainly do not have the wide latitude to define their program and to pick out the investigators who will participate in it that their analogues at DARPA enjoy. However, they are not completely bound by the peer review process. They actually have the power and responsibility to fund proposals out of the order established in the peer review process if, for one reason or another, they believe it is desirable to do so. Furthermore, the attention devoted to the procedures for funding proposals out of rank order in the training and orientation of the PMs implies that this is not an incidental part of their job: they are expected to continually review and evaluate the panels’ rankings, although they may not often actually act to contravene them. When they do fund a proposal out of order, the decision is usually justified by its importance to the health and progress of the discipline. In this, they do not act alone; they must first obtain the approval of their supervisor in the division. The procedures for obtaining that approval apparently vary somewhat across the agency, but, as described to us in interviews, they typically entail a written memorandum which is then discussed and evaluated by the division director. In at least some divisions, these “out of line” proposals are discussed formally and informally among the PMs as a group. Those discussions are part of an ongoing conversation within the division about the direction of the discipline and the kind of research that would be required to sustain it and maintain a balance among its different components. These discussions, we will argue, play a role analogous to the continual discussion and debate which surrounds the research support process at DARPA.
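In these terms, the NSF PM’s discretion amounts to a documented override of the greedy pass sketched earlier: a lower-ranked proposal can be promoted ahead of the ranking, but only with a written justification approved one level up. The sketch below, which reuses the hypothetical Proposal type from the earlier sketch, is a schematic paraphrase of what the interviews describe, not an actual NSF procedure; the approve callback stands in for the division director’s review of the PM’s memorandum.

```python
def fund_with_override(proposals: list[Proposal], budget: float,
                       memos: dict[str, str], approve) -> list[Proposal]:
    """NSF-style variant: default to rank order, but promote an out-of-rank
    proposal when the PM's written memorandum (memos[title]) is approved
    by the division director (the approve(proposal, memo) callback)."""
    # Reuses the Proposal dataclass from the previous sketch.
    ranked = sorted(proposals, key=lambda p: p.panel_rank)
    promoted = [p for p in ranked
                if p.title in memos and approve(p, memos[p.title])]
    others = [p for p in ranked if p not in promoted]
    funded = []
    for p in promoted + others:   # approved overrides jump the queue
        if p.requested > budget:
            break
        funded.append(p)
        budget -= p.requested
    return funded
```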

The NSF has a reputation for being extremely conservative, with an overwhelming bias in favor of proposals which hover very close to the center of the discipline in terms of the hypotheses which they entertain and the methodology which they employ. As we have already noted, the surprise launching of the Soviet Sputnik in 1957 was attributed to this conservative bias, and DARPA was explicitly and deliberately designed to counterbalance it. NSF continues to have that reputation. It was reflected in comments of MIT faculty in virtually every interview we conducted, often spontaneously, but always when respondents were asked to compare NSF and DARPA funding. Many commented that so much emphasis was placed on feasibility at NSF that you actually had to have done the research (or a good part of it) before you submitted the proposal for funds to finance it. Several faculty members said their strategy was to submit proposals to fund research already underway and use the funds to initiate new projects, which then became the foundation for their next grant proposal.

The conservative bias is widely attributed to the peer review process through which funds are awarded. But it appears that the bias is not inherent in the process itself but rather in the way it is organized and administered. That in turn reflects the way in which the agency conceives of its mission, which is to sustain the country’s scientific capability through education and research, a capability which is in turn embedded in the academic disciplines. The PMs have an incentive to present the awards as the outcome of the peer review process in order to avoid having to justify the outcome to rejected applicants. Their responsibilities, in contrast to those of DARPA managers, leave them very little time to give detailed feedback, a point which our faculty respondents emphasized repeatedly. But more fundamentally, if the PMs fail to intervene in the process, it is because they share the biases of the review panels. They are very much a part of the scientific community which the discipline defines. Their backgrounds make it natural that they would think in these terms. Indeed, they are selected for that reason. In contrast to DARPA PMs, the PMs at NSF are drawn from the disciplines whose research proposals they manage. About half of the PMs are career civil servants; the other half are on short-term contracts of one to three years, on leave from university research positions, and are often actually paid through their universities at the levels they were receiving as faculty members.

This is not to say that the PMs add nothing to the process. The role of the NSF in reviewing a wide variety of research proposals, and the PMs’ own position within that process, gives them a broader vision than any particular review panel is likely to have. But it is still very much a vision of what Thomas Kuhn would call “normal science”,6 a vision in which progress occurs within the boundaries of the discipline, through adherence to the standards of the community that develops within those boundaries, standards which the community promulgates and enforces through the control which it exercises over the careers of its members. The way in which the PMs represent the community was driven home in one of our interviews by a respondent who, when confronted with the criticism that the most important criterion in judging a research proposal at NSF was feasibility, gave us a long defense of feasibility as a canon of “good science”.

One can see this as well in another area where the PMs act with power and discretion: helping researchers whom they do not themselves fund to find support through other government agencies, acting essentially as brokers and at times even putting together packages of funds from several different agencies. These efforts are facilitated by the extensive contacts which career PMs develop within the Federal research establishment. But they do not seem to see this activity as part of their regular responsibility to oversee the health of the disciplines for which they are responsible, and they talk about it in very different terms, terms which make a sharp distinction between the disciplinary approach of NSF and other criteria which might justify a given research project (potential contribution to social welfare or to economic progress, for example).

A final piece of evidence suggesting that it is not the peer review process per se but the orientation of the organization which uses it is provided by the comment of one faculty member who had participated in NSF panels: he argued that the conservative bias in the research which the panels funded reflected the instructions which the panel members received. He and his colleagues, he insisted, were perfectly capable of evaluating and ranking the kind of high-risk, original research which DARPA sought out and funded, if they were instructed to do so. It is to be noted that this comment calls into question the central role of the PM at DARPA as much as that of the peer review process at NSF.

We emphasize the dichotomy between the way in which the NSF actually operates and the way in which MIT faculty members perceive its operation because, in terms of the impact of the organization upon the research community, it is not clear which is more important. It is, after all, the faculty who must actually conceive the research program and carry it through. To appreciate how their perceptions influence the research process, it is important to understand how they think about their work and how they design their research programs. A second set of findings that emerged from this study relates directly to this question.

The NIH

It is perhaps worth adding at this point a few limited observations about what we learned about the NIH. It is virtually impossible to make broad generalizations about the NIH, given its $30 billion annual budget (fully half of all civilian R&D expenditures)7 spread across twenty-seven Institutes and Centers. But several interviews with MIT faculty and Program Officers (POs, as opposed to PMs) at institutes within NIH provide some context for thinking about the role of the Program Officer at NIH relative to NSF and DARPA.

Program Officers have relatively little discretion in selecting proposals to receive funding. Proposals across the NIH first go to the Center for Scientific Review (CSR), which categorizes the proposals and assigns them to the relevant institute. The proposals are reviewed by “study sections” (equivalent to review panels), which score the proposals. The final scores and reports are sent to the POs, who then gather within each institute for a “Paylist” meeting within their division (one level below the Institute level) to discuss the awards and decide which programs to fund at what level.

Like PMs in the NSF, POs can challenge the scoring of a particular proposal, but instead of approaching their supervisor in their division, as in the NSF, POs approach the “Advisory Council”, a body that reviews the study section process, and ask for a special review of a proposal that they consider a “high program priority”. However, this seems to happen infrequently: internal research at the NIH shows a fairly smooth curve in which, as the scores get higher (worse, in NIH scoring), the percentage of awards at that level declines; going outside the payline does not happen that often. As one PO stated, as much as they like to think they are finding the diamonds in the rough, they are not as aggressive in going beyond the payline as they like to think they are.

Where POs seem to have more influence is in supporting the overall direction of the Institute’s agenda and new areas of science where they see a lack of investment. For areas of research that are new and where “you would never get something like that approved in a regular study section”, POs can make the case within their Institute that there should be more attention and investment. This could come through “funding opportunity announcements” (FOAs), which indicate the Institute’s interest in a new area. The NIH may also encourage more research through the creation of new program areas that receive formal set-asides for funding; these currently represent approximately 15–20 percent of all NIH funding. POs talked about the impact they felt they have had on the development of their field in important new areas of research. This might be in the form of a new program or through a process of “coaching and coaxing” applicants on their proposals for funding in these new areas of research.

As with the NSF, POs have relatively limited contact with their grantees, usually connecting once a year when progress reports are due. They are also less engaged today in sponsoring conferences than in the past, due to budgetary constraints. However, they seem to play an active role in supporting and encouraging the next generation of researchers to apply for NIH grants and in helping them navigate the system. This aligns with the NIH’s efforts to lower the average age of grant recipients (the average age is forty-two, with a median of fifty-two).8

MIT Faculty

The funding agencies are only one side of the research equation. On the other side are the scientists and engineers whom the agencies need to attract if the work they want to support is actually to be carried out. At DARPA, these researchers are aptly referred to as performers. In this study, they are represented by the faculty whom we interviewed at MIT. The interviews suggested that they have a dual motivation. On the one hand, they have a profound intellectual commitment to science and engineering, although not necessarily a well-fleshed-out research agenda. On the other hand, their position at MIT requires them to raise substantial funds from agencies and organizations on the outside. These funds are not required to support their families: the wide range of opportunities open to the faculty at an elite school like MIT ensures that they will always be able to earn a comfortable living. But the Institute is only committed to paying the academic-year portion of their salary; an additional two to three months is viewed as “summer support” and must be raised through research grants and contracts on the outside. In addition, faculty are expected to support a mini research establishment, consisting of overhead on lab space, equipment and administration, and a team of graduate students who work with them over the course of three or four years on projects related to the faculty member’s own research. In many respects the research establishment is like a small business, and the terms in which faculty members discuss it make them sound like independent entrepreneurs.9

Evaluation of Experiences with Funding Agencies

All of the faculty members with whom we talked were very enthusiastic about the intellectual experience of working with DARPA. This is perhaps not surprising given that we were talking primarily to faculty members who had received DARPA funding, although the unanimity of opinion on this score was striking. There were a number of components to this experience, including the opportunity to interact with other researchers in the various meetings and conferences which DARPA PMs organized in the process of putting together and then executing their programs.

Often these involved encounters with researchers from other disciplines or from outside the university, in private industry and/or in government labs. Several respondents reported that they had developed relationships in this way that fundamentally altered their research trajectories and/or created the foundations for long-term research collaborations. It is to be noted that several of the PMs suggested that this is exactly what they were trying to do in developing their programs—although the MIT faculty did not seem to be simply echoing the comments they had picked up at DARPA.

Faculty members also emphasized their interactions with the PMs themselves, whom they tended to talk about as colleagues and collaborators rather than merely as research funders or supervisors. These intellectual interactions with the PMs ranged from the initial discussions when the PM was preparing his or her research program to the extensive feedback which the DARPA PMs provided when a proposal was turned down. They also described the seminars in which they were required to present their research in progress, and the interaction there with colleagues working on similar projects, as intellectually stimulating and important to the research process.

As noted earlier, even the interactions with contract auditors were viewed as part of the intellectual experience, a feature of the way DARPA operates which is not accidental. The auditors are typically seconded from the military and recruited because of their ability to understand the substance of the research and its relevance for the agency’s mission. Since the performance standards specified in DARPA contracts are often deliberately set at levels that are virtually impossible to achieve, auditors spend considerable time trying to understand the obstacles to attaining the specified standards and identifying more realistic targets. Indeed, it is precisely to stimulate this type of discussion that targets are set above realistic expectations.

In addition to the intellectual experience of working with DARPA, two other features were mentioned in interviews. One was the size of the awards, which were, by and large, much larger than could be obtained through the NSF or NIH. The second was the ability to buy expensive lab equipment which could then be used for other projects.

On the downside was the threat that the agency would cut off funding in the middle of a project. Because funds are awarded in the form of contracts rather than grants, and because, as just noted, the specified performance requirements were often unrealistic, the agency is in a position to cut off funding not just because of the research performance itself but for virtually any reason. This was a major threat under the administration of Tony Tether, who was believed by our MIT respondents to have cut contracts when budget cuts forced him to reorder the agency’s priorities in ways that were unrelated to the research which the contract initially covered. Funds were also cut when the research suggested that the project itself was not viable and the goals could not be achieved, or when a competing approach to the problem proved to be more successful. Whatever the actual reason, the sudden loss of funding was a particular problem for faculty members who were using the funds to finance graduate students working on doctoral dissertations, and several respondents reported that, as a result of their DARPA experience, they had moved to a portfolio strategy for financing, in which they were careful to avoid excessive dependence on a single agency.

The other downside of DARPA funding was the frequency of the reporting requirements, in many cases every three months. This was particularly a problem for faculty doing basic science (as opposed to applied work), since they often did not have results at these reporting intervals.

The NSF

In contrast to DARPA, the intellectual experience of working with the NSF was universally characterized as dull, indeed pedestrian. It certainly involved none of the excitement or intellectual stimulation associated with DARPA. Proposal writing was seen as a chore. There was no thought of showcasing the intellectual excitement associated with the work. The widely expressed view that you had to have done much if not all of the work in advance of proposing it eliminated the element of surprise and discovery which the researcher might originally have felt, and gave the process a slightly dishonest flavor (although the respondents did not put it in precisely those terms). Our respondents generally viewed the NSF’s program managers as competent; they talked of them as colleagues and, although they were not asked to compare them directly to DARPA PMs, the comparison was not unfavorable to NSF. But there was little opportunity to interact with them in the way that they interacted with DARPA PMs; they provided little help in preparing proposals and little feedback when proposals were rejected. NIH program officers, incidentally, were not respected as colleagues in the way that PMs at NSF and DARPA were; they also do not have the capacity to fund proposals outside of the rank order established by the peer review panels.

Most of our respondents who had received NSF grants had also participated in review panels, but this participation was seen as a chore: people felt obligated to participate to support the discipline and in return for funding they had received, but it was not viewed as a rewarding experience. One could imagine the discussions in the review panel meetings as comparable to the small group meetings which DARPA organized, but they were never discussed in those terms. The range of proposals that the panel members were required to read could have been seen as an opportunity to get an overview of the field, but it was never discussed in these terms either.

In sum, the advantages of the NSF were on the “business side”. Here, the main advantage of NSF funding was that once a grant was awarded, the funding was secure and one could count on it, especially in supporting graduate students. This contrasts with DARPA, where there was always the possibility that funding would be cut off in the midst of a thesis project. Also, NSF grants involved minimal reporting requirements; the major incentive to perform was to gather material to support the next grant proposal.

III. Interpretation

Economic and Sociological Perspective

The DARPA material lends itself to two quite different interpretative lenses. From the point of view of standard economic theory, with its preference for market mechanisms and individual incentives and its distrust of government bureaucracy, the salient feature of the DARPA organizational structure is the way in which it suspends the rules and regulations which normally constrain government officials. The mechanisms here include the freedom from the regulations governing hiring and salary scales, the use of contracts with requirements for specific performance (as opposed to grants), the way in which program managers are hired from outside the organization, their short and very limited tenure, and the very extensive use of outside contractors who can be replaced easily and at will. On the other hand, the standard theory which would emphasize the rules that normally constrain government actors rests upon a rational choice theory of individual behavior, in which actors are presumed to make a sharp separation between means, ends, and the technical relationships that determine how the former affect the latter, and then to maximize the ends given the means at their disposal. The characteristic of the problems which DARPA, and NSF as well, are designed to address is that the ends are ill-defined and unclear, and the causal relationships between the means and the ends are exactly what the organization is supposed to be investigating. This entails what economists call “Knightian uncertainty”, i.e., uncertainty about what the possible outcomes actually are, let alone the probability of realizing any one of them.10 Neither the competitive market nor the rational choice model has much to say about how this should be addressed.

The standard rational choice theory has a second problem too. The theory attempts to understand and explain behavior in terms of individual self-interest. It has very little to say about the agent’s behavior when he or she has no particular interest in the choice among the alternatives we are attempting to understand. The choices of the faculty researchers are, up to a point at least, understandable in those terms, but the role of the PMs is not; or, at least, it does not yield an obvious interpretation of our findings. At both NSF and DARPA, the PMs seem to be motivated primarily by the intellectual interest and excitement of the work in which they are engaged. They seem to believe in the mission of the organization and to see little difference between their own interests and those of the organization for which they work. This is of course no accident: the agencies consciously recruit them with this in mind. Understanding it, however, calls not for a theory of individual choice but rather for a theory of how the agency’s mission is conveyed to its agents and how it is understood by them.

The second interpretative lens through which the material gathered for this study might be addressed is organizational theory. We use this term very loosely here to refer to a range of theoretical ideas drawn from sociology, cognitive theory, language theory, and social psychology, all of which, however, suggest that human behavior must be understood in terms of the social context in which it occurs. Behavior in this view cannot be reduced to individual actions, coordinated indirectly and impersonally by a market (or market-like) mechanism, but rather must be understood in terms of the way in which people interact with each other. Applied to science studies, the basic idea is that scientific inquiry takes place within a community and is governed by a set of rules, habits and customs, partly explicit but with a substantial tacit or implicit component, which the community generates. These rules have both a social and an intellectual dimension. The funding agencies are then understood in terms of their impact upon such communities. The same basic conceptual apparatus can be applied to understanding the internal operation of the funding agencies themselves, for they too are communities of practice which arise and evolve over time.11 This is true of both DARPA and NSF. The major difference between them is that DARPA is creating new communities, while NSF is managing scientific disciplines—communities that already exist.

Our own understanding of this perspective derives from a series of case studies conducted by the Industrial Performance Center (IPC) at MIT on the organization of product design and development in the private sector.12 Related understandings can be found in Fuchs and in Phech Colatat,13 which are however not independent of the current project, and also in Donald Schön and in Kuhn.14 In the IPC study, we conceptualized a research community as akin to a language community. Like a language, it emerges and evolves through conversation, discussion and debate. We termed that conversational process interpretation. Particular product ideas, or in the present case research projects, are drawn out of this conversation and pursued through a second process, analysis. Analysis proceeds very much as it does in engineering (and economics) textbooks: there is a clear statement of the end or ends which the product is designed to achieve, and one then organizes alternative resources, or means, so as to optimize (or maximize) the ends. But the interpretative process is under-theorized and requires some amplification. It is, we argued in the IPC study, like a conversation, a discussion or debate. It depends on who participates in that conversation, what they actually talk about, and how the conversation proceeds from one subject to the next. The role of the manager in this process is to foster the conversation and to guide it. In this, he or she is like a host at a cocktail party: inviting the guests, introducing them to each other, suggesting topics of discussion that might be of common interest, introducing new topics or new people to the conversation group when the discussion flags and the participants begin to lose interest, breaking up groups when the discussion becomes too intense and threatens to collapse in mistrust and acrimony. Ultimately this discussion and debate leads not to agreement but to a common understanding that serves as the basis for further discourse. We think of that common understanding as like a language.

The interpretative process then essentially divides into two phases. In the first, or initial, phase, the community is in formation. The participants are building a common understanding, generating a new language so to speak. In the second, or mature, phase they are using that language to discuss the technology in which they are interested and the products or research projects to which it might lead. In so doing, they do not make the clear distinction between means and ends that is central to analysis; indeed, they move back and forth between means and ends, revising (or reinterpreting) the ends in the light of the means and vice versa. Importantly, the common understanding that sustains the community, and in a sense defines it, continues to evolve through discussion and debate even in this mature phase.

Understood in these terms, what is distinctive about DARPA is that the PMs are essentially creating an interpretative community and then driving it toward the generation of novel products. They bring together, around a technological problem, people who would not be in contact with each other without the PM’s intercession; guide them, through a variety of encounters, meetings, discussions and seminars, to talk to each other, to enter into a conversation that effectively develops the language of a community; and then sustain that conversation, encouraging the participants to draw out of it specific research projects that they then “analyze”. But what is striking to the outside observer listening to the participants describe this experience is the priority accorded to interpretation even in the later stages of project development. This is most apparent in the administration of contracts when the performers fail to meet the specified goals. The failure triggers a discussion in which the first question is whether the goals were correctly specified and how they might be redefined in the light of the research that has already taken place. It is not, as it would normally be in the analytical phase of product development, focused solely on what means would be required to achieve these goals. The agency refuses to estimate the success rate of the projects it undertakes precisely because projects are rarely killed outright; they are redefined.

65In contrast to DARPA, NIH and NSF enter into research communities that already exist, and they seek to support and perpetuate those communities rather than to create or direct them. These communities too are sustained by an internal conversation that evolves over time. The discussions that occur among the PMs at NIH and NSF, or among the members of the review panels as they evaluate different proposals, are part of that conversation. However, the conversation is largely autonomous of the funding agencies, and the conversations that occur in the funding process are more the expression of a set of values and criteria of judgment that have been developed elsewhere than a direct determinant of those values. In sharp contrast to DARPA, project proposals cannot be revised in the light of the discussion within the agency, and in that sense the panel’s judgment tends to involve the analytical application of criteria that the panelists bring with them, rather than an interpretative conversation about those criteria.

66On the other hand, the PMs are engaged in a discussion within the agency about the direction of the discipline. That discussion is largely undirected, although the division director presumably exerts some influence over it. Unfortunately, we did not explore the nature of that discussion in our interviews; it is an area left for further research. Such research could focus on the documents that are generated when the PMs intervene to fund a proposal that would not have received money on the basis of the peer review ranking. An understanding of this process is, in certain respects, more important than understanding DARPA, since a number of developing countries look to NSF as a model of how to support their own education and research establishments.

Conclusions

67This study is part of an attempt to understand the structure and operation of Federal agencies supporting academic research in science and engineering. It centers on the contrast between DARPA and NSF, drawing on the experience of MIT faculty members who have received funding from both organizations. The focus has been on the role of the program (or project) managers in the two agencies. In both agencies the program managers have substantial discretion in the selection of projects to fund and in the management of the funding process. That discretion was anticipated in the case of DARPA and was one of the major reasons for selecting that agency for study. The degree of discretion at NSF, on the other hand, was surprising: it is much greater than the faculty whom we interviewed generally believed, and this is one of the major findings of the study.

68The program managers stand at the base of the organizational pyramid in both agencies, and given the discretion that is lodged there, both organizations are in effect street-level—as opposed to classic Weberian—bureaucracies. But the two agencies operate very differently.

69At the NSF, proposals are evaluated and ranked through a peer review process, and the discretion of the program manager consists of the ability to fund proposals out of the order of the peer review ranking. The process for doing so is carefully supervised and reviewed by higher levels of the organization. The procedures for the exercise of discretion are carefully laid out for new PMs in their initial orientation, along with the basic criteria upon which these decisions are supposed to be made. Written reports are required along the way.

70Moreover, there is an ongoing discussion among the PMs within the organization about the way in which the academic discipline they are funding is evolving, and about possible biases in the review process. It is in the context of that discussion that funding decisions by the staff are made. There is, however, very little direct interchange between the PMs and the researchers whom the agency funds. The process here is entirely consistent with the literature on the management of discretion within street-level bureaucracies.

71DARPA is managed very differently. The program managers receive very little orientation or training. While there is extensive interaction between the PMs and the research community they are seeking to draw into their projects, there is very little interaction among the PMs themselves (the quip is that the only thing they share is a travel agent). There is a strong organizational culture and a high degree of organizational continuity; but given the very high turnover and short tenure of the PMs and, with a few exceptions (like that of Tony Tether), of the agency’s directors as well, it is very hard to understand how that continuity is maintained and how the strong organizational culture is created and sustained. A critical factor here (possibly the critical factor) appears to be the network of consultants and consulting firms that supports the organization; many of these consultants have worked with DARPA over a long period of time, and some have actually been PMs within the organization.

72The existence of that network, and the role it seems to play, is the second major finding of this study. DARPA has a reputation for flexibility and is often contrasted with classic bureaucratic organizations. However, given the role of outside consultants in maintaining organizational continuity, it would appear that a good deal of the organization’s flexibility is illusory, and that to the extent that it exists, the flexibility must reside in the role assigned to the PMs rather than in the way they perform that role.

73The findings of the study are incomplete. In focusing on the role of the PMs, we have neglected other aspects of the organizational models, especially those levels of the organization where the basic budgetary decisions are made: the allocation of funds among competing disciplines, in the case of NSF, and among broad project areas, in the case of DARPA. Moreover, while the contrast between the two organizations helps us to identify and highlight key aspects of each, it leaves the impression that they are in competition with each other and that the choice between them is a key question of national science policy, whereas in fact, at the national level at least, they are complementary. The NSF is responsible for maintaining the country’s basic scientific establishment, ensuring the supply of technical manpower and maintaining its basic research capabilities; DARPA is dependent upon that establishment for the raw material from which its projects are created.

74But the most important implications of this project lie not in its substantive findings but in what it suggests about how one thinks about science policy and about the conceptual issues in the emergent field of “the science of science policy”.15 While the field is ostensibly interdisciplinary, it has been heavily influenced by the discipline of economics and by what might be termed the conceptual biases of that discipline as a lens for understanding public policy. The influence is pervasive, and it would take a true outsider coming from some other discipline (which we are not) to identify fully what these biases are. But one perspective that seems particularly important is a view of government policy in which intervention consists of imposing restrictions upon, and creating incentives for, action, and in which the impact of intervention is understood in terms of the self-interest of individuals responding to price incentives in the market. In science policy, this view implies that the budgetary allocations made, in our cases, at the peak of the organizational hierarchy are the critical policy decisions. But what this study emphasizes is that, in the United States at least, government institutions intervene at the very micro level in the way projects are conceived and executed. The way these interventions are conducted is the product of an active debate and discussion within the organization, and also between the organization and the scientific community. We have drawn here upon our own research to understand the nature of that debate, and how the way it is conducted and managed influences the outcome. But the more general point is that this understanding is critical to science policy, and that one has to reach far beyond the conceptual framework of economics to analyze it.

References

Bonvillian, W. B. (2006). “Power Play: The DARPA Model and U.S. Energy Policy”, The American Interest 2/2, November/December, 39–48, https://www.the-american-interest.com/2006/11/01/power-play/

Christensen, C. M. (1997). The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.

Colatat, P. (2015). “An Organizational Perspective to Funding Science: Collaborator Novelty at DARPA”, Research Policy 44/4: 874–87, https://doi.org/10.1016/j.respol.2015.01.005

Cook-Deegan, R. (2015). “Has NIH Lost Its Halo?”, Issues in Science and Technology 31/2: 36–47. (Chapter 15 in this volume).

Fealing, K. H., Lane, J., Marburger, J. III, and Shipp, S., eds. (2011). The Science of Science Policy: A Handbook. Stanford, CA: Stanford University Press, https://doi.org/10.1111/j.1541-1338.2011.00523.x

Freeman, R. B. (2011). “The Economics of Science and Technology Policy”, in The Science of Science Policy: A Handbook, ed. K. Fealing, J. Lane, J. Marburger III, and S. Shipp. Stanford, CA: Stanford University Press, 85–103, https://doi.org/10.1111/j.1541-1338.2011.00523.x

Fuchs, E. R. H. (2010). “Rethinking the Role of the State in Technology Development: DARPA and the Case for Embedded Network Governance”, Research Policy 39/9: 1133–47, https://doi.org/10.1016/j.respol.2010.07.003 (Chapter 7 in this volume).

Harris, A. (2014). “Young, Brilliant and Underfunded”, New York Times, 2 October, https://www.nytimes.com/2014/10/03/opinion/young-brilliant-and-underfunded.html?_r=0

Knight, F. H. (1921). Risk, Uncertainty, and Profit. Boston, MA: Hart, Schaffner & Marx.

Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

Lester, R., and M. Piore. (2004). Innovation—the Missing Dimension. Cambridge, MA: Harvard University Press.

Lipsky, M. (1980). Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York, NY: Russell Sage Foundation.

Piore, M. (2011). “Beyond Markets: Sociology, Street-Level Bureaucracy, and the Management of the Public Sector”, Regulation & Governance Special Issue: Sociological Citizens: Practicing Pragmatic, Relational Regulation 5/1: 145–64, https://doi.org/10.1111/j.1748-5991.2010.01098.x

Schön, D. (1983). The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books.

Van Atta, R., Deitchman, S., and Reed, S. (1990–1991). DARPA Technical Accomplishments. 3 Volumes. Alexandria, VA: Institute for Defense Analyses.

Notes

1 This article was originally released as an MIT Industrial Performance Center Working Paper in July 2015.

2 Piore, M. (2011). “Beyond Markets: Sociology, Street-Level Bureaucracy, and the Management of the Public Sector”, Regulation & Governance Special Issue: Sociological Citizens: Practicing Pragmatic, Relational Regulation 5/1: 145–64, https://doi.org/10.1111/j.1748-5991.2010.01098.x; Lipsky, M. (1980). Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York, NY: Russell Sage Foundation.

3 Van Atta, R., Deitchman, S., and Reed, S. (1990–1991). DARPA Technical Accomplishments. 3 Volumes. Alexandria, VA: Institute for Defense Analyses.

4 Fuchs, E. R. H. (2010). “Rethinking the Role of the State in Technology Development: DARPA and the Case for Embedded Network Governance”, Research Policy 39/9: 1133–47, https://doi.org/10.1016/j.respol.2010.07.003 (Chapter 7 in this volume).

5 Bonvillian, W. B. (2006). “Power Play”, The American Interest 2/2, November/December, 39–48, at 48.

6 Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

7 Cook-Deegan, R. (2015). “Has NIH Lost Its Halo?”, Issues in Science and Technology 31/2: 36–47. (Chapter 15 in this volume).

8 Harris, A. (2014). “Young, Brilliant and Underfunded”, New York Times, 2 October, https://www.nytimes.com/2014/10/03/opinion/young-brilliant-and-underfunded.html?_r=0

9 For a somewhat different view of the relationship between economic and intellectual motivation see Freeman, R. B. (2011). “The Economics of Science and Technology Policy”, in The Science of Science Policy: A Handbook, ed. K. Fealing, J. Lane, J. Marburger III, and S. Shipp. Stanford, CA: Stanford University Press, 85–103, https://doi.org/10.1111/j.1541-1338.2011.00523.x

10 Knight, F. H. (1921). Risk, Uncertainty, and Profit. Boston, MA: Hart, Schaffner & Marx.

11 Schön, D. (1983). The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books.

12 Lester, R., and Piore, M. (2004). Innovation—the Missing Dimension. Cambridge, MA: Harvard University Press.

13 Fuchs. (2010). “Rethinking the Role of the State”; Colatat, P. (2015). “An Organizational Perspective to Funding Science: Collaborator Novelty at DARPA”, Research Policy 44/4: 874–87.

14 Schön. (1983). The Reflective Practitioner; Kuhn. (1962). The Structure of Scientific Revolutions.

15 Fealing, K. H., Lane, J., Marburger, J. III, and Shipp, S., eds. (2011). The Science of Science Policy: A Handbook. Stanford, CA: Stanford University Press, https://doi.org/10.1111/j.1541-1338.2011.00523.x
