9. Deliberative Processes in Decisions about Best Buys, Wasted Buys and Contestable Buys
Uncertainty and Credibility1
9.1 Introduction
Deciding whether a prospective buy in the field of non-communicable disease (NCD) is likely to be a Best Buy is a tricky business. It is tricky for at least the following reasons:
the criteria for deciding what is a Best or Wasted Buy may not be agreed;
the alternative best uses of resources (the opportunity costs) are rarely obvious and may lie outside the health sector;
the health benefits of NCD interventions are often in the long rather than the short term;
the evidence upon which the appraisal is based is rarely complete, accurate, locally applicable, or entirely relevant and may even be wholly absent;
the processes through which a decision or a recommendation about a possible Best Buy is made may be secretive, dominated by specific interest groups and incomprehensible to outsiders;
many of the interventions require collaboration with other sectors and non-health organizations;
the implementation of any decision is hindered by absent or underfunded delivery mechanisms and organizational weaknesses.
As a result of the foregoing, a decision may lack credibility and generate a mistrust of the professional scientists, clinicians and others involved in the process and bring the use of cost-effectiveness analysis and kindred methods into disrepute.
Each of the recommendations we shall be making can be interpreted as implying the use of deliberative processes in decision making because there will be so much to discuss: the diseases in question are often insidious in their onset and complex in their manifestation over time; the mix of politics, social value judgments and science is thorough; the disciplines required to understand the interventions and the genesis and treatment of NCDs are in many cases non-medical; the professions involved in diagnosis and treatment are likewise many and include non-medical ones; technical understanding and experience is often limited and needs nurturing with opportunities and support to enable local people to become both competent and confident. There is considerable public interest in finding ways to control the NCD epidemic but less understanding of why the apparent priorities are as they are; in many cases there are vested interests that could be threatened by effective NCD policies but that might be reassured or even brought on side by sympathetic initiatives.
9.2 Criteria, Opportunity Costs and Social Value Judgments: A Role for Deliberation2
Everyone involved in NCD prevention and treatment needs to be aware that social values permeate all aspects of both. Decisions are not merely ‘technical’, let alone scientific. Moreover, since uncertainty abounds, all decisions require the exercise of judgment — judgment about the quality of the evidence, the difficulty of implementation, the value of the outcome, the value of what is forgone as resources are committed to specific purposes, the merits of openness and transparency, the worthwhile nature of reaching outside the health and finance ministries, etc. Any criterion for what constitutes a Best Buy embodies value judgments. For example, the commonly encountered ‘threshold’ criterion, which a technology must meet to be adopted, states that the incremental cost-effectiveness ratio (ΔC/ΔE) must not exceed a stated monetary sum, thereby making two social value judgments: that cost ought to be a factor and that effectiveness ought to be another. In addition, the threshold criterion embodies an assumption (other things being equal) that more effectiveness is good. Further analysis reveals that effectiveness is typically (though not invariably) indicated by a specific measure such as the Quality-Adjusted Life-Year (QALY) or averted Disability-Adjusted Life-Year (DALY), which may or may not be good proxies for ‘health’. Moreover, other things are not always equal, so additional criteria may be required. Two common criteria concern the distribution of health benefits (QALYs or DALYs) and the impact the intervention has on households’ exposure to costly out-of-pocket healthcare payments. Other value-laden issues include how much risk or uncertainty about the evidence can be tolerated; whether future costs and benefits ought to be discounted (reduced in current value) at the same general rate as is used elsewhere in the public sector; how much information (some of which may be claimed to be commercially confidential) should be shared with stakeholders, including journalists and the general public; whether the right technologies have been selected for investigation to start with and for use as comparators; how to negotiate clashes between criteria when they occur; where to look to find out what values the public and its constituents have; and a host of social value judgments regarding the processes of decision-making such as: choice of stakeholders; the nature of their involvement, if any, in decision-making; opportunities to appeal against decisions; the public nature and openness of committee and other meetings and the accessibility of their minutes; the frequency of revisiting past decisions as circumstances and knowledge change. This list merely elaborates the commonplace observation that ‘one size (or recommendation) does not fit all (circumstances)’.
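To make the threshold criterion concrete, here is a minimal sketch in Python; the threshold value, costs and effects are all hypothetical, and real thresholds are themselves the product of value judgments and vary by setting.

```python
# Minimal sketch of the 'threshold' criterion: adopt only if the
# incremental cost-effectiveness ratio (ICER = ΔC/ΔE) does not exceed
# a stated monetary sum. All figures below are hypothetical.

def icer(delta_cost: float, delta_effect: float) -> float:
    """Incremental cost-effectiveness ratio, ΔC/ΔE."""
    if delta_effect <= 0:
        raise ValueError("the ratio test applies only when effectiveness increases")
    return delta_cost / delta_effect

THRESHOLD = 5_000.0  # hypothetical willingness to pay per QALY gained (or DALY averted)

# A candidate intervention costs 400,000 more than its comparator
# and yields 100 extra QALYs.
ratio = icer(delta_cost=400_000.0, delta_effect=100.0)
print(f"ICER = {ratio:,.0f} per QALY; adopt: {ratio <= THRESHOLD}")
```

Note that the two social value judgments identified above are baked into the two arguments of the function: nothing else about the intervention enters the test.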
Deliberation is a thoughtful and careful way of reaching a conclusion or deciding something. It is not precipitous and discourages rushed judgments. It involves the focused evaluation of alternatives, weighing their pros and cons. Deliberation can be a learning process — learning about the evidence and learning from other people about perspectives on the question that had not previously occurred to one. In deciding or advising on matters of NCD policy it requires a kind of ‘round table’ at which significant interests and expertise are represented. A major political value judgment must be made when deciding what counts as ‘significant’.
Deliberation can be a means of suppressing the arbitrary and subjective self-interest of the participants in a decision-making process. It should be a means of achieving an impartial state of mind in which people of good will restrain their more selfish personal and professional concerns in pursuit of a wider, or deeper, idea of the social good: one that is not simply the sum of the preferences, prejudices (admirable or not, well-informed or not, representative or not, based on mature reflection or not) of those participating in the debate. Deliberation enables decision-makers to reflect on, discuss openly and possibly revise their beliefs about a problem. Is this our top priority? Who loses most if we do such-and-such? Do we believe the scientists? Can we trust the economists? Have we got the balance between rival assertions right? Have we inferred correctly from the evidence?
9.3 Deliberation Contrasted with Algorithms
In stark contrast to the deliberative process stands the algorithm. An algorithm is a systematic mathematical process sequentially linking various strands in a decision problem to an outcome. A good example of an algorithm for present purposes is the EQ-5D version of the QALY, which combines a set of pre-defined characteristics of good health, measurable at a variety of intensities and weighted in a pre-set fashion in order to measure a health outcome such as the difference between a person’s health with and without, or before and after, an intervention or in comparison with an alternative intervention. The algorithm can be made as complicated as one likes, at least in principle, by adding characteristics, breaking it into social subgroupings, refining intensities, changing the weights, including probabilities and uncertainty, discounting future health changes and so on; and every element of the algorithm can even be moderated by the results of consultative engagement with patients, say, for their values, and public health doctors, say, for their beliefs about the transition probabilities. The process remains, however, mechanical, unidirectional and, if used without interaction between decision-makers, not conducive to learning. Rather than enabling the exercise of judgment about the merits and interpretation of evidence, it can conceal important conclusions that have already been reached. These may (as with EQ-5D) have been reached in earlier (which may even have been deliberative) stages of preparation for a decision, but the nature of dispute resolution, the character of value judgments, the extent of agreement about them, the adequacy of the information base available and so on, all become subsumed in the algorithmic solution. The use of algorithms is likely to be perceived as impenetrable to those not involved in the decision-making process but who may nonetheless have significant stakes in its outcome. The effective use of an algorithm requires there to be sufficient expertise within the decision group for its members as a whole to have confidence that no unacceptable short cuts have been taken. It may often be useful to adopt and then adapt someone else’s algorithm. For example, to ensure localization and context sensitivity, several countries have developed their own QALY weighting system.3
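As an illustration of the mechanical character of such an algorithm, here is a minimal sketch of an EQ-5D-style scoring scheme. The five dimensions follow the EQ-5D descriptive system, but the level decrements are invented placeholders, not any published national tariff.

```python
# Minimal sketch of an EQ-5D-style scoring algorithm: five dimensions,
# each rated at a discrete level, mapped to a single utility via pre-set
# decrements. The decrements below are invented placeholders, NOT a
# published tariff.

DIMENSIONS = ("mobility", "self_care", "usual_activities", "pain", "anxiety")

# Hypothetical utility decrements for levels 1 (no problems) to 3 (extreme).
DECREMENTS = {1: 0.0, 2: 0.08, 3: 0.25}

def utility(profile: dict[str, int]) -> float:
    """Map a health-state profile (dimension -> level) to a 0-1 utility."""
    u = 1.0
    for dim in DIMENSIONS:
        u -= DECREMENTS[profile[dim]]
    return max(u, 0.0)

before = {"mobility": 3, "self_care": 2, "usual_activities": 2, "pain": 3, "anxiety": 2}
after = {d: 1 for d in DIMENSIONS}  # full health after the intervention

# Over one year, the QALY gain is simply the difference in utilities.
print(f"utility gain: {utility(after) - utility(before):.2f}")
```

Every number in `DECREMENTS` is a value judgment frozen into the algorithm, which is exactly the point made above: the judgments are made upstream and then disappear into the machinery.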
The same may be said about the use of computerized models to simulate decision-making processes. Computers are good at storing, retrieving, manipulating and communicating information but they cannot exercise judgment. A chair or facilitator and members of the decision-making unit must perform that function: formulating problems, locating those deemed most important, identifying key issues, considering risk and uncertainty about the future, forming preferences, making judgments of subjective value, establishing goals and objectives, appraising the quality of evidence and assessing trade-offs among objectives whilst also incorporating algorithms (and explaining them) into the decision-making process.
9.4 Evidence
Box 9.1 Categories of Evidence
Defined by method of collection, discipline or theoretical framework:
• observational, experimental, quasi-experimental, extrapolated, survey, experiential;
• administrative;
• quantitative, qualitative, economic, ethical/philosophical;
• narrative review, systematic review, meta-analysis;
• legal, epidemiological, clinical;
• clinical epidemiology, decision science, expected utility theory.
Defined by general purpose:
• problem identification, description or scoping;
• cost-containment, efficacy, effectiveness, cost-effectiveness, implementability;
• cultural, leadership, measurement; philosophical-normative, practical-operational; academically driven by discipline (clinical, biostatistics, economics, sociology, etc.).
Defined by source:
• primary research data, secondary data (meta-analyses etc.), administrative data;
• clinical experience;
• patient/carer experience;
• political necessity;
• local managerial experience;
• professional (scientific, theoretical, practical, expert, judicial, ethical).
Evidence can be classified in a variety of ways, as summarized in Box 9.1.4 The first type is based on the method of collection used for the evidence; for example, whether it was experimental or from a survey. A second focuses on the general purpose to which the evidence would contribute, such as identifying a problem or measuring the effectiveness of an intervention. A third emphasizes source, usually distinguishing research by professional researchers from unsystematic forms of evidence such as ‘clinical experience’.
When people in the clinical, management or policy worlds are asked what they consider to be evidence, they tend to think of a medley of scientifically verifiable and locally idiosyncratic types of information, which Lomas et al. call ‘colloquial’ interpretations, drawing on a wide range of experiences and using a broad definition of evidence.5 Thus, clinical effectiveness data compete with expert assertion, cost-benefit calculations are balanced against political acceptability and public- or patient-attitude data are combined with the recollection of recent personal encounters with strong personalities. The evidence-informed decision-making movement has, however, engendered for many of them a greater regard for the more scientific forms of evidence than would have been usual thirty years ago and there is an increasing tendency to ‘dress up’ the conclusions of a decision-making process in the language of science.
By contrast, the research community’s view of evidence, both in clinical subjects and the social sciences, tends to be restricted to information generated through a prescribed set of processes and procedures recognized as scientific. In this case, both scientific tradition and more modern influences from the philosophy of science determine what is evidence, which can be summarized as knowledge that is explicit (that is, codified and propositional); systematic (that is, uses transparent and explicit methods) and replicable (that is, it can be tested to see whether others following the same methods with the same samples arrive at the same results).
At a basic level, the general notion of evidence concerns actual or asserted facts (a fact is defined as a ‘thing certainly known to have occurred or be true’ [Oxford English Dictionary]) intended for use in support of a conclusion. Most decision-makers view evidence colloquially and eclectically as anything that increases their degree of belief in a fact (Fig. 9.1). They define it by its resonance with experience and relevance to the kinds of decisions they have to make. This is the first form: colloquial evidence. The second and third forms are two versions provided by scientists. Scientists’ views on the role of evidence divide into those who emphasize context-free universal truths (identified closely with evidence-based medicine) and those who emphasize a context-sensitive role for evidence in a particular decision process (identified more with the applied social sciences).
The appropriate methods for obtaining scientific evidence about context factors are not the same as those for obtaining evidence related to testing the validity of bioscientific hypotheses. Though the research designs may be very different, the scientific principles are, however, the same. Hypothesis testing is common to both, as is the control of ‘confounding’ variables. But both the phenomena hypothesized about and the method required to do the testing differ. The intent when using context-free evidence is to ensure ‘internal validity’ of evidence, that is, evidence that is free from bias. The intent when using context-sensitive evidence is to ensure ‘external validity’ of evidence, that is, evidence that the intervention will work under conditions likely to be met in a practical context. Thus, whereas the gold standard procedure for controlling for confounding variables in clinical sciences might be a form of prospective randomized trial, where randomization does much of the work of removing bias from confounders, the gold standard for quantitative social scientists in assessing the resource consequences of adopting a technology is more likely to be a retrospective multivariate econometric study with contextual elements specifically modelled as determinants. Scientific evidence on context must, in addition, be more than merely medical and can embrace professional attitudes, ease of implementation, organizational capacity, competences of workforce, forecasting future burdens of sickness, economics or finance and ethics. Not all will always be relevant, but some will always be relevant (given the context). Colloquial evidence will typically embrace the resources likely to be available, expert and professional opinion on a matter, political judgment, values, habits and traditions, lobbyists and pressure groups and the particular pragmatics and contingencies of a situation. In healthcare decisions, all three kinds of evidence are more or less constantly in play.
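The internal-validity point can be made concrete with a small simulation, a sketch with invented numbers: when baseline severity drives both the chance of treatment and the outcome, a naive observational comparison is biased, while randomized assignment recovers the true effect. (The retrospective econometric alternative described above would instead model severity explicitly; it is not shown here.)

```python
# Minimal sketch of confounding and internal validity. Sicker patients
# are more likely to be treated, so a naive observational comparison of
# treated vs untreated is biased; randomization breaks the link between
# severity and treatment. All numbers are invented for illustration.
import random
from statistics import mean

random.seed(0)
TRUE_EFFECT = 2.0

def outcome(treated: bool, severity: float) -> float:
    # Outcome improves with treatment, worsens with severity, plus noise.
    return TRUE_EFFECT * treated - 3.0 * severity + random.gauss(0.0, 1.0)

def effect_estimate(assign) -> float:
    """Difference in mean outcomes (treated minus untreated) under an assignment rule."""
    data = []
    for _ in range(20_000):
        sev = random.random()
        treated = assign(sev)
        data.append((treated, outcome(treated, sev)))
    return mean(y for t, y in data if t) - mean(y for t, y in data if not t)

observational = effect_estimate(lambda sev: random.random() < sev)  # severity drives treatment
randomized = effect_estimate(lambda sev: random.random() < 0.5)     # coin-flip assignment

print(f"true effect {TRUE_EFFECT:.1f}; "
      f"observational estimate {observational:.2f}; randomized estimate {randomized:.2f}")
```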
These three different forms of evidence — colloquial, context-free scientific and context-sensitive scientific — will not combine of themselves to determine Best or Wasted Buys. Combining and interpreting them requires a process and the most suitable process may be deliberative through, for example, what has recently been described as qualitative Multi-criteria Decision Analysis6 (a sketch of the quantitative core of such an analysis follows the list below). Regardless of which of the three types of evidence one is considering, any suitable process needs to address a common set of complexities:
all evidence needs to be interpreted;
its relevance needs to be assessed;
its quality needs to be assessed;
its applicability in the current context, as compared with that in which it was generated or collected, needs to be assessed;
its completeness needs to be assessed;
qualitative evidence needs to be weighed alongside quantitative;
any technical controversy over its standing needs to be settled;
the precision of estimates of effectiveness needs to be assessed;
the robustness of the results needs to be tested by sensitivity analyses;
the evidence, of whatever kind, needs to be considered in the light of values in order to determine priorities and ‘worthwhileness’ and to specify what ought to be done and by whom.
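The following sketch shows only the quantitative core of a multi-criteria decision analysis; in the qualitative variant cited above, the criteria, weights and scores would emerge from deliberation rather than arithmetic, and everything named or numbered here is invented for illustration.

```python
# Minimal sketch of the quantitative core of multi-criteria decision
# analysis (MCDA): score each option against each criterion, weight the
# criteria, and rank by weighted sum. All names and numbers are invented.

CRITERIA_WEIGHTS = {        # hypothetical weights agreed by a committee; they sum to 1
    "effectiveness": 0.4,
    "equity": 0.3,
    "affordability": 0.2,
    "implementability": 0.1,
}

OPTIONS = {                 # hypothetical 0-10 scores per criterion
    "tobacco tax": {"effectiveness": 9, "equity": 7, "affordability": 9, "implementability": 6},
    "mass screening": {"effectiveness": 5, "equity": 4, "affordability": 2, "implementability": 5},
}

def weighted_score(scores: dict[str, float]) -> float:
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(OPTIONS.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```

The arithmetic is trivial; the deliberative work lies entirely in choosing the criteria, agreeing the weights and defending the scores.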
Facts do not ‘speak for themselves’ and any single piece of evidence, whether of the scientific or colloquial type, is rarely complete enough to enable guidance to be created without further evidence and assessment. To be useful, a deliberative process must therefore facilitate the combination and interpretation of the evidence for the purpose intended and enable those engaged in it to explain why they decided as they did.
Maintaining a common understanding of what constitutes evidence is likely to become increasingly difficult as further interest groups or stakeholders are added in any procedure for determining Best Buys. Conversely, the more homogeneous the group in terms of professional background and level of responsibility, the less tension and disagreement there is likely to be about what constitutes permissible evidence. However, it seems unlikely that the object ought ever to be to maximize homogeneity merely for the sake of achieving a common understanding. It is convenient if a common understanding can be reached but, if it cannot be reached, then the differences and the reasons for them are worth facing up to explicitly and should not be obscured through selection bias.
In short, the decision-making process ought to provide a means through which the preferences of participants can be transformed rather than merely aggregated; it should be a process that allows participants to change their minds; it should allow the three kinds of evidence to be assessed and combined — colloquial (e.g., from professional experience, case-studies, other gossip); context-free science with high internal validity (such as evidence from explanatory RCTs); context-specific science with high external validity (such as evidence from cost-effectiveness analyses, pragmatic trials,7 most budget impact analyses) — and it should enable the things that people bring to the deliberation to count (such as their own values, experience, attitudes to risk and degrees of understanding and knowledge).
Some of the problems posed by evidence that might be resolvable through deliberation include situations where:
evidence from more than one expert discipline is involved;
evidence from more than one profession is involved;
some stakeholders’ interests are threatened by evidence;
there are technical disputes to resolve;
evidence is scientifically controversial;
evidence is incomplete;
evidence is lacking;
evidence gathered in one context is to be applied in another;
issues of outcome, benefits and costs go beyond the conventional boundaries (of concept and end-point) of medical research design;
there is substantial uncertainty about key values;
there are risks (quantified or unquantified) to patients that need to be assessed and weighed;
there are risks (e.g., of malpractice suits) to professionals that need to be assessed and weighed;
there are other social and personal values not taken into account in the scientific evidence;
there are issues of equity and fairness of treatment (e.g., of patients similar in many respects but differing in their capacity to benefit);
there are issues of implementability and operational feasibility;
there are issues of short-term financial feasibility;
there are reasons to suppose that implementation may seriously destabilize local strategies and priorities;
wide professional ‘ownership’ is desired;
public credibility is desired;
political ‘trust’ is involved (e.g., no unpleasant surprises for ministers; help on how to handle unwelcome or embarrassing evidence).
When there is evidence from more than one expert discipline, issues can easily arise about language. ‘Cost’ and ‘outcome’ are unlikely to mean the same to a clinician, a sociologist or an economist. Confusion may arise through failing to distinguish between statistical, clinical and policy significance. Views about the relative virtues of cross-sectional and time-series data are not shared. Bayesians and frequentists do not always see eye to eye. Equilibrium gets confused with equipoise. There are a lot of conventions that are manifestly different between disciplines and these can easily become barriers to communication. Many such issues can be resolved only by talking and, moreover, by frequent engagements of a deliberative character.
Feeling threatened is dangerous, not only for the person threatened but also for the whole decision-making process. A deliberative process can be one in which people’s interests are exposed and the character of the risks to which they are exposed is assessed. That in itself may be sufficient protection, for example, through enabling those affected to take preliminary steps to minimize adverse impacts, or for further analysis of the size of the threat and for exploration of any more extensive protection or compensation that might be warranted. But further protection may be required if, say, the revelation that a member of a committee espoused an unpopular political position were to expose them to subsequent discrimination and harm.
Deliberation is likely to be useful when there are technical disputes to resolve in connection with evidence. These are endemic and nontrivial. Some relate to the evidence itself, some to its generation and some to the methodology used to summarize it.
Complex problems will often benefit from deliberation. Examples include issues concerning outcomes, benefits or costs, any or all of which might go well beyond someone’s conventional boundaries of concept (for example, when the principal beneficiary is a family member rather than the patient); issues of metric (biological proxy measures of outcome like blood pressure in comparison to the clinical or social consequences of such indicators); issues of end point (end of trial versus remainder of expected life); issues of uncertainty about the importance attached to different elements in a decision; and lots of other types of issue too.
9.5 Uncertainty
Uncertainty is all-pervading, both that which is formally measured through conventions about statistical significance (for example, less precision in an estimate is usually indicated by a larger standard error) and that which is qualitatively expressed, for example, via a Likert scale of ‘more or less’ likelihood. There can be uncertainty about the right methodology (should benefits be discounted by the same factor as costs? Was the sample large enough to make statements with confidence about the experience of subgroups of patients? Was the measurement of other social and personal values, which are not normally taken into clinical account, appropriate? Ought such effects to be taken into account at all?) It seems plausible to suppose that open discussion about matters of which one is uncertain may help to locate more precisely the reason for the uncertainty and whether, for example, it is the sort of uncertainty that can be resolved by having more, or better, data; or that needs greater investigation of analytical methods; and whether there is a comfort in agreeing on a course of action about which there is a consensus, even though everyone is uncertain. When taking politically controversial decisions, it may be helpful for the minister to be able to explain in Parliament and to the public that there has been extensive consultation, much deliberation, full consideration of expert opinion and the ample weighing of the values of those most affected by the decision. At a minimum, the case becomes easier to make that the decision was not arbitrary and its rationale becomes communicable. This will take on specific significance if the decision is an unpopular one. Both the process and its outcome help to make a decision credible and to legitimize it.
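The discounting question raised above (‘should benefits be discounted by the same factor as costs?’) matters particularly for NCDs, whose benefits arrive late. A minimal sketch with a hypothetical benefit stream and two commonly debated discount rates shows how sensitive the apparent value of a long-horizon intervention is to that single methodological choice.

```python
# Minimal sketch of why the choice of discount rate matters for NCDs,
# whose benefits arrive late. Discount a constant annual benefit stream
# at two rates and compare present values. The rates, horizon and
# benefit stream are all hypothetical.

def present_value(annual_benefit: float, rate: float, years: int, start_year: int = 0) -> float:
    """Sum of annual_benefit / (1 + rate)**t for t from start_year over the given horizon."""
    return sum(annual_benefit / (1.0 + rate) ** t for t in range(start_year, start_year + years))

# Benefits of 100 (e.g., QALYs) per year for 20 years, starting 10 years from now.
for rate in (0.015, 0.035):  # two commonly debated discount rates
    pv = present_value(100.0, rate, years=20, start_year=10)
    print(f"rate {rate:.1%}: present value = {pv:,.0f}")
```

Here the lower rate raises the present value of the same benefit stream by over 40%, enough on its own to move an intervention across a Best Buy threshold.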
9.6 Credibility
Decisions taken on behalf of other people need to be credible. That is, the ‘other people’ in NCDs, typically the public at risk and the professionals who care for them, want to know that decisions taken were taken for good and understandable reasons (especially when controversial); that they were taken in a way consistent with generally accepted social values; and that they were informed by the best quality evidence available. This is true not only of decisions regarding Best Buys but also, and perhaps especially, of buys judged likely to be Wasted, particularly if such buys have powerful political or commercial backing. If the public is going to be able to judge the credibility of the decisions made on its behalf, it needs to be able to penetrate the decision-making process to discover whether the reasoning was sound (and other possible decisions considered); the value judgments were acceptable; and whether the evidence was appropriately identified and interpreted. The public will want to be satisfied that those involved in the process were competent (for example, that the scientists were men and women of unimpeachable scientific authority and integrity); that they sought to promote the public interest and not a narrow selfish interest (whether personal, professional or commercial); and that those who were there to represent the public were appointed in a fair way and could be held to account. Credibility is further served if all stakeholders (i.e., any group likely to be affected for good or ill by the decision) have had a reasonable opportunity to comment before a final decision is taken.
Deliberative processes often include, but are not the same as, consultation or comment. A famous example of consultation was the Oregon experiment to help determine which clinical procedures ought to be included in that state’s Medicaid program. It was not a deliberative process, but a process of consultation in which there were forty-seven community meetings, twelve public hearings and fifty-four panel meetings for healthcare providers. All the data thereby gathered was fed into a committee (the Oregon Health Services Commission) for prioritization of procedures.8 Thus, many were consulted prior to the decision but relatively few participated in its making. The Commission itself doubtless engaged in much deliberation but the participation of all those people who were consulted was not part of the decision-making.
Nor are opportunities to comment the same as deliberation. The National Institute for Health and Care Excellence in England and Wales (NICE) provides opportunities for people to comment on technologies that are under appraisal, alongside consultation and deliberation. The public in general might be invited to comment (say, via a website) and some individuals or organizations may receive specific invitations. Like consultation, commenting can be a part of a deliberative process, but it is not to be equated with one. Neither consulting nor commenting involves mutual deliberation. There is limited interchange, there is restricted participation and neither is an arrangement for the actual taking of decisions, whereas deliberative processes can embody all three. This is what makes deliberative processes different.
One approach that embraces the whole range of comment, consultation and deliberative participation is the Cooperative Discourse Model.9 This entails the elicitation of values and criteria from stakeholder groups, the provision of policy options by expert groups and the evaluation and design of policies by randomly selected citizens. This model seems to have been used to good effect by the UK Committee on Radioactive Waste Management (CoRWM), an independent committee established by the UK Government in November 2003 to develop recommendations for the long-term management of higher level radioactive wastes, which faced a classic set of issues of science and of value. Its terms of reference explicitly required that the review
be carried out in an open, transparent and inclusive manner […] must engage members of the UK public, and provide them with the opportunity to express their views. Other key stakeholder groups with interests in radioactive waste management […] [had also] to be provided with opportunity to participate. The objective of the review [was] to arrive at recommendations which can inspire public confidence and [were] practicable in securing the long-term safety of the UK’s radioactive wastes. It must therefore listen to what people say during the course of its work and address the concerns that they raise.10
The use of the Cooperative Discourse Model seems to have been a success — at least as judged by the criterion that the client knows best. The Government’s response to the report included this:
The reflection of a wide range of viewpoints, and a basis in sound science is key to providing recommendations which inspire public confidence for managing the wastes in the long term, providing protection for people and the environment. The open and transparent manner in which CoRWM has conducted its business has been ground breaking. Accordingly, Government welcomes CoRWM’s report and believes it provides a sound basis for moving forward. Most recommendations can be acted on immediately; others require us to undertake more work.11
The production of evidence itself will often have embodied deliberative processes as, for example, in scientific discussions of the design of a research project, clinical trial or systematic review. The typical scientific evidence on (context-free) efficacy is summarized in the form of narrative reviews, systematic reviews or meta-analyses, each of which will itself have involved a great deal of ‘judgment’ and will often have embodied mini-deliberative processes. Thus, within deliberative processes lie further deliberative processes. ‘Artificial’ evidence, such as evidence from economic/epidemiological models that extrapolate beyond experimental time periods, is particularly suited to deliberation, as is the evidence that comes up through colloquial processes like public meetings, hearings from special witnesses and survey material.
No evidence is totally authoritative; it all involves judgments by people in its creation, assembly and presentation. Some of the judgments are technical and scientific (was the most efficient estimating procedure used?). Some are scientific but also interpretive (are the trial results applicable in another setting?). Some are scientific and judgmental (were the scientists at risk of bias from their funding sources?). Some have the character of social value judgments (was the outcome measure an appropriate indicator of health?). Moreover, these are all questions about which it is perfectly possible for both scientifically trained and lay people to disagree amongst themselves. To be credible, therefore, all these judgments need to be seen to have been reasonable under the prevailing circumstances.
9.7 Some Characteristics of Deliberative Processes
The table below12 offers some specific examples of how common features of deliberative processes can be given practical form. We are not advocating the indiscriminate use of deliberative processes. They are costly and may not be worth their cost. In low- and middle-income countries (LMICs), in particular, gaining credibility may present challenges that are hard to overcome, such as the availability of sufficiently qualified and independent individuals or the availability of evidence of direct local relevance. Under such circumstances, the transparency of the decision-making process becomes even more important, if the client population is to believe that what is claimed as a Best Buy is truly likely to be one and that interventions deemed to be Wasted Buys really are inferior to the alternatives.
Table 9.1 Principles of good governance for HTA.
Principles | Examples of how bodies can adhere to these principles
Independence | Maintain arm’s length from government, payers, industry and professional groups; strong and enforced conflict-of-interest policies.
Transparency | Meetings are open to the public; material placed online; decision criteria and rationale for individual decisions made public.
Consultation | Wide and genuine consultation with stakeholders; willingness to change decisions in light of new evidence.
Scientific basis | Strong scientific methods and reliance on critically appraised evidence and information.
Timeliness | Decisions produced and published in a reasonable timeframe.
Consistency | The same technical and process rules are applied to all priority-setting channels.
Regular review | Regular updating of decisions and of methods, with review dates specified in final reports.
Contestability | The decision-making process can be challenged, through legal challenges or non-judicial appeal mechanisms.
In the early days of the life of an advisory or decision-making organization that is to determine Best Buys, in-camera sessions might be used more frequently than in its more mature days, because at least some members might feel intimidated by the presence of a public, or afraid of unpleasantness downstream should their support for a decision lead to an unwanted outcome, or simply wish to avoid looking indecisive because they have changed their mind about something. Other participants (local politicians, aggressive lobbyists, show-off clinicians) may play to the crowd. Plainly, such measures will militate against credibility, so conflict between the ideal and the practical should be minimized as much as possible. For example, minutes could record disagreements without naming names, meetings could be held with only a select group of public witnesses present and absent evidence could be replaced with the best possible local or international expert opinion.
9.7.1 Case Study: The (then) National Institute for Clinical Excellence (England and Wales)
NICE was created to be an authoritative foundation of ‘clinical governance’. This was (and is) a framework through which National Health Service (NHS) organizations are accountable for continually improving the quality of their services and safeguarding high standards of care by creating a local environment for managing accountability and the audit of clinical practice. From the beginning, it was decided that NICE’s procedures would be conducted with the highest degree of transparency possible and with much participation by ‘stakeholders’. These were categorically defined as patients, informal caregivers, clinical and other professional caregivers, healthcare managers, manufacturers, researchers and the public in general. NICE insisted on being located within the NHS rather than the Department of Health (ministry). It sought the respect of the overwhelming majority of the country’s clinical and health-services research community and the support of the Royal Colleges of Medicine and other bastions of professional life. The royal colleges are the principal professional associations of the United Kingdom’s medical professions. They comprise: The Royal College of Anaesthetists, The Royal College of General Practitioners, The Royal College of Obstetricians and Gynaecologists, The Royal College of Paediatrics and Child Health, The Royal College of Pathologists, The Royal College of Psychiatrists, The Royal College of Radiologists, The Royal College of Surgeons of England, The Faculty of Public Health Medicine, The Faculty of Pharmaceutical Medicine and The Faculty of Occupational Medicine.
It was important to NICE that its guidance could not be dismissed as cranky, under-researched, or second rate. But it also had to be acceptable to the NHS’s users and fair to the inventors and manufacturers of the various interventions in a huge range of patient-management pathways. It also had to be deemed ‘do-able’ by the managers. There had to be lots of opportunities for skeptics and any who might feel threatened to air their concerns and for NICE to respond appropriately.
Some of the ways in which NICE sought to be a model of deliberative process were:
there were open Board meetings that took place bi-monthly around England and Wales, accompanied by public receptions and ‘Question and Answer’ sessions with the chair;
minutes were published on the NICE web pages before confirmation by the Board;
there was a Partners’ Council. This had a statutory duty to meet once a year to review NICE’s annual report. In practice, in the early days it met more frequently as a source of advice and a forum for exchanging ideas and developing the future plans for the Institute. Its membership included representatives from organizations with a special interest in its work such as patient groups, health professionals, NHS management, quality organizations, industry and trade unions. Members were appointed by the Secretary of State for Health (the English minister) and the Welsh Assembly Government. It was abolished after a few years, having served a useful function in getting NICE respectably off the ground;
there was a Citizens’ Council. This was a form of ‘citizens’ jury’ that considered social-value-laden matters referred to it by the Institute’s Board. Its thirty members had no economic involvement in the healthcare system and were selected to be representative of the regions and demographic characteristics of England and Wales. Members were paid £150 per day plus their travel and subsistence expenses. It met twice a year and adopted a deliberative approach and could call witnesses and commission papers. It was managed at arm’s length from NICE by a company specializing in research and community consultation;
the membership of the Technology Appraisals Committee was set broadly. The Committee was a standing advisory committee of the Institute, which had a very public profile since it was the source of NICE’s recommendations for the NHS. Members were appointed for a three-year term. They were drawn from the NHS, patient and care-giving organizations, relevant academic disciplines and the pharmaceutical and medical devices industries. Names of Appraisal Committee members were posted on the Institute’s website;
there were extensive consultation exercises throughout the appraisals process;
there was an appeals procedure. There were three grounds for appeal: that the Institute had failed to act fairly and in accordance with the Appraisal Procedure set out in its Guidance to Manufacturers and Sponsors; that it had prepared Guidance which was perverse in the light of the evidence submitted; and that it had exceeded its legal powers;
there were consultative processes about process. For example, the process through which the procedures for health technology assessment were developed involved several committees with representation of experts from a variety of stakeholders. The outcome was a public document describing procedure;13
there were extensive liaisons with eleven Royal Colleges, seven Independent Academic Centres and seven National Collaborating Centres. NICE created the National Collaborating Centres within consortia that consisted of the royal colleges, professional bodies and patient/carer organizations for developing clinical guidelines. They were: the National Collaborating Centres for Acute Care, Cancer, Chronic Conditions, Mental Health, Nursing and Supportive Care, Primary Care and Women and Children’s Health;
there was considerable joint working with NHS R&D and the National Coordinating Centre for Health Technology Assessment. This was a part of the Wessex Institute for Health Research and Development at the University of Southampton. It coordinated the national HTA research program on behalf of NHS R&D.
Thus, it was determined that the process of technology appraisal was to be open, multi-disciplinary, multi-professional and multi-institutional and that it would have ‘lay’ participation. It was heavily dependent upon people’s willingness to serve pro bono. It was plain from the outset that very large numbers of people would be involved and the Institute itself would be largely a virtual organization.14
Several of these features have been modified since 1999, mainly on grounds of expense, and it is easy to see that NICE, as a ‘Rolls Royce’ of such institutes, cannot be a model to be adopted wholesale anywhere else, nor has it survived as such in England and Wales. Its features, however, facilitated deliberation in evidence-informed decision-making and can readily be adapted to suit different contexts.
9.8 Conclusions
A deliberative process for selecting Best Buys is likely to:
identify relevant clinical, social and political contexts for interpreting context-free scientific evidence about NCDs, simply by virtue of the fact that representative people and people who can interpret the scientific evidence on external validity are there at the table;
generate guidance that is consistent with the context-free scientific evidence and its reasonable interpretation in particular contexts;
command a wide credibility in professional circles and beyond, simply because respected professionals are there at the table;
result in residual opposition that is low in both quality and power. The prediction is that there will be less hurt, less offence and therefore less opposition if deliberation is used than without it;
result in less alienation. If the process is one whose design was actually shaped by everybody with a stake in its outcome, so that they actually become parties to its design and committed to the nature of the process, stakeholders are much less likely to be alienated by its outcome. After all, it was a process that they helped to design and even approved, rather than some other arbitrary process that somebody else invented and thrust upon them. They are more likely to be able to live with the consequences of deliberation, even if on occasion the approved process produces results that are not their preferred ones;
generate guidance whose implementation will be speedy;
identify impediments to the implementation of guidance and find solutions to those impediments: ways of leaping over or going around them;
identify knowledge gaps that might be resolved by further enquiry and research.
Finally, deliberation is not about establishing consensus. There is a lot to be said, however, for discovering whether there is or is not consensus and, when there is disagreement, whether it is a matter of fact that might be resolved by further research and other factual enquiry, a matter of methodology or procedures which might be resolved by specialist workshops, or a matter of value which may need a political resolution at a high level. The important principle to keep in mind is that of facing up to difficulties rather than burying them and of demonstrating reasonableness in the ways they are handled. Therein lies credibility.
Footnotes
1 This chapter draws extensively on Anthony J. Culyer and Jonathan Lomas, ‘Deliberative Processes and Evidence-Informed Decision Making in Health Care — Do They Work and How Might We Know?’, Evidence & Policy: A Journal of Research, Debate and Practice, 2 (2006), 357–71, https://doi.org/10.1332/174426406778023658; and Anthony J. Culyer, ‘Deliberative Processes in Decisions about Health Care Technologies’, OHE Briefing, No. 48 (2009), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2640171
2 Culyer (2009) offers a series of characteristics of ‘good’ deliberative processes; Presidential Commission for the Study of Bioethical Issues, Deliberation for Better Health, Science, and Technology Policy: Five Steps for Effective Deliberation 1 (2006) sets out five steps for effective deliberative approaches for decision-making in health science and technology policy.
3 Richard Norman et al., ‘International Comparisons in Valuing EQ-5D Health States: A Review and Analysis’, Value in Health, 12.8 (2009), 1194–200, https://doi.org/10.1111/j.1524-4733.2009.00581.x; EuroQol Research Foundation, ‘EQ-5D-3L | Valuation’, 2019, https://euroqol.org/eq-5d-instruments/eq-5d-3l-about/valuation/
4 Source adapted from Jonathan Lomas et al., Conceptualizing and Combining Evidence for Health System Guidance, Canadian Health Services Research Foundation (CHSRF) (2005).
5 Ibid.
6 Rob Baltussen et al., ‘Multicriteria Decision Analysis to Support HTA Agencies: Benefits, Limitations, and the Way Forward’, Value in Health, 22.11 (2019), 1283–1288, https://doi.org/10.1016/j.jval.2019.06.014
7 BOLDER research group, ‘Better Outcomes through Learning, Data, Engagement, and Research (BOLDER)? A System for Improving Evidence and Clinical Practice in Low and Middle Income Countries’, F1000Research, 5 (2016), 693, https://doi.org/10.12688/f1000research.8392.1
8 Michael Garland, ‘Rationing in Public: Oregon’s Priority-Setting Methodology’, in Rationing America’s Medical Care: The Oregon Plan and Beyond, ed. by M. A. Strosberg et al. (Washington, DC: Brookings Institution, 1992).
9 Ortwin Renn, ‘A Model for an Analytic-Deliberative Process in Risk Management’, Environmental Science & Technology, 33.18 (1999), 3049–55.
10 Committee on Radioactive Waste Management, Managing Our Radioactive Waste Safely (London, 2006).
11 UK Government, Response to the Report and Recommendations from the Committee on Radioactive Waste Management (CoRWM) by the UK Government and the Devolved Administrations (London, 2006).
12 Reproduced from Francis Ruiz, Kalipso Chalkidou and Laura Morris, ‘Process Matters for Priority Setting and Health Technology Assessment in Indonesia’, F1000Research, 8 (2019), https://doi.org/10.7490/f1000research.1116839.1
13 National Institute for Clinical Excellence, Guide to the Methods of Technology Appraisal (London, 2004).
14 Anthony J. Culyer, ‘NICE’s Use of Cost Effectiveness as an Exemplar of a Deliberative Process’, Health Economics, Policy, and Law, 1.3 (2006), 299–318, https://doi.org/10.1017/s1744133106004026
Authors
Kalipso Chalkidou, MD, PhD, is the Director of Global Health Policy and a Senior Fellow at the Center for Global Development, based in London, and a Professor of Practice in Global Health at Imperial College London. Her work concentrates on helping governments build technical and institutional capacity for using evidence to inform health policy as they move towards Universal Health Coverage. She is interested in how local information, local expertise and local institutions can drive scientific and legitimate healthcare resource allocation decisions. She has been involved in the Chinese rural health reforms and in national health reform projects in Colombia, Turkey and the Middle East, working with the World Bank, the Pan American Health Organization (PAHO), the Department for International Development (DFID) and the Inter-American Development Bank (IDB), as well as national governments. Between 2007 and 2008, she spent a year at the Johns Hopkins School of Public Health as a Harkness Fellow in Health Policy and Practice, studying how comparative effectiveness research can inform policy and US government drug pricing policies.
Kalipso led the establishment of NICE International, which she ran for eight years, and, more recently, of the international Decision Support Initiative (iDSI), which she directs and which is a multi-million, multi-country network working towards better health around the world through evidence-informed spending in healthcare in low- and middle-income countries. iDSI is funded by the Bill and Melinda Gates Foundation, the UK’s Department for International Development and the Rockefeller Foundation and is currently involved in national reform projects in China, India, Vietnam, Ghana, Indonesia and South Africa, working together with key organizations such as the Thai Health Intervention and Technology Assessment Program (HITAP), the US Center for Global Development and PRICELESS at Wits University in South Africa.
Anthony J. Culyer, PhD, is Emeritus Professor of Economics at the University of York (England), Senior Fellow at the Institute of Health Policy, Management and Evaluation at the University of Toronto (Canada) and Visiting Professor at Imperial College London. He is Chair of the Board of the international Decision Support Initiative (iDSI). He was the founding Organizer of the Health Economists’ Study Group. For thirty-three years he was the founding Co-Editor, with Joe Newhouse at Harvard, of the Journal of Health Economics. He was founding Vice Chair of the National Institute for Health and Care Excellence (NICE) until 2003. He is Editor-in-Chief of the online Encyclopaedia of Health Economics. For many years he was chair of the Department of Economics & Related Studies at York and, for six of them, was also deputy vice-chancellor. He has published widely, mostly in health economics.
He is a Founding Fellow of the Academy of Medical Sciences, an Honorary Fellow of the Royal College of Physicians of London and an Honorary Member of the Finnish Society for Health Economics (2013). He holds an honorary doctorate from the Stockholm School of Economics and is a Commander of the Order of the British Empire (CBE). He has been a member of, or chaired, many policy committees and boards in the UK and Canada, including authoring the 1994 reforms of NHS Research and Development and serving as a director of the Canadian Agency for Drugs and Technologies in Health (CADTH).