

Chapter 3. Methodological approaches (designs) in PHIR




Different approaches to address different PHIR questions

There are many approaches (designs) for evaluating population health interventions.

Figure 5. Criteria for choosing the design.


However, these approaches should not be regarded as a simple toolbox from which to draw. The choice of research design is a scientific activity based on: 1) the epistemological stance (see Chapter 4); 2) the evaluation context and the phase of the intervention process; 3) the question being addressed (see Chapter 2); 4) the nature of the intervention being evaluated and its degree of complexity; and 5) the practical aspects of the evaluation (including data availability).

Epistemological stance

Schematically, when considering the evaluation of effectiveness, two epistemological views of causality can be discerned.

The first is a linear view (the same cause produces the same effect). This linear view is the source of the experimental paradigm, which is based on a counterfactual approach (inferring the intervention’s impact by comparison with a situation in which the intervention is not present). This paradigm assumes: 1) that the relationship is stable, so that it can be demonstrated; and 2) that all else is equal (which is achieved by randomisation).

In the second, more systemic view, it is considered that the same cause can produce different effects depending on the conditions under which it is mobilised. This is the principle upon which the realistic evaluation approach formalised by Pawson & Tilley (1997) is based: realistic approaches consider human actions and social interactions to be at the very heart of change. It is through the operation of entire systems of social relations that any change in behaviours, events, and social conditions takes place. A key requirement of realistic evaluation therefore is to take into account the different layers of social reality that make up and surround programmes. It is within this framework that we have proposed the notion of an interventional system that includes, beyond the components of the intervention, pre-existing contextual parameters that may be under or outside the control of the designers and the people organising the intervention. Therefore, any evaluation must assume: 1) that the contributions of all components of this interventional system, as well as the effect of their combination, are assessed, and 2) that the conclusions of the study are grounded in the context, even if 3) some of the conclusions (i.e. the key functions) may be transferable to other contexts (Cambon & Alla, 2021).

This distinction is not trivial; these epistemological stances have anchored scientific cultures for decades. But are they therefore irreconcilable? This is a matter of debate, as illustrated in particular by the strong reactions to Bonell’s proposal to conduct realistic randomised controlled trials combining the two stances in one integrated research design (Marchal et al., 2013). However, managing to integrate them into one global approach is, in fact, a major challenge for the multidisciplinary approach in PHIR.

The evaluation context and the phase of the innovation process

Evaluation can be conducted in two contexts. The first is an innovation context, with an intervention being created de novo, possibly by the research teams. The other is an observation context, in which the researchers evaluate an intervention already developed by other actors, and which may have been in place for a long time.

In an innovation context, the development and evaluation of the intervention involve several phases that can sometimes extend over several years. Schematically, the innovation process consists of four broad phases (see the previous chapter):

  • the development of an intervention and its theory;

  • the piloting of this intervention and the analysis of its viability;

  • its wider deployment and the evaluation of its effectiveness and the conditions for its effectiveness (analysis of processes and mechanisms);

  • its generalisation/scaling up (and the analysis of transferability factors, impediments, and drivers of its deployment and sustainability).

It should be noted that this phased process is theoretical, even simplistic. In practice, research questions can cut across the different phases. For example, the intervention theory can be constructed, refined, and validated throughout this process. The most suitable research designs for each phase differ. In particular, the controlled trial can be used to analyse effectiveness; it is generally not appropriate for the other phases.

In an observation context, where the research team is evaluating a pre-existing intervention, the experimental method is generally not feasible, for reasons of practicality and acceptability, as well as for ethical reasons (it may be considered unethical to withhold from a group an already disseminated intervention around which there is professional consensus in order to evaluate it).

The question being addressed

The research question is what should drive the choice of research design in the first place. For example, if the researchers are interested in effectiveness, a counterfactual method may be suitable. A realistic evaluation is also possible. If they are interested in mechanisms (including conditions for effectiveness), a theory-based evaluation would be more relevant. Finally, if their focus is more on implementation, a case study in a routine situation would be appropriate. These distinctions are very schematic, and the designs can be combined (Oakley et al., 2006).

Figure 6 presents the main research designs used to evaluate population health interventions along two axes: 1) seeking internal versus external validity; and 2) focusing on outcomes or on processes and mechanisms.

With regard to the concept of validity, it is important to note that research teams often have radically different understandings depending on their epistemological stances. Figure 6 uses Campbellian concepts commonly adopted in clinical and epidemiological research to simplify the presentation of research designs. In reality, however, these concepts are the subject of much debate, since the criteria for scientific quality are numerous and not limited to internal and external validity. To the rigour of quantitative approaches across the four dimensions of internal validity, external validity, reliability, and objectivity, the proponents of qualitative approaches suggest adding quality principles across the four dimensions of credibility, transferability, procedural accountability, and confirmability (Regragui et al., 2018). For those using qualitative methods, Laperrière’s (1997) work (in French) on these scientific criteria will undeniably be of help.

Figure 6 uses the notions of internal and external validity, which were formalised by Campbell and Stanley (1966). Internal validity reflects the causal relationship in experimental situations; external validity refers to the generalisability of results obtained in experimental contexts to other contexts and populations. The two notions should be seen as the two ends of a continuum that vary inversely. In other words, in a given study, the stronger the internal validity, the weaker the external validity, and vice versa. For example, the randomised controlled trial is considered by some to be the design with the strongest internal validity (for its ability to demonstrate a causal relationship, all else being equal) but with weak external validity, because the conditions and populations included in the controlled trial are not representative of real life. Conversely, observational studies are considered to have weak internal validity (an observed phenomenon may be related to elements other than the intervention, which are not controlled by the researcher) and strong external validity, since by their nature, observational studies observe real life.

Figure 6. Research designs most frequently used in PHIR and for related research questions.


Source: Adapted from Minary et al. (2019).

A study may be focused on the intervention’s results and/or on its processes and mechanisms. In the first case, the aim is to demonstrate that the intervention produces a particular outcome, without questioning how (for example, many programmes assessed as effective abroad are duplicated in France with no prior reflection on their transferability to a different context). In the second case, the aim is to look beyond the outcome to understand how it was achieved, by whom, and under what conditions. These process and mechanism analyses are particularly important for complex interventions, which is the case for most population health interventions. This is true from the research standpoint (to explain what may have produced a positive outcome, or conversely, in the event of a negative or unintended outcome, to determine whether the latter is linked to the ineffectiveness of the intervention per se or to the conditions and modalities of its implementation), as well as from the operationalisation standpoint (if the aim is to transpose, perpetuate, or generalise an intervention, these elements of understanding are essential) (Craig, Dieppe et al., 2012). A controlled trial may be a suitable tool for addressing a question of effectiveness. A realistic evaluation approach will be more appropriate if the focus is on mechanisms.

The nature of the intervention being evaluated and its degree of complexity

The nature of the intervention will influence the choice of research design. In particular, some interventions cannot be integrated into experimental processes. For example, it is generally not possible to randomly assign a green space. For such an object, an evaluation can only be observational.

Other important scenarios to take into account are actions for which the units of intervention cannot be individualised but rather are collective, such as a change in the environment, a health communication campaign, an action to modify the practices of professionals, etc. In such cases, individual randomisation does not make sense (we cannot randomise within a city who may or may not use a bicycle path or who may or may not see a poster). For such cases, the more appropriate approaches are those in which the intervention modalities being compared are composed not of individuals but of groups of individuals (a city, a school, a patient population), referred to as clusters. The same issue arises for individual interventions with potentially collective effects (e.g. contamination between individuals), such as vaccination (a vaccinated person’s immunity can indirectly protect those around them) or health education (a behaviour change can spread within a social group). For such interventions, a cluster approach might also be considered.

The complexity of the intervention being evaluated is also a determining factor in the choice of method. It is not a binary matter, with interventions being either simple or complex (Box 8). Complexity depends on the dimensions considered: the interaction of multiple components within the intervention (Craig, Dieppe et al., 2012), or the dynamic interaction between intervention components and contextual factors (Cambon et al., 2019). Thus, an intervention considered simple by some definitions might be seen as complex by others.

Box 8. Simple or complex

The act of vaccination is a simple intervention; a vaccination campaign is a complex intervention if we take into account the organisational and social aspects of vaccine acceptance in the population.
A complex intervention can undergo simple assessment, depending on the question asked. For example, an urban policy aimed at promoting soft mobility is highly complex but can be simply assessed using a comparative measurement of behaviour (physical activity) (Moore et al., 2017; Szreter, 2003).

Practical aspects of evaluation

The proposed research designs have different constraints in terms of legal framework, costs, feasibility, access to data, implementation time, social acceptability, etc. These practical aspects also factor into the choice of research design. Thus, external contingencies sometimes constrain the choice, over and above the necessary discussions with stakeholders.

Depending on their nature, research designs can also facilitate, to a greater or lesser extent, a process of partnership research (see Chapter 5) or of knowledge transfer from research results to practice and decision-making (see Chapter 6). These factors are also part of the selection criteria. For example, research designs that focus on processes and mechanisms or that use consensus-building tools (e.g. Delphi, concept mapping) make it easier to involve stakeholders from the outset of the research process (see Chapter 4).

Main research designs used

We have chosen to present the research designs according to the main question they address:

  • counterfactual, experimental, quasi-experimental, and observational approaches that generally focus on outcomes;

  • more comprehensive approaches that incorporate analyses of processes and mechanisms and theory-based evaluations.

It should be kept in mind that research designs are not mutually exclusive. They can be combined (process evaluations integrated into controlled trials) and/or used concurrently or successively in the innovation process. For example, a realistic evaluation might be used in a pilot study, followed by a controlled trial to evaluate the results. Or conversely, a controlled trial might be followed by a case study to analyse the conditions of effectiveness, or a theory-based evaluation might be followed by a natural experiment conducted as part of the scaling-up process.

These research designs may involve quantitative, qualitative, or mixed methods for data collection and analysis (see Chapter 4).

Finally, some of these research designs may build on the case study principle. A case study consists of an in-depth analysis of a case, i.e. a specific intervention. Case studies are widely used in the social sciences for individual cases (e.g. in psychology) or collective cases (e.g. in sociology). In PHIR, the intervention units are instead population-based, according to where the interventions are organised (a living area, a school, an institution, etc.). The case study method is useful for analysing an intervention in its context to address the various research questions (see Chapter 2). A case study is a scientific approach that can use quantitative, qualitative, or mixed methods (see Chapter 4). In PHIR, the study may be based on a single case (e.g. as in a pilot and viability study), or on a series of concurrent or successive cases (e.g. a realistic evaluation), using a multiple case study approach (Yin & Ridde, 2012).

Some research designs may also involve modelling, in whole or in part (see Chapter 4).

The experimental randomised controlled trial approach

For evaluating population health interventions, the randomised controlled trial has been and still is often presented as the best method (the “gold standard”). This may be due to the transposition of the principles of evidence-based medicine to public health in the late 1990s (Jenicek, 1997). It may also be linked to the social experimentation trend prevalent since the 1960s in North America (Campbell & Stanley, 1966) and recently brought to the fore by the work of certain development economists (Evans, 2021; Jatteau, 2021).

The controlled trial is an experimental research design based on the principle of comparing (health) outcomes between one group that receives an intervention and a control group without that intervention or receiving a different intervention. Assignment to one of the two groups is randomised by individual draw (i.e. participants are assigned one at a time). This method is considered by some to be the gold standard for assessing the effectiveness of interventions because random assignment balances the characteristics (demographic, social, etc.) of the two groups. Thereafter, if a difference in outcomes is observed, it can be attributed to the intervention alone. This type of study is considered the most suitable for demonstrating causality. However, there are conditions for this (Bédécarrats et al., 2022). In particular, the initial comparability conferred by random assignment must be maintained; the behaviours of the study participants and the conditions under which they are monitored by the researchers must be similar, hence the practice of double blinding (in which neither the participant nor the researcher knows to which group the participant belongs). It is to ensure this double blinding that placebos are used in drug trials.
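To make the counterfactual logic concrete, here is a minimal simulation sketch (ours, not from the chapter; all variable names and effect sizes are illustrative) showing how individual random assignment balances a confounder across arms, so that the simple difference in group means recovers the intervention effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# A baseline determinant of the outcome (e.g. health literacy), unseen by the analyst.
literacy = rng.normal(0.0, 1.0, n)

# Individual random assignment: 1 = intervention, 0 = control.
arm = rng.integers(0, 2, n)

true_effect = 0.30
outcome = 0.5 * literacy + true_effect * arm + rng.normal(0.0, 1.0, n)

# Randomisation balances the unmeasured confounder across arms...
print("mean literacy, control     :", round(literacy[arm == 0].mean(), 3))
print("mean literacy, intervention:", round(literacy[arm == 1].mean(), 3))

# ...so the simple difference in means estimates the intervention effect.
print("estimated effect:", round(outcome[arm == 1].mean() - outcome[arm == 0].mean(), 3))
```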

The controlled trial is one of many research designs that can be used to assess the effectiveness of population health interventions. This approach should not, however, be regarded as the gold standard, as it would be in therapeutic drug trials. Indeed, applied to the field of population health interventions, this approach can be criticised from three angles: purpose, epistemology, and methodology.

  • Purpose. A controlled trial addresses the question of effectiveness. However, the matter being assessed during PHIR may be something else entirely (see Chapter 2). A trial is not the best method for addressing questions of process, viability, acceptability, etc. Moreover, not all interventions lend themselves to a random assignment; for example, it is difficult (if not impossible) to randomly assign an item of legislation.

  • Epistemology. A controlled trial addresses the question of an intervention’s effectiveness with “all else being equal”. This implies a “decontextualising” perspective: setting up an experimental situation in which a “pure” effect of the intervention, that is, one without the influence of external factors, can be observed. However, if the outcome of an intervention is considered to be the product of interaction between interventional and contextual elements (see previous chapters) (Cambon et al., 2019), then what is the significance of a causal relationship obtained under conditions never observed in a real-life situation (Zwarenstein & Treweek, 2009)? Conversely, a negative outcome obtained in a controlled trial might not be due to the ineffectiveness per se of the intervention, but rather to the experimental conditions under which the intervention was implemented. In other words, the conclusion of a controlled trial could, in this field, be invalid, or valid but of little relevance.

  • Methodology. When applied to population health interventions, controlled trials may be marred by significant biases that compromise the validity of their results.

Biases of randomised controlled trials as applied to PHIR

The biases described in this section are not specific to PHIR. However, they can be so strong in this field that they call into question the very choice of an experimental research design.

These biases are mainly linked to the nature of the intended effects, which generally relate to behaviour: the behaviour of those delivering or receiving the intervention, whether as a direct effect of the intervention (e.g. in health education) or as an indirect effect (of actions aimed at providing the environmental conditions for a behaviour change, such as installing bicycle paths). A major assumption of the controlled trial in the clinical field is that the biological outcomes observed in the participants reflect those in the general population. This assumption makes sense in the clinical setting. For example, the hypothesis that the immunity conferred by vaccination in a controlled trial does not differ from the immunity conferred by the same vaccination in the general population is plausible. Conversely, when looking at human behaviour in PHIR, this assumption is not tenable. Indeed, the very fact of participating in a controlled trial, as well as the conditions under which it is carried out, may in themselves determine behaviour or be associated with determinants of behaviour (e.g. a controlled trial may recruit people who are more attentive to their health). Thus, what is observed in the controlled trial may be far removed from what would be observed in real-life conditions. To draw a parallel, controlled trials are to PHIR what animal studies are to clinical research: they can be helpful for generating hypotheses but are generally not sufficient for reaching operational conclusions.

Broadly, we can identify three main types of bias in controlled trials in PHIR (Tarquinio et al., 2015): recruitment bias, bias related to experimental conditions, and bias related to lack of blinding.

Recruitment bias

This family of biases is related to the fact that the subjects participating in the intervention differ from those who do not participate. This is particularly salient when an intervention’s causal effect can vary depending on the individual. For example, in behaviour change interventions, the same intervention dose will have less impact on people whose needs are less marked (Victora et al., 2004). Among these biases, voluntary response bias is particularly relevant; it relates to the fact that the factors that induce a subject to participate in a controlled trial also contribute to the outcome (e.g. attention to health, health literacy, and sociocultural and economic factors).
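A hedged illustration of this voluntary response bias (the participation model and effect sizes are our assumptions): when the factor that drives enrolment also moderates the intervention’s effect, the mean effect among volunteers drifts away from the population-level effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Attention to one's health in the general population (standardised).
attention = rng.normal(0.0, 1.0, n)

# Assumption: more attentive people are more likely to volunteer for the trial.
p_enrol = 1.0 / (1.0 + np.exp(-2.0 * attention))
enrolled = rng.random(n) < p_enrol

# Assumption: the intervention helps less where needs are less marked,
# i.e. the individual effect shrinks as attention rises.
effect = np.clip(0.5 - 0.3 * attention, 0.0, None)

print("mean effect, whole population:", round(effect.mean(), 3))
print("mean effect, trial volunteers:", round(effect[enrolled].mean(), 3))
```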

Bias related to experimental conditions

Experimental conditions (e.g. specially trained professionals, interventions framed by standard operating procedures, etc.) are specific to the trial. They can be determinants of behaviour that differ from those in real life. In particular, the monitoring of participants (through observations, questionnaires, etc.) in a trial can lead to specific behaviours among these participants. This is the Hawthorne effect, well described in psychology, whereby participants change their behaviour because they know they are being observed. The simple fact of answering a questionnaire about one’s behaviour can itself influence behaviour (it is, in fact, an interventional tool, identified as a behavioural change technique [Michie et al., 2013]). This helps to explain why, even in the control groups of controlled trials, without any intervention, behaviours differ from those in the general population.

Moreover, the standardisation of interventions, inherent in the experimental approach, is by its very nature at odds with the need to adapt complex interventions to their implementation context (Craig et al., 2013). However, it is known that such adaptation to context is a key factor in effectiveness. Thus, a negative outcome in a controlled trial may be related not to the ineffectiveness of the interventional component per se but rather to the fact that it was implemented in a standardised manner in a constrained context, which could, among other things, limit the actors’ commitment.

Bias related to lack of blinding

Knowing which intervention one is participating in, and one’s perception of it, can influence participants’ responses and behaviours as well as the investigator’s judgement. This is the reason for double blinding.

In particular, in PHIR, there is the risk of resentful demoralisation (demoralisation bias). This may occur, for instance, if control group participants think they are not receiving a desirable intervention, which then negatively affects their attitude and behaviour and, consequently, the controlled trial results.

Likewise, when comparing several interventions, one may be preferred by a particular participant. Moreover, the outcome of the assessment may depend on the interaction between preferences and the assigned intervention (i.e. better results obtained by people who prefer the intervention to which they have been assigned). This phenomenon is observed in all controlled trials for which blinding is not possible, as is the case in PHIR and in some non-drug clinical interventions. For example, in a comparison of a brief intervention and physiotherapy for neck pain, the results were diametrically opposed according to people’s preferences: in those who preferred physiotherapy, the latter was more effective than the brief intervention, and vice versa (Moore et al., 2015). This is particularly important to consider, as the recruitment methods used for a controlled trial can select people with preferences, which could then strongly influence the results in one direction or the other. For instance, a controlled trial aimed at comparing a digital health application with traditional monitoring will not have the same results – it may even have opposite results – if participants are recruited online with an incentive to test a new application rather than by their treating physician. From the standpoint of transferring results to real life, one may also ask what the point is of assessing an intervention without taking preferences into account when, in real life, people generally choose the intervention they prefer. Moreover, we know that the preferences of a community can also influence outcomes among individuals participating in an intervention (Ouédraogo et al., 2019).
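The following sketch (with assumed, purely illustrative effect sizes) shows how such a preference-by-assignment interaction can flip a trial’s conclusion depending on the preference mix that recruitment selects:

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(prop_prefer_A, n=10_000):
    """Return the estimated A-vs-B difference for a given preference mix."""
    prefers_A = rng.random(n) < prop_prefer_A
    arm_A = rng.integers(0, 2, n).astype(bool)  # randomised; blinding impossible
    matched = (arm_A == prefers_A)              # assigned to the preferred arm?
    # Assumption: the outcome improves by 0.4 when preference and assignment match.
    outcome = 0.4 * matched + rng.normal(0.0, 1.0, n)
    return outcome[arm_A].mean() - outcome[~arm_A].mean()

# E.g. online recruitment of enthusiasts vs recruitment by treating physicians:
print("80% prefer A:", round(trial(0.8), 3))   # A looks better
print("20% prefer A:", round(trial(0.2), 3))   # B looks better
```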

These specific biases in PHIR help to explain the gap that is often observed between the effectiveness of population health interventions as measured in controlled trials and that observed in real-life situations. This underscores the limitations for PHIR of the experimental approach inherited from clinical research.

Other experimental counterfactual approaches

These are experimental approaches (randomly assigned de novo interventions) adapted from the conventional randomised controlled trial model to take into account the constraints of population health interventions.

These adaptations are intended to respond to two challenges. The first is how to use an experimental research design (i.e. with random assignment) for interventions that do not lend themselves to individual randomisation; these are cluster randomised controlled trials. The second is how to position the design in a situation that more closely approximates real life (i.e. to maximise the internal/external validity ratio); these are pragmatic controlled trials and controlled trials that take preferences into account.

These three types of controlled trials are not mutually exclusive and can be broken down into subtypes (e.g. stepped-wedge cluster randomised trials).

Pragmatic controlled trials

Schwartz and Lellouch (1967) proposed a distinction between “explanatory” controlled trials, carried out under ideal conditions to confirm a hypothesis, and “pragmatic” controlled trials, carried out under real-life conditions to support decision-making. In other words, an intervention is evaluated by comparison with other routine interventions (Patsopoulos, 2011; Ridde & Haddad, 2013). As mentioned earlier, English has two words to distinguish the effect obtained under natural conditions (effectiveness, in pragmatic trials) from that obtained under experimental conditions (efficacy, in classical trials) (Cochrane, 1972).

Of course, there is no binary separation between the two types of controlled trial, but rather a continuum with several dimensions to consider (Thorpe et al., 2009). More than the controlled trial itself, it is the approach that is described as more or less pragmatic depending on how it is conducted (e.g. involving the usual providers is more pragmatic than involving specially recruited and specifically trained providers in the controlled trial; including the general population is more pragmatic than having strict inclusion criteria that select participants based on medical or social standards, etc.).

A look at the nature of potential providers mobilised by the research illustrates such a continuum, from least to most pragmatic: 1) providers drawn from the research team; 2) the regular providers, selected according to strict criteria (in terms of professional skills, etc.); 3) the regular providers, not specially selected, but specially trained with a guide to good practice to be followed in the research process; and 4) the regular providers, as part of their usual practices. This pragmatic approach is an exception in clinical research (fewer than one trial per 100,000) (Zwarenstein & Treweek, 2009), but is quite normal in population health interventions, which are generally not conducted in research settings but in real-life environments and involve (or should involve) the stakeholders in these environments.

To best conduct a pragmatic approach, it is important to involve the stakeholders in developing the protocol and conducting the controlled trial. This makes it possible to obtain results that can more easily be “routinised” and scaled up.

The reason why these controlled trials are rarer is that they are more challenging to conduct. In particular, it is more difficult to demonstrate an effect under pragmatic conditions because the variability induced by the situation (greater heterogeneity of people, more variability in professional practices, diversity of contexts, etc.) generates “noise” that makes the actual outcome of the intervention harder to observe. The required sample sizes are thus larger than in conventional trials, and the trials are consequently more expensive.
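A back-of-the-envelope illustration using the standard two-arm sample-size approximation (a textbook formula, not from the chapter; the numbers are illustrative): inflating the outcome’s standard deviation to mimic real-life “noise” inflates the required number of participants accordingly:

```python
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Approximate participants per arm to detect a mean difference `delta`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return round(2 * (z * sd / delta) ** 2)

# Same target effect, but real-life heterogeneity inflates the outcome's sd.
print("explanatory conditions (sd = 1.0):", n_per_arm(delta=0.3, sd=1.0))
print("pragmatic conditions   (sd = 1.5):", n_per_arm(delta=0.3, sd=1.5))
```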

Cluster randomised controlled trials

In these trials, the unit of randomisation for the intervention in the study is not a person, as in the classic individual controlled trial, but rather a group (cluster). For example, the unit of randomisation can be a city, a school, a hospital, a patient population, etc.

To assess the effect of modifications to school timetables on students’ physical activity, for example, the randomisation must be done by class, school, or city, since the modification concerns, de facto, all pupils enrolled together.

These controlled trials have known methodological limitations (selection bias, effect dilution, cluster effect, cluster imbalance, lack of blinding, etc.) (Minary et al., 2019). They also present ethical and regulatory problems. In particular, they raise the question of consent. One of the cardinal principles of health research is the individual consent of the persons taking part in it. But how can consent be given when the intervention is collective and people are subjected to it in the places where they live, work, or obtain healthcare? Alternative approaches to consent have been developed and are under consideration at the international level (Weijer et al., 2012).
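The cluster effect mentioned above can be quantified with the textbook design effect, DE = 1 + (m − 1) × ICC, where m is the cluster size and ICC the intra-cluster correlation. A short sketch with illustrative numbers (not from the chapter):

```python
def design_effect(cluster_size, icc):
    """DE = 1 + (m - 1) * ICC for clusters of m individuals."""
    return 1 + (cluster_size - 1) * icc

n_individual = 350  # n required under individual randomisation (illustrative)
for m, icc in [(30, 0.01), (30, 0.05), (200, 0.05)]:
    de = design_effect(m, icc)
    print(f"clusters of {m:3d}, ICC = {icc}: design effect {de:.2f}, "
          f"n required ~ {round(n_individual * de)}")
```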

Several variants exist in terms of methodology. One that is increasingly used in PHIR is the stepped-wedge cluster randomised trial (Hemming et al., 2015). In these trials, all groups receive the intervention to be evaluated. What is randomly assigned is not whether a group receives the intervention, but rather the order in which the different groups receive it, which makes it possible to compare periods with and without the intervention. This type of trial has two main advantages: 1) since all groups receive the intervention, it is easier to gain community support (it is difficult, for example, in a conventional controlled trial to explain to a school community that it will not receive the innovative intervention because it has been assigned to the control group); and 2) operationally, this allows the intervention to be implemented progressively, thereby smoothing out staffing requirements over time. The major drawback of this type of controlled trial is that the total study time is longer because of this phased implementation.
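A minimal sketch of a stepped-wedge allocation (cluster names and counts are ours, purely illustrative): every cluster eventually receives the intervention, and only the order of crossover is randomised:

```python
import random

random.seed(7)
clusters = ["school A", "school B", "school C", "school D"]
random.shuffle(clusters)  # randomise the order in which clusters cross over

periods = len(clusters) + 1  # one baseline period, then one crossover per step
for step, cluster in enumerate(clusters, start=1):
    schedule = ["control"] * step + ["intervention"] * (periods - step)
    print(f"{cluster:9s}: {schedule}")
```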

Patient preference controlled trials

To limit biases related to people’s preferences for one or another of the interventions being compared, several research designs have been proposed (Torgerson et al., 1996).

The main ones are (Figure 7):

Figure 7. The main research designs taking preferences into account (A & B groups = the two arms with alternative interventions; R = randomisation).


Source: Torgerson & Torgerson (2008).

  • A preference-focused research design, in which participants are asked about their preferences and two pairs of groups are formed, resulting in two comparisons: 1) one among the indifferent, who receive the intervention or not by randomisation; and 2) one among those who have expressed a preference, who are assigned to their preferred arm without randomisation (see the sketch after this list).

  • A research design with preferences recorded before randomisation, in which participants’ preferences are recorded after they have given consent in the usual way and before randomisation. The interactions between these preferences and the outcomes are then analysed statistically.

  • A research design with randomisation prior to consent (Zelen design), which involves randomising participants before consent. Consent is sought only from those assigned to the experimental group (not the control group), and only those accepting the intervention are included in the experimental group; the others are assigned to the control group.
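As a schematic recap (our sketch, not a published protocol), the allocation rule of the first, preference-focused design can be written as follows: participants with a stated preference are given their preferred arm, and only the indifferent are randomised:

```python
import random

random.seed(3)

def allocate(preference):
    """Return (arm, randomised?) for a stated preference: 'A', 'B', or None."""
    if preference in ("A", "B"):
        return preference, False             # preferred arm, no randomisation
    return random.choice(["A", "B"]), True   # indifferent: randomised comparison

for pref in ["A", "B", None, None]:
    arm, randomised = allocate(pref)
    print(f"preference={pref!s:4} -> arm {arm} ({'randomised' if randomised else 'chosen'})")
```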

These different types of controlled trials that take preferences into account have the advantage of describing preferences, analysing their impact, and using this information to interpret the results. This consideration of preferences is particularly useful from a pragmatic perspective because it reflects reality (in real life, our actions are not the result of a draw but of a choice or preference, even if these are more or less influenced).

Quasi- and non-experimental counterfactual approaches focused on results

As a preamble to this section, it should be noted that the terminology is not clear-cut and depends on scientific traditions. Quasi-experimental studies generally refer to controlled trials (experimental studies in which the researcher has control over all or part of the intervention) in which the intervention assignment is not randomised. In observational studies, researchers observe an existing intervention in which they do not intervene (e.g. the introduction of a new regulation). The term natural experiment is sometimes used when referring to observational studies, sometimes to quasi-experimental studies, and sometimes to both. Finally, some use the term quasi-experimental studies to refer to observational studies (de Vocht et al., 2021). We have chosen to refer to controlled trials in which the researcher has an influence (even if partial) on exposure as quasi-experimental studies, and to those where the researcher has no influence as observational studies.

Quasi-experimental studies

Quasi-experimental studies are so called because they have a hybrid format between controlled trials and observational studies (Campbell & Stanley, 1966). They are called “experimental” because an intervention is put in place de novo to be evaluated, and “quasi” because there is no random assignment. The two typical research designs in this context are:

  • pre-test/post-test studies, in which a situation within a population is compared before and after the intervention is implemented;

  • here/there studies, in which a situation is compared between the group receiving the intervention and another group without the intervention (elsewhere).

There are other types of quasi-experimental studies derived from these two research designs. For example, the two may be combined (one measurement taken beforehand in both the intervention group and the control group, and one measurement taken afterwards in both groups). Another example is a research design in which the intervention is withdrawn and three measurements are taken (pre-implementation, post-implementation, post-withdrawal). Time series are another derivative (Box 9). These consist of a series of measurements before the intervention is implemented and a series after. The analysis compares the dynamics of change over time before and after implementation of the intervention; note that time series are also used in observational studies (see below).

Box 9. Time series to assess policy effectiveness

Collecting original empirical data is sometimes very costly and sometimes even impossible when an intervention has started before a research design is in place. In such cases, it is possible to use routine, quality-assessed administrative data to evaluate an intervention using time series designs that are interrupted (by the intervention) and controlled (with a control group and background variables). Figure 8 shows the effects of two interventions (vertical bar 1: 80% subsidy for deliveries in both groups; vertical bar 2: 100% subsidy only in the group of blue districts) on the proportion of facility-based deliveries in four districts in the same region. Statistical models are used to measure the magnitude and duration of these effects.

Figure 8. Evolution of the monthly proportion of facility-based deliveries in the intervention and control groups between January 2004 and December 2014 (observation lines and fitted lines).


Source: Nguyen et al. (2018).
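For readers who want to see the mechanics, here is a hedged sketch of the segmented (interrupted) regression underlying such analyses; the variable names and simulated data are ours, not those of Nguyen et al. (2018). The coefficient on post estimates the immediate level change, and the coefficient on time_since estimates the change in trend:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
months = np.arange(60)
post = (months >= 36).astype(int)                # intervention starts at month 36
time_since = np.where(post == 1, months - 36, 0)

# Simulated monthly proportion: baseline trend + level jump + slope change + noise.
y = 0.30 + 0.002 * months + 0.10 * post + 0.004 * time_since + rng.normal(0, 0.02, 60)
df = pd.DataFrame({"y": y, "month": months, "post": post, "time_since": time_since})

# Segmented regression: `post` = immediate level change; `time_since` = trend change.
model = smf.ols("y ~ month + post + time_since", data=df).fit()
print(model.params.round(4))
```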

Such quasi-experimental studies are useful, often for reasons of practicality or acceptability. For example, if a municipality is willing to fund a new intervention, the intervention must be implemented in its jurisdiction (it would not be able to fund the intervention elsewhere), thus precluding random assignment.

They have the drawback, according to some, of having weaker internal validity than controlled trials, because factors other than the implementation of the intervention may explain an outcome (for example, concomitant changes in other contextual factors in pre-test/post-test designs, an intercurrent intervention, a change in legislation, etc.). However, there are many ways to strengthen their internal validity through statistical or contextual methods.

Observational studies (natural experiments)

In what are known as observational studies, the researcher does not instigate an intervention but observes an existing one. These studies, sometimes called natural experiments (Petticrew et al., 2005), have the advantage of very strong external validity, as they observe results under natural conditions. They make it possible to evaluate interventions that have already been implemented. The cost of this type of research is generally lower, because the researcher does not have to bear the cost of the intervention, unlike in trials and quasi-experimental studies.

Their main drawback is weaker internal validity. If an effect is observed, it may be related to factors other than the intervention. For example, the reasons for which the intervention was implemented could be a contributing factor to the outcome, as could other intercurrent interventions. Thus, there has been a significant decline in the number of smokers in France in recent years, but it is difficult to attribute this to any one particular intervention, because tobacco control efforts have combined many levers implemented concomitantly over a long period.

Approaches that integrate process and mechanism analysis

Process evaluations integrated into controlled trials

While these types of controlled trial could have been included in the previous section because they focus on outcomes, they offer a complementary approach in that they also examine processes (Oakley et al., 2006). They thus aim to explain results by elucidating the action mechanisms (which intervention components contributed to the results, and how). They can also be used to explain the absence of results (whether linked to the intervention itself or to how it was implemented). Finally, they help explain variations in results between sites or social groups.

The main limitation of these analyses integrated into controlled trials is that the processes at work under experimental conditions are probably different from those under natural conditions, and the observation is therefore not necessarily reproducible.

Theory-based evaluations

As mentioned earlier, interventions and contextual elements are actually intertwined in what is called the interventional system. Within that system, the core concept is in fact that of mechanisms of effect, which become the real key transferable functions (Cambon & Alla, 2019). These mechanisms result from the combination of human factors (e.g. knowledge, attitudes, representations, psychosocial and technical skills) or material factors within the system. This notion of mechanism can be defined as an agent’s reaction in a given context (Lacouture et al., 2015). It characterises and punctuates the change process (agent’s reaction, cognitive or social processes). These mechanisms can be psychological (e.g. motivation, self-efficacy, self-control, skills) in a behavioural intervention, or social (e.g. shared values in a community, perception of power-sharing) in a socio-ecological intervention.

Evaluation is therefore about understanding how this system works: on whom, how, and under what conditions does it produce an effect? Thus, approaches that favour contributory analysis of the system’s various elements (Mayne, 2001, 2010) in relation to the production of outcomes, such as theory-based evaluation (TBE) (Cambon & Alla, 2021; Chen, 1990; De Silva et al., 2014; Weiss, 1997), make sense. This means exploring the pathway by which a phenomenon, such as the desired health outcome, occurs by examining the causal chain involved. In other words, instead of “does the intervention work?”, the PHIR question becomes “given the number of components that influence the outcome, how did each contribute significantly to the outcome observed?” To understand how each element of the system, alone or in combination, produces an outcome, the interventional system must be untangled. One solution is to characterise this entanglement by making explicit and validating the causal hypotheses it reveals. This involves understanding how the interventional system works (what are the combinations of parameters that trigger these mechanisms?) and the conditions for its transferability (which mechanisms are to be reproduced in another context?).

These evaluations may be conducted alone (e.g. realistic evaluation) and/or combined with a conventional experimental design (Bonell et al., 2012). The most commonly used in health research are realistic evaluation (Pawson & Tilley, 1997) and the theory of change (Chen, 1990; De Silva et al., 2014).

In the first, the effectiveness of the intervention depends on the underlying mechanisms at play in a given context. Evaluation involves identifying the context-mechanism-outcome (CMO) configurations that explain how (by which mechanism, M) a phenomenon (the outcome, desired or not, O) occurs in a specific context (C), with the interventional elements included in the context. These configurations are called middle-range theories. Their recurrence is observed in successive case studies.
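As a toy illustration (ours, with invented content) of this logic, CMO configurations can be thought of as structured records whose recurrence is tallied across successive case studies; recurring configurations are candidate middle-range theories:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class CMO:
    context: str
    mechanism: str
    outcome: str

# Hypothetical observations from three case studies of the same intervention.
observations = [
    CMO("strong political support", "motivation to use evidence", "evidence used"),
    CMO("strong political support", "motivation to use evidence", "evidence used"),
    CMO("understaffed team", "perceived burden", "evidence not used"),
]

# Recurring configurations are candidate middle-range theories.
for config, count in Counter(observations).most_common():
    print(f"seen {count}x: {config}")
```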

In the second, the components or ingredients of the intervention are made explicit and examined separately from those of the context to study how they contribute to producing outcomes. As with realistic evaluations, the initial hypothesis (the intervention in theory) is based on hypotheses that are empirical (i.e. from previous evaluations) or theoretical (i.e. from social or psychosocial theories). What is validated (or not) is the extent to which the explanatory theory, including the implementation parameters, fits with the observations. In both categories, the objective is to generate hypotheses about combinations of components by formulating a theory based on scientific evidence, multidisciplinary expertise, and empirical investigations. If the theory is confirmed by empirical evidence, causality can be inferred.

Finally, the most arduous part of the process is defining the theory (or theories), which must be grounded in a scientific rationale. In fact, a theory is an organised set of constructs/variables designed to structure how we observe, understand, and explain the world. To be usable, the theory should explain how a programme produces effects (i.e. why and how the intervention works) by defining a set of explicit or implicit hypotheses. In the interventional system approach, this theory should incorporate the implementation processes, the contextual elements, the links between activities and the mechanisms they trigger, and the links between mechanisms and contextual factors. This interventional system theory (Cambon & Alla, 2021): 1) is explanatory, in considering which causal pathway is supposed to achieve the objective; 2) postulates hypotheses about the specific actions and implementation sequences that contribute to that pathway; and 3) considers that contextual elements and their influence exist and must be taken into account. As explained in Chapter 2, it is thus both a causal theory and a model of action.

This interventional system theory approach is not inconsistent with the theory of change or realistic evaluation. For example, in the theory of change, the focus is on the links between the intervention components, implementation, and outcomes. To embrace the concept of the interventional system, we only need to add the mechanisms of effects and the contextual elements. In realistic evaluation, the contextual elements and mechanisms are considered central to middle-range theories, but the interventional components are not (Box 10).

Box 10. Illustration – The TC-REG project

The TC-REG study aims to evaluate the conditions for successful knowledge transfer strategies about prevention implemented in local associations and regional health agencies (Affret et al., 2020; Cambon, Petit et al., 2017). In the TC-REG study, the final middle-range theories defined at the end of data collection include:
- external factors, called Ce (external context): e.g. initial stakeholder training, interest in disseminating knowledge transfer programmes, leader profiles, political support within the organisations, time required to study evidence, team size;
- interventional components, called Ci (interventional context): e.g. access to evidence, training, seminars, knowledge brokering activities;
- mechanisms (M) triggered by combining the two: perceived benefit of using evidence, motivation for evidence-based decision-making, self-efficacy in analysing and adapting evidence in practice, etc.;
- outcomes (O): use of evidence in practice and in decision-making.

Realistic trials are hybrid research designs proposed by Bonell and colleagues, aimed at combining the respective advantages of experimental and realistic approaches (Bonell et al., 2012). Realistic evaluation is integrated into an experimental research design with a view to ensuring that the evaluation refines and tests hypotheses about how the context interacts with the intervention mechanisms to generate outcomes. This hybrid research design has been criticised not only from an epistemological standpoint (see the introduction to this chapter) but also from an operational standpoint: can a randomised trial offer a sufficiently heterogeneous range of situations to construct and validate CMO configurations (Marchal et al., 2012)?

Finally, it is essential to study this theory throughout the innovation process, from the pilot study through to the dissemination of the intervention, including the design of the intervention and the evaluation of the conditions for its effectiveness. In the Ocaprev study (Aromatario et al., 2019), for example, the theory-based approach made it possible to design an evidence-based and theory-based health application as part of a pilot study before the application was developed and evaluated. In the ee-TIS study (Cambon, Bergman, et al., 2017), the randomised controlled trial includes a contribution analysis to evaluate a smoking-cessation application (Tabac Info Service); this is currently in the evaluation stage. In addition to collecting results, the study aims to understand how each activity proposed by the app contributes to smoking cessation through the mechanisms triggered (e.g. self-efficacy, perceived utility, confidence in the application) and the contextual parameters that can influence smoking cessation (e.g. the smoking status of the domestic partner, the presence of children, support from others in quitting, family, social, or professional events, etc.).

The defined theory is validated through multiple data collection processes, which may be qualitative, quantitative, or mixed (Creswell, 2009). There are no specific rules; the choice depends on the study design and/or the desire to combine several methods. For example, outcomes or change objectives can be collected through questionnaires or through secondary use of health data, as in any other study. Mechanisms and contextual conditions, on the other hand, are more likely to be collected qualitatively, through interviews (especially for mechanisms) or observation (especially for contextual conditions). However, if a primary qualitative survey is used to identify contextual elements and mechanisms that remain to be validated on a large scale, mixed methods (Qual-Quan) or analytical methods such as structural equation modelling can be applied to validate the theory (Beran & Violato, 2010).
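As one hedged example of the quantitative side (variable names and effect sizes are ours, purely illustrative), a hypothesised mechanism can be probed with a simple two-regression mediation check: does the intervention move the mechanism, and does the mechanism carry part of the effect on the outcome, given the intervention?

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 1000

intervention = rng.integers(0, 2, n)                          # X: exposed or not
self_efficacy = 0.5 * intervention + rng.normal(0.0, 1.0, n)  # M: assumed mechanism
outcome = 0.8 * self_efficacy + 0.1 * intervention + rng.normal(0.0, 1.0, n)  # Y

df = pd.DataFrame({"x": intervention, "m": self_efficacy, "y": outcome})

print(smf.ols("m ~ x", df).fit().params.round(2))      # does X move the mechanism?
print(smf.ols("y ~ x + m", df).fit().params.round(2))  # does M carry the effect, given X?
```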

Conclusion

In this chapter, the aim has been to show the range of approaches that can be used in PHIR. The choice of the most suitable research design is a rigorous process that encompasses the epistemological stance, the specificity of the research questions being investigated, the feasibility and pragmatism that make it possible to conduct the research in the best conditions while validating the hypotheses, and the desire to produce socially valuable results. Thus, the research designs are situated along a continuum graded according to these different dimensions, and combining them can be a good option for satisfying the various requirements of PHIR (see Chapter 1): balance between originality and social utility; context as an element influencing the outcome; the participatory dimension of research (see Chapter 5); the plurality of questions, with no hierarchy; and consequently, multidisciplinarity and the acceptance of all methods.

