
Proceedings of the fourth Resilience Engineering Symposium

Erik Hollnagel, Éric Rigaud, Denis Besnard

How Human Adaptive Systems Balance Fundamental Trade-offs: Implications For Polycentric Governance Architectures

David D. Woods and Matthieu Branlat

Abstract

Investigations into complex adaptive systems (CAS) have identified multiple trade-offs that place hard limits on the behavior of adaptive systems of any type. Complexity theory continues to search for a formalization that can unify these trade-offs around one or a few fundamental ones and explain how observed trade-offs are derived from the most basic ones (Alderson and Doyle, 2010). Resilience Engineering (RE) also arose from the recognition that basic trade-offs placed hard limits on the safety performance of teams and organizations in the context of pressures for systems to be “faster, better, cheaper” (Woods, 2006; Hollnagel, 2009). Combining the results from CAS on physical complex systems with the results from RE on high-risk, high-consequence human-designed systems leads to a potential unification. The unification consists of (a) five basic trade-offs that bound the performance of all human adaptive systems (Hoffman and Woods, 2011), and (b) an architecture for polycentric control or governance based on regulating margin of maneuver in order to dynamically balance the conflicts, risks, and pressures that arise from the fundamental trade-offs.

Full text

1 Introduction

Investigations of complex adaptive systems have identified fundamental trade-offs that bound the performance of adaptive systems. Based on studies of biological and physical systems, Doyle (Doyle, 2000; Csete and Doyle, 2002) provided a proof that the pursuit of increases in optimality with respect to some criteria guaranteed an increase in brittleness with respect to changes or variations that fell outside of those criteria—a trade-off between optimality and fragility (or, in Doyle’s terminology, “robust yet fragile” (RYF) systems). In parallel, work on proactive safety management and the emergence of Resilience Engineering also identified basic trade-offs that bounded the performance of organizations that carry out risky activities: fundamental trade-offs between acute and chronic goals and between efficiency and thoroughness criteria (Brown, 2005; Woods, 2006; Hollnagel, 2009; Woods, 2009; Grote, 2009).
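One concrete instance of this conservation, drawn from classical control theory rather than from Doyle's general argument, is Bode's sensitivity integral: for a stable feedback loop whose open-loop transfer function L(s) rolls off fast enough (relative degree of at least two), the log of the sensitivity magnitude integrates to zero over frequency,

    \int_0^{\infty} \ln \lvert S(j\omega) \rvert \, d\omega = 0, \qquad S(s) = \frac{1}{1 + L(s)},

so disturbance attenuation purchased over the design envelope (|S| < 1 there) must be repaid by amplification somewhere outside it (|S| > 1 elsewhere). Robustness bought against anticipated variations is paid for with fragility to unanticipated ones.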

First, this paper presents the five fundamental trade-offs that Hoffman and Woods have proposed as a unification of the different proposals for bounds (hard limits) on the performance of all human adaptive systems (Hoffman and Woods, 2011). Second, the paper discusses how the Five Bounds provide a potential unification for the class of human adaptive systems, including how the trade-offs produce some of the observed lawful behavior of human adaptive systems. Third, the paper explores the implications of the Five Bounds for Resilience Engineering in terms of the potential to develop polycentric control architectures.

2 Five Bounds

2.1 Bounded Ecology – the optimality-fragility trade-off

Doyle’s original optimality-fragility trade-off is labeled Bounded Ecology—an adaptive system can never completely match its environment; there are always gaps in fitness. As in biological systems, there is an ongoing struggle for fitness that can ease or intensify as changes occur and adaptations develop. This omnipresent kind of gap creates an impetus to develop resilience and avoid brittleness, that is, the need to be able to degrade gracefully in the face of surprise.

2.2 Bounded Cognizance – the efficiency-thoroughness trade-off

Hollnagel’s efficiency-thoroughness trade-off is labeled Bounded Cognizance—algorithms, embodied in any form, operate with finite resources and thus are fallible. This expresses the fact that there are always gaps in plans, models, and procedures relative to the situations where they would be implemented to achieve goals. There are always challenges in bringing knowledge to bear in a context (deploying knowledge to effect); these processes cannot be treated as automatic. These gaps lead to “effort after meaning”, to quote Bartlett and Bruner, or an impetus to learn to adjust plans to fit the situations actually at hand. (Note: we use bounded cognizance to distinguish the process from the usual interpretations associated with Simon’s bounded rationality label.)
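A minimal sketch of this bound (ours, not from the paper): a Monte Carlo estimator in Python whose thoroughness is its sample budget. Error shrinks only as 1/sqrt(n) while cost grows linearly in n, so any finite allocation of effort leaves a residual gap between the estimate and the world.

    import random
    import statistics

    def estimate_demand(n_samples: int) -> float:
        """Estimate mean demand from a noisy process under a finite sample budget."""
        true_mean = 10.0  # hypothetical ground truth, unknown to the estimator
        samples = [random.gauss(true_mean, 3.0) for _ in range(n_samples)]
        return statistics.mean(samples)

    # More thoroughness (samples) buys accuracy at linear cost, with
    # diminishing returns -- the efficiency-thoroughness trade-off in miniature.
    for budget in (10, 100, 1000, 10000):
        est = estimate_demand(budget)
        print(f"budget={budget:>6}  estimate={est:.3f}  error={abs(est - 10.0):.3f}")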

2.3 Bounded Perspectivity – the specialist-generalist trade-off

Bounded Perspectivity refers to fundamental limits on an agent’s ability to see and assess the world around them. Agents, at any level of abstraction, occupy a point of observation relative to the world they are embedded in, and this relationship defines a perspective (Morison et al., 2009). As Woods has put it to summarize the research, “the view from any single point of observation simultaneously reveals and obscures aspects of the world.” Disambiguation arises from the ability to shift and contrast perspectives (Morison, 2010; Woods and Sarter, 2010). Interestingly, models of complex systems have also found it necessary to introduce the concept of perspective as a basic parameter (Page, 2007). Since there is never one all-encompassing or omnipresent view of the environment, gaps arise between what is perceived from one perspective and what would be perceived and apprehended from another. This means there is an invitation for reflection, that is, to step out of the current perspective to see the situation in contrast to the previous point of observation (note this definition of reflection as what is revealed by the contrasts that result from different kinds of perspective shifts). The ability to shift and contrast perspectives has proven essential to coordinated activity and collaborative work (Smith et al., 2010). Situations change in how strongly they signal the need to shift perspectives to reveal what had been hidden.
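As a toy illustration (hypothetical worlds and observations, not from the paper), consider two observers who each perceive only one dimension of a hidden state. Either view alone leaves the world ambiguous; contrasting the two perspectives disambiguates.

    # Hypothetical hidden worlds, each a (location, status) pair.
    WORLDS = {("north", "safe"), ("north", "failed"),
              ("south", "safe"), ("south", "failed")}

    def consistent(worlds, observation):
        """Return the worlds that a partial observation cannot rule out."""
        key, value = observation
        idx = 0 if key == "location" else 1
        return {w for w in worlds if w[idx] == value}

    # Each single perspective reveals one dimension and obscures the other.
    view_a = consistent(WORLDS, ("location", "north"))  # two worlds remain
    view_b = consistent(WORLDS, ("status", "failed"))   # two worlds remain

    # Contrasting (here, intersecting) perspectives disambiguates.
    print(view_a & view_b)  # {('north', 'failed')}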

2.4 Bounded Responsibility – the acute-chronic trade-off

Bounded Responsibility arises because the responsibility and risks associated with achieving or failing to achieve goals are divided over roles at different levels or echelons of a system (differential responsibility). All systems pursue multiple goals that interact and can conflict. As a result, gaps arise across roles as different parts of a distributed system are differentially responsible for different subsets of goals. This means that all systems are simultaneously cooperative over shared goals and competitive where goals conflict. Most critically, conflict arises between the family of acute goals—timely, efficient, effective (or, after NASA’s policy, the Faster, Better, Cheaper goals)—and the family of chronic goals (such as safety or equity).

2.5 Bounded Effectivity – the distributed-concentrated trade-off

Bounded Effectivity arises because adaptive systems are restricted in the ways they can act on the world and influence processes underway. No single controller is omnipotent. Given that there is always some potential for surprise, all systems balance distant plans with local adaptations that fit responses to actual conditions so as to make progress toward goals. Thus there are multiple centers of control working in parallel. A center of control has a partial or bounded scope of authority for adapting to meet sub-goals within a context of other centers. “Local” centers make direct contact with sources of variability, which gives them a privileged ability to pick up on surprises, disruptions, and opportunities relative to plans in progress. “Distant” centers of control provide broader perspectives over time, space, and multiple functions, which allows them to see how to coordinate activities to achieve larger goals under tighter pressures. Control is polycentric when overall performance results from interactions across activities carried out in different centers and overall goals are achieved through work to achieve partial goals at each center. Bounds on acting to generate progress toward goals arise from the trade-off between distributing authority, initiative, and autonomy across centers and concentrating them in a single center.
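A minimal structural sketch of such a polycentric arrangement (class and role names are ours, for illustration only): each center holds partial authority and must escalate disruptions that fall outside its scope to a neighboring center.

    from dataclasses import dataclass, field

    @dataclass
    class Center:
        """One center of control with partial scope and authority."""
        name: str
        authority: set          # actions this center may take on its own
        neighbors: list = field(default_factory=list)

        def handle(self, disruption: str) -> str:
            """Adapt locally if authorized; otherwise escalate to a neighbor."""
            if disruption in self.authority:
                return f"{self.name} adapts locally to '{disruption}'"
            for other in self.neighbors:
                if disruption in other.authority:
                    return f"{self.name} escalates '{disruption}' to {other.name}"
            return f"'{disruption}' exceeds every center's scope: brittleness risk"

    # Hypothetical hospital example: a local and a distant center of control.
    bedside = Center("bedside", {"re-triage", "call for help"})
    command = Center("command", {"reallocate beds", "divert arrivals"})
    bedside.neighbors.append(command)
    print(bedside.handle("reallocate beds"))  # beyond local authority: escalated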

3 Unifying Fundamentals

The trade-offs capture hard limits on the performance of all human systems modeled as adaptive systems. Human adaptive systems are goal-directed (intentional) and have the potential to reflect on performance and risks, to consider possible future conditions, to actively learn new approaches from past experiences, and to redesign themselves—all while under pressure from others to achieve multiple conflicting goals. Human adaptive systems use the above processes to find a balance across the trade-offs based on the local situation, history, signals, context, goals, and risks (human adaptive systems exist in, and adjust their position in, the state space defined over the trade-offs).

First, the hard limits define a boundary in the state space. No one place on the boundary is ideal in general; rather, positions along the boundary represent different solutions to the problem of balancing over trade-offs. As conditions change, the relative costs and benefits of different positions change.
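The boundary can be pictured as a Pareto frontier over conflicting criteria. In the sketch below (criteria and scores are hypothetical), dominated operating points sit far from the hard limits, since one criterion can still improve at no cost to the other, while frontier points can only trade one against the other.

    def pareto_frontier(points):
        """Keep the points not dominated on both criteria (higher is better)."""
        return [p for p in points
                if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

    # (efficiency, thoroughness) scores for hypothetical operating points
    ops = [(0.9, 0.2), (0.7, 0.5), (0.4, 0.8), (0.5, 0.4), (0.2, 0.3)]
    frontier = pareto_frontier(ops)
    print("on the boundary:", frontier)                           # trade positions
    print("far from it:", [p for p in ops if p not in frontier])  # improvable for free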

Second, naturally occurring adaptive systems appear to have evolved toward the hard-limit boundary. Investigations of natural systems eventually discover that these systems are exquisitely balanced on the hard limits defined by the trade-offs. These discoveries have led Doyle and others to ask what fundamental architectural principles allow evolvability (Alderson and Doyle, 2010; Doyle, in press).

Third, the performance of human adaptive systems is often located far from the hard limits. Misdiagnoses of how poor performance arises lead stakeholders to deploy interventions that remain far from the hard-limit boundary. As a result, these interventions produce a diverse and difficult-to-interpret set of effects: some positive effects result, but these are offset by surprising unintended consequences and sudden system collapses (e.g., accidents such as the Columbia space shuttle; Woods, 2005).

The above have led some to look for the architectural principles that can be used to manage human adaptive systems. First, how can one recognize the signs that a system is operating far from the hard limits, and intervene in ways that will move the system toward that performance boundary? This has been stated as (a) the problem of discriminating inefficiencies from sources of resilience, and (b) the problem of estimating whether a system is becoming increasingly brittle over time. Second, how can one recognize the need, and have the capability, to move a system’s operating point along the hard-limit boundary as conditions change? In natural adaptive systems this is defined as the search for architectures for evolvability (Kirschner and Gerhart, 2006). For human adaptive systems, the challenge has become defined as how to design polycentric governance systems (Dietz, Ostrom and Stern, 2003).

4 Polycentric Control Architectures

Human adaptive systems adjust their position relative to the trade-offs, in part governed by regularities that apply to all adaptive systems and in part based on processes specific to intentional and reflective systems. A system can be poorly positioned in the trade space in two senses: (i) the current operating point is far from the hard-limit boundary and the system could adapt to move closer; (ii) the current operating point risks poor performance in the near future and the system should adapt and re-balance its position relative to the trade-offs (e.g., patterns of resilience and brittleness in hospital emergency departments, Wears et al., 2008; patterns of anticipation, Woods, 2010; or patterns of maladaptive behavior, Woods and Branlat, 2010a).

The behavior and performance of human adaptive systems emerge from how multiple centers of adaptive behavior carry out their roles and interact relative to the five bounds. Each center has partial autonomy (the ability to carry out activities on its own), partial authority to deploy and adapt plans, and partial responsibility to meet the goals within its scope relative to other centers and to the larger system the centers are part of (Ostrom, 1990; 1999). Different kinds of constraints interconnect centers, making each center dependent on how other centers carry out activities, shift priorities, and achieve outcomes (Woods and Branlat, 2010a; 2010b). Polycentric systems vary in how the different centers regulate and coordinate their activities relative to other centers. Research is building up a set of critical properties of polycentric systems, such as reciprocity (Ostrom, 2003), commitment to build common ground and to align goals across centers and levels (Klein et al., 2005), accountability systems, the ability to shift forms of coordination across centers (Smith et al., 2010; Nyssen, 2010), the ability to anticipate bottlenecks ahead (Woods, 2010), and how initiative is delegated and regulated.

Two critical challenges are being explored in current research: how to regulate the interactions across centers in polycentric systems—polycentric governance or control; and what underlying architectural principles lead to resilience in polycentric systems, i.e., the ability to adapt the system’s position in the trade space (Doyle, in press).

We have proposed that polycentric governance is based on regulating margin of maneuver (Woods and Branlat, 2010a; 2010b). Each center of adaptive behavior works to create, maintain, and manage its margin of maneuver: a cushion of potential actions and additional resources that allows the system to continue functioning despite unexpected demands. Failure to maintain margin leaves the system too brittle and increases the risk of falling into maladaptive traps. In addition, Stephens et al. (2011) have identified basic locally adaptive patterns for how one center manages its margin when its actions interact with other centers’ efforts to manage theirs.
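A rough sketch of the regulation idea (the threshold scheme and names are ours, not the authors' model): a center tracks its reserve capacity and refuses or defers work that would consume the cushion, signaling the need to re-balance with other centers.

    class CenterMargin:
        """Track one center's margin of maneuver: capacity held in reserve
        as a cushion against unexpected demands."""

        def __init__(self, capacity: float, reserve_target: float):
            self.capacity = capacity
            self.committed = 0.0
            self.reserve_target = reserve_target  # cushion the center tries to keep

        @property
        def margin(self) -> float:
            return self.capacity - self.committed

        def accept(self, demand: float) -> bool:
            """Take on work only if the target margin survives; otherwise
            refuse -- a signal to defer, escalate, or recruit other centers."""
            if self.margin - demand >= self.reserve_target:
                self.committed += demand
                return True
            return False

    ed = CenterMargin(capacity=20.0, reserve_target=4.0)
    for demand in (6.0, 7.0, 5.0):
        print(f"demand={demand}: accepted={ed.accept(demand)}, margin={ed.margin}")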

Research directions now underway are testing the potential for margin of maneuver to provide a unifying concept for modeling polycentric interactions and governance. The research is testing how the different properties of polycentric systems can be modeled by the dynamics of regulating margin of maneuver. The next step will be to develop the tools to specify normative models.


References

Alderson, D. L. & Doyle, J. C. (2010). Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Systems, Man and Cybernetics, Part A, 40(4), 839-852.

Brown, J. P. (2005). Key themes in healthcare safety dilemmas. In M. S. Patankar, J. P. Brown, & M. D. Treadwell (Eds.), Safety Ethics: Cases from Aviation, Healthcare, and Occupational and Environmental Health (pp. 103-148). Aldershot, UK: Ashgate.

Csete, M.E. and Doyle, J.C. (2002). Reverse engineering of biological complexity. Science, 295, 1664–1669.

Dietz, T., Ostrom, E., & Stern, P. C. (2003). The struggle to govern the commons. Science, 302(5652), 1907.

Doyle, J. C. (2000). Multiscale networking, robustness, and rigor. In T. Samad & J. Weyrauch (Eds.), Automation, Control, and Complexity: An Integrated Approach (pp. 287-301). New York: John Wiley & Sons.

Doyle, J. C. (in press). Architecture, constraints, and behavior. Science.

Grote, G. (2009). Management of Uncertainty: Theory and Application in the Design of Systems and Organizations. London: Springer-Verlag.

Hoffman, R. R. & Woods, D. D. (2005). Steps toward a theory of complex and cognitive systems. IEEE Intelligent Systems, January/February, 76-79.

Hoffman, R. R. and Woods, D. D. (2011). Simon’s Slice: Five Fundamental Trade-offs that Bound the Performance of Human Work Systems. 10th International Conference on Naturalistic Decision Making, Orlando, FL, May 31-June 3, 2011.

Hollnagel, E. (2009). The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong. Aldershot, UK: Ashgate.

Kirschner, M. and Gerhart, J. C. (2006). The Plausibility of Life: Resolving Darwin’s Dilemma. Yale University Press, New Haven.

Klein, G., Feltovich, P., Bradshaw, J., and Woods, D. D. (2005). Common ground and coordination in joint activity. In W. Rouse and K. Boff (Eds.), Organizational Simulation (pp. 139-184). Chichester: Wiley.

Morison, A. et al. (2009). Integrating Diverse Feeds to Extend Human Perception into Distant Scenes. In P. McDermott (Ed.), Advanced Decision Architectures for the Warfighter: Foundation and Technology. Alion Science.

Morison, A. (2010). Perspective Control: Technology to Solve the Multiple Feeds Problem in Sensor Systems. Unpublished doctoral dissertation, The Ohio State University, August 2010.

Nyssen, A. S. (2010). From myopic coordination to resilience in socio-technical systems: a case study in a hospital. In E. Hollnagel, Paries, J., Woods, D.D., and Wreathall, J., Eds., Resilience Engineering in Practice. Ashgate, Aldershot, UK.

Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. New York: Cambridge University Press.

Ostrom, E. (1999). Coping with tragedies of the commons. Annual Reviews in Political Science, 2(1):493–535.

Ostrom, E. (2003). Toward a Behavioral Theory Linking Trust, Reciprocity, and Reputation. In E. Ostrom and J. Walker (Eds.), Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research. New York: Russell Sage Foundation.

Page, S. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton, NJ: Princeton University Press.

Simon, H.A. (1981). The Sciences of the Artificial. Cambridge MA: MIT Press.

Smith, P. J., Spencer, A. L. and Billings, C. (2010). The design of a distributed work system to support adaptive decision making across multiple organizations. In K. L. Mosier and U. M. Fischer (Eds.), Informed by Knowledge: Expert Performance in Complex Situations (pp. 139-152). New York: Taylor and Francis.

Stephens, R. J., Woods, D. D., Branlat, M. and Wears, R. L. (2011). Colliding Dilemmas: Interactions of Locally Adaptive Strategies in a Hospital Setting. Fourth International Symposium on Resilience Engineering, Sophia Antipolis, France, June 8-10, 2011.

Watts-Perotti, J. and Woods, D. D. (2009). Cooperative Advocacy: A Strategy for Integrating Diverse Perspectives in Anomaly Response. Computer Supported Cooperative Work: The Journal of Collaborative Computing, 18(2), 175-198.

Wears, R. L., Perry, S., Anders, S. and Woods, D. D. (2008). Resilience in the Emergency Department. In E. Hollnagel, C. Nemeth and S. W. A. Dekker, eds., Resilience Engineering Perspectives 1: Remaining sensitive to the possibility of failure. Ashgate, Aldershot, UK, pp. 193-209.

Woods, D.D. (2002). Steering the Reverberations of Technology Change on Fields of Practice: Laws that Govern Cognitive Work. Proceedings of the 24th Annual Meeting of the Cognitive Science Society.

Woods, D. D. (2006). Essential characteristics of resilience. In E. Hollnagel, D. D. Woods, & N. Leveson (Eds.), Resilience Engineering: Concepts and Precepts (pp. 19-30). Aldershot, UK: Ashgate.

Woods, D. D. (2009). Escaping Failures of Foresight. Safety Science, 47(4), 498-501.

Woods, D. D. (2010). Resilience and the Ability to Anticipate. In E. Hollnagel, Paries, J., Woods, D.D., and Wreathall, J., Eds., Resilience Engineering in Practice. Ashgate, Aldershot, UK.

Woods, D. D. and Branlat, M. (2010a). How Adaptive Systems Fail. In E. Hollnagel, Paries, J., Woods, D.D., and Wreathall, J., Eds., Resilience Engineering in Practice. Ashgate, Aldershot, UK, pp. 127-143.

Woods, D. D. and Branlat, M. (2010b). Hollnagel’s test: being ‘in control’ of highly interdependent multi-layered networked systems. Cognition, Technology, and Work, 12(2), 95-101.

Woods, D. D., & Hollnagel, E. (2006). Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton, FL: Taylor & Francis/CRC Press.

Woods, D. D. and Sarter, N. B. (2010). Capturing the Dynamics of Attention Control From Individual to Distributed Systems. Theoretical Issues in Ergonomics, 11(1), 7-28.

Authors

David D. Woods, Center on Complexity in Natural, Social and Engineered Systems, The Ohio State University, 1971 Neil Ave, Columbus, OH 43202, USA. woods.2@osu.edu

Matthieu Branlat, Center on Complexity in Natural, Social and Engineered Systems, The Ohio State University, 1971 Neil Ave, Columbus, OH 43202, USA. branlat.2@osu.edu

