The MITRE Corporation
Models in Multi-Agency C2 Experiment Lifecycles: The Collaborative Experimentation Environment as a Case Study

Anthony J. Bigbee, Jonathan A. Curtiss, Laurie S. Litwin, Michael T. Harkin

semanticscholar (2010)

Abstract
We present the Collaborative Experimentation Environment (CEE) as a case study for the use of models to support multi-agency C2 experimentation lifecycles. The CEE is a distributed capability and means for designing and conducting joint Net Centric Experiments (NETEXs) whose goal is to explore multi-agency coordination and mission effectiveness in shared missions such as national disaster response or responding to in-flight security incidents involving terrorism. From the inception of the project, the team has used models in deliberate ways across experiment lifecycles, from experiment conception and design to post-hoc analysis. In this paper, our goal is to broadly describe our major model types: information flow/decision, event response, scenario, domain simulation, data collection and analysis, and architecture. Using this taxonomy, we review the elements of each model and provide examples. We then describe three completed experiments and identify crucial roles for models within those experiments. We conclude by offering some general lessons learned and identifying future work.

Introduction

This paper presents MITRE's Collaborative Experimentation Environment (CEE) as a case study for the use of models to support multi-agency C2 experimentation. We use the term models in the broadest sense: any artifact (computational or descriptive/diagrammatic) shared within the team and with stakeholders that represents a process, concept, or scenario. For some members of the C2 community, models are strictly associated with simulation; this perspective pervades prominent C2 experiment codes such as Kass (2006) and Alberts (2002), and the MORS Experimentation Community of Practice Experimentation Lexicon does not define the term model. From the inception of the CEE project, the team has used models in deliberate ways across experiment lifecycles, from experiment conception and design, through experiment execution, to post-hoc analysis.

We believe that this paper contributes to the practice of multi-agency C2 experimentation; it is intended as a report from the field, resulting from two years of experimentation involving military, government, commercial, and other organizations that conduct operations in shared mission space. Our intent is to share a methodology and lessons learned in the spirit of the C2 community "[conducting] better experiments, develop a culture of experimentation, and sharing...the lessons learned" (Alberts and Hayes 2005). We briefly describe the CEE project and the experimental methodology, but focus the majority of this paper on what models we created, why we created them, and lessons learned. We do not advocate CEE experiments as the only or best way to conduct multi-agency experiments.

MITRE's CEE is a distributed capability and means for designing and conducting joint Net Centric Experiments (NETEXs) where the goal is to explore multi-agency mission effectiveness. The NETEXs are human-in-the-loop discovery experiments (Alberts and Hayes 2005) that often use low- to medium-fidelity dynamic simulation, scaled to fit between tabletop and command-post experiments or events.
Each NETEX environment is intended to reflect real-world coordination and collaboration issues in which several agencies or organizations (including private and non-governmental ones) must execute overlapping missions; one example is an in-flight security incident over North American airspace. Because the missions involve multiple agencies, no single individual or group has complete knowledge of the relevant domain, procedures, and policies; rather, each agency holds a piece of the puzzle. Models depicting complex or complicated processes have proven useful for eliciting domain knowledge from stakeholders as well as documenting that knowledge for other team members. Because developing each experiment is a collaborative endeavor, attaining a shared understanding within the team is crucial. More importantly, imparting this larger, joint understanding of the domain has proven beneficial to participating agencies and has resulted in operational procedure changes and policy deliberations.

Experimentation Approach

Expectations regarding close collaboration and effective coordination between agencies in many shared mission areas have grown. These expectations, major Federal organizational evolution, and continued criticism and identification of gaps in Federal coordination (GAO 2007) inspired the formation of the CEE. A key feature of the CEE is that each experiment features a new technology capability concept, new organizational structures and processes, or both. Since each experiment involves human-in-the-loop interaction within and between cells and organizations, and participants are allowed significant decision-making freedom and creativity, these discovery experiments (Kass 2006) do not completely follow the classical precepts found in academic psychology laboratories.

The CEE experiment process includes events that are not pure experiment trials. In particular, a lightweight tabletop event usually takes place two to three months prior to the actual experiment, during which the experiment concept, objectives, vignette phases, scenario elements, and roles and responsibilities are presented. This event is used to refine hypotheses, fill gaps in the team's knowledge, build consensus, and elicit participation in the experiment itself.

Although we strive for rigor via elements such as hypotheses, independent or controlled variables, and measurement and instrumentation, there are no repeated trials, there is only one group of subjects, and full factorial designs are impossible. This results in some loss of control and introduces internal validity concerns. Many of these limitations stem from the use of subject matter experts with deep domain knowledge drawn from watch floors and operational cells; these participants are difficult to obtain, have limited time available, and may not be able to stay for the duration of a multi-day event. As a result, we often must choose a design that reduces one internal validity threat while increasing another. With respect to single-group threats (Trochim 2006), for example, we usually present a new technology concept or process first and then remove it to see whether performance and outcomes are approximately equal or lower, in order to address maturation/learning threats. Mortality threats increase, however, because we risk losing participants to other commitments during later vignettes. The role of cognitive performance in C2 behavior is not completely accessible to observers or to the participants themselves.
Thus, our measurement and instrumentation approach is a blend of objective and subjective techniques in which we seek to triangulate on the causes of, and relationships between, behavior and outcomes. Finally, one of the CEE goals is for participants and stakeholder observers to learn from the experience and modify local policies and procedures as they deem appropriate. This goal is not represented in classical experiment designs.

Model Types

In this paper, we present a simple taxonomy comprising six major model types. We then summarize three NETEXs and discuss how models influenced the design, preparation, conduct, and post-hoc analysis phases.

Model Taxonomy and Development

The word model has strong connotations for certain communities; we use the term in a broad sense to mean any abstraction of a process, system, or behavior that is expressed in an artifact intended to be shared, or that is used as part of a system for executing mission tasks in our experiments. Over the course of the CEE NETEXs, we have created six types of models:

1. Information flow/decision
2. Event response
3. Scenario
4. Domain simulation
5. Data collection and analysis
6. Architecture

Five of these types are descriptive (and can be construed as conceptual); the sixth type, domain simulation, can be predictive, descriptive, or both. Figure 1 below depicts a typical flow of model development for each model type across the four major phases of CEE experiment development.

Figure 1. Model Development

A discussion of each of these model types follows. Illustrations for each model are included to give the reader a feel for the model's nature; detailed content is not important for the purposes of this paper. For additional details and descriptions, Maroney et al. (2009) describe a CEE experiment examining Unmanned Aircraft Systems (UAS) multi-agency coordination in hurricane events.

Information Flow Models

Information Flow Models are meant to depict relationships between potential participants. They are used to explore the domain, scope the experiment, refine hypotheses, and act as a reference for other models. A Node Information Flow Model is used to capture a common understanding of the existing real-world relationships between the organizations involved in the experiment. This model is created early in the experiment design process and is maintained throughout the duration of the experiment. The format of Node Information Flow Models is loosely based on concept-mapping norms: the nodes (boxes) are concepts, generally nouns, and the arrows connecting the nodes are the inter-relations between them, generally verb phrases. For NETEX development, the nodes are people, organizations, physical locations, or other objects related to the experiment domain, and the arrows indicate the flow of information (e.g., sharing, need), responsibility relationships, or other relations. The model is developed and refined through meetings, often in real time, with internal domain experts and with the organizations expected to take part in the experiment, to ensure that it reflects the
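To make the concept-map structure described above concrete, the following is a minimal sketch of how a Node Information Flow Model could be encoded as a labeled directed graph, with noun-phrase nodes and verb-phrase edge labels. This is an illustrative assumption on our part, not an implementation from the paper; the node names ("Agency A watch floor", etc.) and relation labels are hypothetical placeholders, and Python is used only for illustration.

```python
# Minimal sketch (not from the paper): a Node Information Flow Model as a
# labeled directed graph. Nodes are organizations, people, or locations
# (nouns); edges carry verb-phrase labels describing information flow or
# responsibility relationships.
from dataclasses import dataclass, field

@dataclass
class InformationFlowModel:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)  # (source, relation, target) triples

    def add_relation(self, source: str, relation: str, target: str) -> None:
        """Record that `source` relates to `target` via a verb-phrase label."""
        self.nodes.update({source, target})
        self.edges.append((source, relation, target))

    def relations_from(self, source: str):
        """List outgoing relations for a node, e.g. what an agency shares."""
        return [(rel, tgt) for src, rel, tgt in self.edges if src == source]

# Hypothetical nodes and relations, for illustration only.
model = InformationFlowModel()
model.add_relation("Agency A watch floor", "shares incident status with",
                   "Agency B operations cell")
model.add_relation("Agency B operations cell", "requests imagery from",
                   "State emergency office")

for relation, target in model.relations_from("Agency A watch floor"):
    print(f"Agency A watch floor {relation} {target}")
```

A triple-based representation along these lines keeps the model easy to query and revise during design meetings, which is consistent with the paper's use of the Information Flow Model as a living reference for the other model types.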