EMS Efficiency Measurement System User's Manual

An energy management system (EMS) is a system of computer-aided tools used by operators of electric utility grids to monitor, control, and optimize the performance of the generation or transmission system. It can also be used in small-scale systems such as microgrids.[1][2]

Commercial products such as GE's XA/21™ illustrate the core need an EMS addresses: to manage the transmission grid and energy generation optimally, in a reliable and secure manner, increasing overall transmission grid reliability by proactively minimizing blackouts and meeting stringent security requirements through a comprehensive, integrated, secure system. Performance measurement, in this context, is the regular measurement of the results or outcomes and efficiency of services or programs; a tool to create accountability for results and to improve performance; and a government's way of determining whether it is providing a quality product at a reasonable cost.

Terminology

The computer technology is also referred to as SCADA/EMS or EMS/SCADA. In these usages, the term EMS excludes the monitoring and control functions and refers more specifically to the collective suite of power network applications and to the generation control and scheduling applications.

Manufacturers of EMS also commonly supply a corresponding dispatcher training simulator (DTS). This related technology makes use of components of SCADA and EMS as a training tool for control center operators.

Operating systems

Up to the early 1990s it was common to find EMS systems delivered on proprietary hardware and operating systems. Back then, EMS suppliers such as Harris Controls (now GE), Hitachi, Cebyc, Control Data Corporation, Siemens and Toshiba manufactured their own proprietary hardware. EMS suppliers that did not manufacture their own hardware often relied on products developed by Digital Equipment, Gould Electronics and MODCOMP. The VAX 11/780 from Digital Equipment was a popular choice among some EMS suppliers. EMS systems now rely on a model-based approach. Traditional planning models and EMS models were historically maintained independently and were seldom in sync with each other. EMS software that lets planners and operators share a common model reduces the mismatch between the two and cuts model maintenance effort roughly in half. A common user interface also eases the transfer of information from planning to operations.

As proprietary systems became uneconomical, EMS suppliers began to deliver solutions based on industry-standard hardware platforms such as those from Digital Equipment (later Compaq, then HP), IBM and Sun. The common operating system was then either DEC OpenVMS or Unix. By 2004, various EMS suppliers including Alstom, ABB and OSI had begun to offer Windows-based solutions. By 2006 customers had a choice of UNIX-, Linux- or Windows-based systems. Some suppliers, including ETAP, NARI, PSI-CNI and Siemens, continue to offer UNIX-based solutions, and it is now common for suppliers to integrate UNIX-based solutions on either the Sun Solaris or IBM platform. Newer EMS systems based on blade servers occupy a fraction of the space previously required; for instance, a blade rack of 20 servers occupies much the same space as a single MicroVAX server once did.

Other meanings

Energy efficiency

In a slightly different context, EMS can also refer to a system designed to achieve energy efficiency through process optimization by reporting on granular energy use by individual pieces of equipment. Newer, cloud-based energy management systems provide the ability to remotely control HVAC and other energy-consuming equipment; gather detailed, real-time data for each piece of equipment; and generate intelligent, specific, real-time guidance on finding and capturing the most compelling savings opportunities.
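
To make the per-equipment reporting idea concrete, here is a minimal Python sketch that aggregates interval meter readings by equipment and flags units running well above their historical baseline. It is an illustration only: the `EquipmentReading` structure, the `find_savings_candidates` name, and the 25% threshold are assumptions, not features of any particular EMS product.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class EquipmentReading:
    equipment_id: str  # hypothetical ID scheme, e.g. "HVAC-1"
    kwh: float         # energy consumed during one reporting interval

def find_savings_candidates(readings, baseline_kwh, threshold=1.25):
    """Flag equipment whose average interval consumption exceeds its
    historical baseline by more than `threshold` (hypothetical rule)."""
    by_equipment = defaultdict(list)
    for r in readings:
        by_equipment[r.equipment_id].append(r.kwh)
    return {eq: mean(vals) for eq, vals in by_equipment.items()
            if eq in baseline_kwh and mean(vals) > threshold * baseline_kwh[eq]}

# A rooftop HVAC unit averaging ~39% above its baseline gets flagged;
# the lighting circuit, only 5% above, does not.
readings = [EquipmentReading("HVAC-1", 12.0), EquipmentReading("HVAC-1", 13.0),
            EquipmentReading("LIGHTS-1", 2.1)]
print(find_savings_candidates(readings, {"HVAC-1": 9.0, "LIGHTS-1": 2.0}))
```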

Home energy management system

Home energy management (HEM) enables domestic consumers to take part in demand-side activities. However, it faces challenges arising from the uncertainty of renewable energy resources and of consumer behaviour, while domestic consumers still expect the highest level of comfort, which should be accommodated in a way that minimizes the "response fatigue" phenomenon.[3]

Automated control in buildings

The term Energy Management System can also refer to a computer system designed specifically for the automated control and monitoring of those electromechanical facilities in a building which yield significant energy consumption, such as heating, ventilation and lighting installations. The scope may span from a single building to a group of buildings such as university campuses, office buildings, retail store networks or factories. Most of these energy management systems also provide facilities for reading electricity, gas and water meters. The data obtained from the meters can then be used to run self-diagnostic and optimization routines on a frequent basis and to produce trend analyses and annual consumption forecasts.
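
As a sketch of the trend-analysis and forecasting step, the short Python example below fits an ordinary least-squares line to monthly meter readings and extrapolates a twelve-month consumption forecast. The function names and the toy data are assumptions; real building EMS products use their own, typically more sophisticated, models.

```python
def linear_trend(values):
    """Least-squares slope and intercept for equally spaced samples."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
             / sum((x - x_mean) ** 2 for x in range(n)))
    return slope, y_mean - slope * x_mean

def annual_forecast(monthly_kwh):
    """Project the next 12 months by extrapolating the fitted trend."""
    slope, intercept = linear_trend(monthly_kwh)
    n = len(monthly_kwh)
    return sum(intercept + slope * m for m in range(n, n + 12))

# Example: 18 months of readings drifting upward by ~2 kWh/month.
history = [310 + 2 * m for m in range(18)]
print(f"Forecast for next year: {annual_forecast(history):.0f} kWh")  # 4284 kWh
```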

Energy management systems are also commonly used by individual commercial entities to monitor, measure, and control their electrical building loads. They can centrally control devices like HVAC units and lighting systems across multiple locations, such as retail, grocery and restaurant sites, and can also provide metering, submetering, and monitoring functions that allow facility and building managers to gather the data and insight needed to make more informed decisions about energy activities across their sites.

See also

  • Energy management software, software to monitor and optimize energy consumption in buildings or communities
  • Energy storage as a service (ESaaS)
  • Load management, for balancing the supply of electricity on a distribution network

References

  1. "Communication Based Control for DC Microgrids – IEEE Journals & Magazine". ieeexplore.ieee.org. Retrieved 2018-05-05.
  2. "Energy management algorithm for resilient controlled delivery grids – IEEE Conference Publication". ieeexplore.ieee.org. Retrieved 2018-05-05.
  3. M. Shafie-Khah and P. Siano, "A stochastic home energy management system considering satisfaction cost and response fatigue," IEEE Transactions on Industrial Informatics, vol. 14, no. 2, pp. 629–638, 2018. doi: 10.1109/TII.2017.2728803

Measuring Quality in Emergency Medical Services: A Review of Clinical Performance Indicators

EMS and Prehospital Care Program, Department of Emergency Medicine, American University of Beirut Medical Center, P.O. Box 11-0236, Riad El Solh, Beirut 110 72020, Lebanon

Received 8 August 2011; Accepted 15 August 2011

Academic Editor: Stephen H. Thomas

Copyright © 2012 Mazen J. El Sayed. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Measuring quality in Emergency Medical Services (EMS) systems is challenging. This paper reviews the current approaches to measuring quality in health care and EMS, with a focus on clinical performance indicators currently used in EMS systems (US and international). The different types of performance indicators, the advantages and limitations of each type, and the evidence-based prehospital clinical bundles are discussed. This paper aims to introduce emergency physicians and health care providers to quality initiatives in EMS and serves as a reference for tools that EMS medical directors can use to launch new or modify existing quality control programs in their systems.

1. Background

Measuring quality in emergency medical services (EMS) is important since EMS is the practice of medicine in the prehospital setting. At its earliest developmental stage, an EMS system is a system of emergency care that functions to reduce death and disability, usually resulting from two epidemics: trauma and cardiac arrest. In the United States, EMS systems have witnessed a major transformation since the EMS program was first established in 1966 in the Department of Transportation through the Highway Safety Act. The expansion in EMS scope and the increase in the range of medical interventions performed by prehospital providers were paralleled by increased scrutiny of the value and effectiveness of the services deployed in the prehospital setting [1, 2]. The need for increased coordination in patient care and for higher quality care at lower cost has made it essential for EMS agencies to have quality control or quality improvement programs in place that rely on key performance indicators to continuously monitor the system's overall performance and the effectiveness of the different prehospital interventions.

The Institute of Medicine (IOM), in a report entitled "Emergency Medical Services at a Crossroads" and published in 2006, recommended the development of "evidence based performance indicators that can be nationally standardized so that statewide and national comparisons can be made" [3]. The development and implementation of these indicators would enhance accountability in EMS and provide EMS agencies with data to measure their system's overall performance and to develop sound strategic quality improvement planning. The objective of this paper is to introduce emergency physicians and other healthcare providers to the concepts of quality measurement in EMS with a focus on clinical performance indicators currently used by EMS agencies in the USA and by ambulance services in other countries such as the United Kingdom and Australia.

2. Quality Care in EMS: Definition and Challenges

A central premise is that the same principles of healthcare quality apply to EMS. Many definitions of quality in health care exist (the Donabedian and American Medical Association definitions of quality) [3, 4]; however, the most widely cited one, and the most applicable to EMS systems, is the definition formulated by the IOM. The IOM defined quality as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge" and described six dimensions of quality care: care that is safe, effective, patient centered, timely, efficient, and equitable [5]. When applied to EMS, the IOM concepts of quality care therefore entail a system design with a specific arrangement of personnel, facilities, and equipment that functions to ensure not only effective and coordinated delivery of health care services under emergency conditions but also high quality, appropriate care. This ideal system design is nonexistent, since most EMS systems evolved as a response to their communities' needs for emergent health care services (military conflicts, major highway trauma, and nontraumatic cardiac arrest) rather than as an a priori designed EMS infrastructure. The result is heterogeneity of existing EMS system designs [6, 7], making EMS systems complex and difficult to evaluate or compare.

Other EMS-specific challenges to systems evaluation include the lack of uniformity in data collection and the lack of agreement over the validity of the performance indicators or assessment measures used in EMS research [8, 9]. Adding to that, the existence of a broad range of EMS conditions (i.e., conditions that cause EMS activation) and the challenge of isolating the prehospital care effect from that of the emergency department and hospital care increase the complexity of measuring quality in EMS [10, 11].

3. Approaches to Quality Measurement

Initiatives to incorporate quality assessment in EMS, as in other healthcare settings, have adopted the frameworks and principles of quality management systems used in industry. The goal is to improve the end product and the customer's satisfaction. In EMS, the end product is the care provided to patients in the prehospital setting. Quality programs usually range from basic, traditional Quality Assurance (QA) to Continuous Quality Improvement (CQI) and complex Total Quality Management (TQM). Quality assurance is the static and retrospective review or inspection of services to identify problem areas [12, 13]. Quality improvement requires continuous study and improvement of a process, system, or organization [13]. Total Quality Management is the most advanced and most comprehensive, since it involves the whole organization. Elements of TQM include leadership commitment to quality, the use of reliable indicators of quality, and the involvement of front-line workers in quality improvement efforts [12]. The shift in the EMS quality management paradigm from pure quality assurance programs towards quality improvement took place after the adoption of the CQI concept by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) in 1992. EMS quality assessment then focused more on improving patient care through continuous measurement and input using key performance indicators [14].

4. EMS System Performance Indicators: Structure-Process-Outcome Model

Performance indicators are measurement tools that should be "specific, measurable, action oriented, relevant and timely" [15]. Three types of indicators are used to measure quality in patient care: structure, process, and outcome indicators (Table 1) [16–19]. EMS system performance indicators follow the same classification.

Table 1: Structure-process-outcome model for EMS system performance indicators.

Structural data are attributes of the setting in which care is provided [17]. These usually refer to the characteristics of the different components of an EMS system, including facilities, equipment, staffing, knowledge base of providers, credentialing, and deployment. Most structure indicators reflect standards developed at a local, regional, or national level through consensus building or by EMS organizations, administrators, or authorities. These indicators provide an indirect measure of quality and are difficult to relate to outcomes in patient care [19]. Since EMS system designs are diverse, as discussed above, these indicators may not be applicable to all systems. The emergency vehicle response time standard is the most commonly used structure measure in EMS. The goal is to respond to 90% of priority 1 calls (life threatening and highly time dependent) in less than 9 minutes [22]. Several EMS systems designed ambulance deployment strategies to meet this standard despite conflicting results from several studies about the effect of short response times on patient outcome in trauma [23–25] and the need for even shorter EMS times (around 4 minutes) to impact survival in out-of-hospital cardiac arrest [26, 27]. In the United Kingdom, the adoption of a similar time-target structure measure (8-minute response time for 75% of category A or emergency calls) by the National Service Framework for coronary artery disease as the main performance indicator was criticized and described by paramedics as a poor quality indicator that is "too simplistic and narrow" and that puts patients and ambulance crews at risk [28].
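
Because the 9-minute standard is a fractile target rather than an average, compliance is normally computed as the share of priority 1 calls answered within the threshold. A minimal Python sketch follows; the function name and sample data are hypothetical, while the 9-minute/90% figures follow the text:

```python
def fractile_compliance(response_minutes, threshold=9.0):
    """Fraction of calls with a response time under `threshold` minutes."""
    return sum(t < threshold for t in response_minutes) / len(response_minutes)

# Ten priority 1 calls: 8 of 10 answered in under 9 minutes.
calls = [4.2, 6.5, 8.9, 9.8, 5.1, 7.7, 12.0, 3.9, 8.2, 6.0]
rate = fractile_compliance(calls)
print(f"{rate:.0%} within 9 min -> "
      f"{'meets' if rate >= 0.90 else 'misses'} the 90% target")  # misses
```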

A second type of measure is process data: the components of the encounter between the prehospital provider and the patient, that is, an evaluation of the steps of the care provided. A process is the repeatable sequence of actions used across all EMS levels to produce good patient outcomes [19]. Process measures are more sensitive to differences in quality of care [29]. In contrast to structure and outcome measures, which provide an indirect approach to quality measurement, process measures allow a direct assessment of quality of care. The inputs from process measures are very useful for quality improvement programs since they are easy to interpret and act on [30, 31]. One disadvantage of using process measures to monitor quality is that they can become very complex as the clinical sophistication of the medical services provided in the prehospital setting increases. In the USA, EMS medical directors commonly use process measures when performing a structured implicit review of prehospital records (run sheets) to evaluate compliance with medical protocols and the appropriateness of the treatment provided. One example would be collecting specific data points on the process of endotracheal intubation performed by EMS providers to monitor the success rate of this procedure. A medical director can evaluate the technical skill of providers performing this procedure and their compliance with preestablished criteria, and decide which specific elements need improvement, such as mandating tube placement verification with end-tidal CO2 waveform documentation.
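
A minimal sketch of such a process audit, assuming hypothetical run-sheet fields (`success`, `etco2_documented`) rather than any standard EMS data dictionary:

```python
def intubation_process_measures(run_sheets):
    """Overall intubation success rate and end-tidal CO2
    documentation compliance across a set of run sheets."""
    n = len(run_sheets)
    return {
        "success_rate": sum(r["success"] for r in run_sheets) / n,
        "etco2_compliance": sum(r["etco2_documented"] for r in run_sheets) / n,
    }

sheets = [{"success": True,  "etco2_documented": True},
          {"success": True,  "etco2_documented": False},
          {"success": False, "etco2_documented": True}]
print(intubation_process_measures(sheets))
# {'success_rate': 0.667, 'etco2_compliance': 0.667} (rounded)
```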

A third type of measure is outcome data. These evaluate the change in a patient's subsequent health status in response to a clinical intervention. Numerous prehospital interventions are not yet evidence based [2, 31–33]. Outcome research in EMS focuses on determining the effectiveness of some of these interventions and on showing the true value of an EMS system, since it offers feedback on all aspects of care. Outcome data are easy to interpret, are readily understood by the different stakeholders (policymakers, patients, EMS providers, the public, etc.), and can be used to compare EMS systems. The adoption of pure outcome data as performance indicators is, however, not straightforward. For outcome data, and more specifically clinical outcome data, to be relevant performance indicators, accurate risk adjustment, standardization of definitions, and development of research models for each measured outcome are required [11, 29, 30]. Four explanations for the source of variation in outcome were described by Mant: differences in type of patient (patient characteristics), differences in measurement, chance (random variation), and differences in quality of care [30]. Adding to these challenges are the degree of sophistication of some prehospital treatment technologies, the operational complexity of the prehospital environment, and the difficulty of isolating the prehospital effect from the emergency department and hospital effect [11]. In an effort to overcome the barriers to outcome research and the adoption of outcome data as performance indicators for EMS systems, the US National Highway Traffic Safety Administration (NHTSA) launched the EMS Outcomes Project (EMSOP) in 1994 [10]. This project identified the priority conditions that should take precedence in EMS outcomes research, based on impact and frequency, and defined six outcome categories (the six D's): survival (death), impaired physiology (disease), limited disability (disability), alleviated discomfort (discomfort), satisfaction (dissatisfaction), and cost-effectiveness (destitution) [10]. Two framework models were also proposed to facilitate outcome measurement in EMS: the "Episode of Care Model" and the "Out-of-Hospital Unit of Service Model" [11]. The first model is used for high-priority (highly time-dependent) conditions to measure long-term outcomes (survival, physiologic derangement, long-term disability). The second is used for low-priority conditions (e.g., minor trauma) to measure short- or intermediate-term outcomes (patient satisfaction, relief of pain). Examples of core risk adjustment measures (RAMs) common to all EMS conditions (e.g., age, gender, and vital signs) and of condition-specific RAMs (e.g., peak flow measurement for asthma exacerbation) were also described [34]. All these steps were designed to facilitate outcome research and the adoption of outcome measures in evaluating quality in EMS.
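
One common indirect form of risk adjustment is an observed-to-expected ratio: observed survivors divided by the sum of model-predicted survival probabilities. The Python sketch below is a toy illustration under assumed field names; in practice the expected probabilities would come from a validated risk model built on core and condition-specific RAMs.

```python
def observed_to_expected(patients):
    """Observed survivors / sum of risk-model expected survival
    probabilities; a ratio above 1 suggests better-than-expected results."""
    observed = sum(p["survived"] for p in patients)
    expected = sum(p["expected_survival"] for p in patients)
    return observed / expected

# Hypothetical cohort with model-based expected survival probabilities.
cohort = [{"survived": 1, "expected_survival": 0.40},
          {"survived": 0, "expected_survival": 0.15},
          {"survived": 1, "expected_survival": 0.55}]
print(f"O/E ratio: {observed_to_expected(cohort):.2f}")  # 1.82
```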

Internationally, out-of-hospital cardiac arrest (OHCA) survival is the most common outcome measure used to compare EMS systems, and standardized risk adjustment measures and data collection forms are well defined (the Utstein template) [35]. Incorporating these elements into quality programs enables EMS administrators and medical directors to compare outcomes with other systems, to identify which specific components of the system are functioning properly and which are not, and to determine how changes can be implemented to improve cardiac arrest outcomes.
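
As an illustration, an Utstein-style comparison is usually restricted to the comparator subgroup of bystander-witnessed arrests with an initially shockable rhythm, so that systems are compared on similar patients. The field names below are assumptions, not the Utstein template's exact data elements:

```python
def utstein_comparator_survival(cases):
    """Survival to discharge among bystander-witnessed arrests with an
    initially shockable rhythm (the usual Utstein comparator group)."""
    group = [c for c in cases
             if c["bystander_witnessed"] and c["shockable_rhythm"]]
    if not group:
        return None
    return sum(c["survived_to_discharge"] for c in group) / len(group)

cases = [
    {"bystander_witnessed": True,  "shockable_rhythm": True,  "survived_to_discharge": 1},
    {"bystander_witnessed": True,  "shockable_rhythm": True,  "survived_to_discharge": 0},
    {"bystander_witnessed": False, "shockable_rhythm": True,  "survived_to_discharge": 0},
]
print(utstein_comparator_survival(cases))  # 0.5
```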

5. Transitioning from Theory to Practice

Relying on only one type of performance measures (structure, process, or outcome) can yield a very narrow perspective on quality care in EMS. The complexity of EMS systems requires a more comprehensive evaluation of the different components of the system.

One approach is to use a set of mixed indicators that cover different aspects of an EMS system. Several EMS stakeholders in the USA have proposed comprehensive sets of indicators: one set was proposed by the International Association of Fire Fighters and the National Fire Protection Association (NFPA) in several publications on standards for emergency medical operations (NFPA 1710), criteria for response times (NFPA 1720), and dispatch standards (NFPA 1221) [36]. Another set, of 35 consensus-based indicators, was proposed by the National Association of EMS Officials (NAEMSO) in 2007 at the end of the EMS Performance Measures Project, in an effort to identify a common set of specifically defined measures of EMS system performance [37]; a tool was also proposed for EMS agencies to measure these indicators properly. Other sets of indicators are used by international ambulance services, such as those used by the South Australian ambulance services, which are part of a performance framework encompassing "operational, managerial and strategic level" indicators [38]. The validity and practical application of these indicators are yet to be tested.

Another approach is to focus on a few high-impact clinical conditions and to use bundles of measures that are disease or condition specific in order to evaluate quality in the overall system. Evaluating the system's response to a few high-priority clinical conditions, considered "tracer conditions," can help predict the performance of the same elements in response to other clinical entities [39–41]. "Tracer conditions" such as trauma or cardiac arrest are clinical entities with high impact (mortality and morbidity, cost, and frequency) and potential for improved outcome [10, 40, 42]. The bundles of measures are similar to composite measures that link structure and process to outcome. The elements of a bundle are evidence-based measures that, when combined, lead to improved patient outcome. One example of evidence-based treatment bundles is the set of clinical performance indicators proposed in 2007 by the US Consortium of Metropolitan Municipalities EMS directors [20]. Six EMS priority conditions were selected based on available supporting evidence of an effective prehospital treatment and on a consensus of EMS experts. Specific outcomes were described in the form of the number needed to treat (NNT) and the harm avoided if the bundle measures were met in each case. A similar approach was used by the UK Care Quality Commission in proposing a different set of evidence-based indicators following the recommendations of the Joint Royal Colleges Ambulance Liaison Committee (JRCALC) in 2006 [21]. Comparing the US and UK sets reveals overlap between some of the clinical conditions and the indicators proposed (Table 2); the outcomes defined by the UK set were, however, less specific than those of the US set. The goal of these bundles, when used in combination with standardized outcome categories, is to establish evidence-based benchmarks or best practices for EMS systems or ambulance services and to allow comparison of performance between different systems [20, 43]. Prerequisites for using these bundles to compare performance between EMS systems include, but are not limited to, similarity in the infrastructure and clinical sophistication of the prehospital services and standardized data collection.
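
For readers unfamiliar with the NNT formulation used in those bundles, the number needed to treat is simply the reciprocal of the absolute risk reduction. A hypothetical worked example:

```python
def number_needed_to_treat(baseline_event_rate, treated_event_rate):
    """NNT = 1 / absolute risk reduction (ARR)."""
    return 1 / (baseline_event_rate - treated_event_rate)

# Hypothetical bundle: mortality falls from 12% to 8% when every bundle
# element is delivered, so ARR = 0.04 and about 25 patients must receive
# the full bundle to prevent one death.
print(round(number_needed_to_treat(0.12, 0.08)))  # 25
```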

Table 2: Comparison of EMS clinical performance indicators.

6. Implications for the Future

The ultimate goal of performance indicators in EMS is to measure the true value of the system. Much work has been done to find the right metrics for EMS system evaluation. Evidence-based bundles can be good measures of the effectiveness of the system for specific clinical conditions and patient outcomes, but they represent only one perspective on what good quality prehospital care means, and different stakeholders have different perspectives on quality care [44, 45]. A transition towards "Whole System Measures," defined by the Institute for Healthcare Improvement (IHI) as a "balanced set of system level measures which are aligned with the Institute of Medicine's (IOM's) six dimensions of quality and are not disease or condition specific," can help overcome some of the challenges of evaluating quality in EMS [46]. Patient satisfaction with care scores, the rate of adverse events, the incidence of occupational injuries and illnesses, and healthcare cost per capita are some examples of these whole system measures [46]. These measures would be part of a balanced scorecard or measurement dashboard with specific goals for improvement that are communicated across all levels of the EMS system, from prehospital providers to leadership and policy makers. Integrating whole system measures into EMS system evaluation can help answer the question: what value is the system adding to patient care, and what is the quality of the services provided?
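
In data terms, a measurement dashboard of this kind reduces to a small set of system-level measures, each with a goal and a direction of improvement. The sketch below is purely illustrative; the measure names echo the IHI examples in the text, and the values and goals are invented.

```python
from dataclasses import dataclass

@dataclass
class WholeSystemMeasure:
    name: str
    value: float
    goal: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        return (self.value >= self.goal) if self.higher_is_better \
               else (self.value <= self.goal)

dashboard = [
    WholeSystemMeasure("patient satisfaction score", 86.0, 90.0),
    WholeSystemMeasure("adverse events per 1000 runs", 2.1, 3.0, higher_is_better=False),
    WholeSystemMeasure("occupational injuries per 100 FTE", 4.8, 4.0, higher_is_better=False),
]
for m in dashboard:
    print(f"{m.name}: {'on track' if m.on_track() else 'needs improvement'}")
```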

References

  1. M. Callaham, "Quantifying the scanty science of prehospital emergency care," Annals of Emergency Medicine, vol. 30, no. 6, pp. 785–790, 1997.
  2. J. H. Brice, H. G. Garrison, and A. T. Evans, "Study design and outcomes in out-of-hospital emergency medicine research: a ten-year analysis," Prehospital Emergency Care, vol. 4, no. 2, pp. 144–150, 2000.
  3. A. Donabedian, Explorations in Quality Assessment and Monitoring, vol. 1 of The Definition of Quality and Approaches to Its Assessment, Health Administration Press, Ann Arbor, Mich, USA, 1980.
  4. American Medical Association, "Council on Medical Service: quality of care," JAMA, vol. 256, pp. 1032–1034, 1986.
  5. Institute of Medicine, Emergency Medical Services at a Crossroads, The National Academies Press, Washington, DC, USA, 2006.
  6. M. N. Shah, "The formation of the emergency medical services system," American Journal of Public Health, vol. 96, no. 3, pp. 414–423, 2006.
  7. D. M. Williams, "2006 JEMS 200-city survey: EMS from all angles," JEMS, vol. 32, no. 2, pp. 38–46, 2007.
  8. E. J. Sobo, S. Andriese, C. Stroup, D. Morgan, and P. Kurtin, "Developing indicators for emergency medical services (EMS) system evaluation and quality improvement: a statewide demonstration and planning project," The Joint Commission Journal on Quality Improvement, vol. 27, no. 3, pp. 138–154, 2001.
  9. D. W. Spaite, E. A. Criss, T. D. Valenzuela, and J. Guisto, "Emergency medical service systems research: problems of the past, challenges of the future," Annals of Emergency Medicine, vol. 26, no. 2, pp. 146–152, 1995.
  10. R. F. Maio, H. G. Garrison, D. W. Spaite et al., "Emergency medical services outcomes project I (EMSOP I): prioritizing conditions for outcomes research," Annals of Emergency Medicine, vol. 33, no. 4, pp. 423–432, 1999.
  11. D. W. Spaite, R. Maio, H. G. Garrison et al., "Emergency medical services outcomes project (EMSOP) II: developing the foundation and conceptual models for out-of-hospital outcomes research," Annals of Emergency Medicine, vol. 37, no. 6, pp. 657–663, 2001.
  12. G. Laffel and D. Blumenthal, "The case for using industrial quality management science in health care organizations," JAMA, vol. 262, no. 20, pp. 2869–2873, 1989.
  13. D. M. Berwick, "Continuous improvement as an ideal in health care," The New England Journal of Medicine, vol. 320, no. 1, pp. 53–56, 1989.
  14. C. J. Mattera, "The evolving change in paradigm from quality assurance to continuous quality improvement in prehospital care," Journal of Emergency Nursing, vol. 21, no. 1, pp. 46–52, 1995.
  15. J. Dunford, R. M. Domeier, T. Blackwell et al., "Performance measurements in emergency medical services," Prehospital Emergency Care, vol. 6, no. 1, pp. 92–98, 2002.
  16. M. Dagher and R. J. Lloyd, "Developing EMS quality assessment indicators," Prehospital and Disaster Medicine, vol. 7, no. 1, pp. 69–74, 1992.
  17. R. A. Swor, S. I. Rottman, R. G. Pirrallo, and E. Davis, Eds., Quality Management in Prehospital Care, Mosby, St. Louis, Mo, USA, 1993.
  18. R. H. Brook, E. A. McGlynn, and P. D. Cleary, "Part 2: measuring quality of care," The New England Journal of Medicine, vol. 335, no. 13, pp. 966–970, 1996.
  19. L. Moore, "Measuring quality and effectiveness of prehospital EMS," Prehospital Emergency Care, vol. 3, no. 4, pp. 325–331, 1999.
  20. J. B. Myers, C. M. Slovis, M. Eckstein et al., "Evidence-based performance measures for emergency medical services systems: a model for expanded EMS benchmarking," Prehospital Emergency Care, vol. 12, no. 2, pp. 141–151, 2008.
  21. The Department of Health, Office of Strategic Health Authorities, Emergency Services Review: A Comparative Review of International Ambulance Service Best Practice, 2009, http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_107335.pdf.
  22. J. Fitch, "Response times: myths, measurement & management," JEMS, vol. 30, no. 9, pp. 47–56, 2005.
  23. J. S. Sampalis, A. Lavoie, J. I. Williams, D. S. Mulder, and M. Kalina, "Impact of on-site care, prehospital time, and level of in-hospital care on survival in severely injured patients," Journal of Trauma, vol. 34, no. 2, pp. 252–261, 1993.
  24. R. W. Petri, A. Dyer, and J. Lumpkin, "The effect of prehospital transport time on the mortality from traumatic injury," Prehospital and Disaster Medicine, vol. 10, no. 1, pp. 24–29, 1995.
  25. C. D. Newgard, R. H. Schmicker, J. R. Hedges et al., "Emergency medical services intervals and survival in trauma: assessment of the "Golden Hour" in a North American prospective cohort," Annals of Emergency Medicine, vol. 55, no. 3, pp. 235–246, 2010.
  26. H. C. Abrams, P. H. Moyer, and K. S. Dyer, "A model of survival from out-of-hospital cardiac arrest using the Boston EMS arrest registry," Resuscitation, vol. 82, no. 8, pp. 999–1003, 2011.
  27. V. J. De Maio, I. G. Stiell, G. A. Wells, and D. W. Spaite, "Optimal defibrillation response intervals for maximum out-of-hospital cardiac arrest survival rates," Annals of Emergency Medicine, vol. 42, no. 2, pp. 242–250, 2003.
  28. L. Price, "Treating the clock and not the patient: ambulance response times and risk," Quality and Safety in Health Care, vol. 15, no. 2, pp. 127–130, 2006.
  29. H. R. Rubin, P. Pronovost, and G. B. Diette, "The advantages and disadvantages of process-based measures of health care quality," International Journal for Quality in Health Care, vol. 13, no. 6, pp. 469–474, 2001.
  30. J. Mant, "Process versus outcome indicators in the assessment of quality of health care," International Journal for Quality in Health Care, vol. 13, no. 6, pp. 475–480, 2001.
  31. D. W. Spaite, "Outcome analysis in EMS systems," Annals of Emergency Medicine, vol. 22, no. 8, pp. 1310–1311, 1993.
  32. S. A. McLean, R. F. Maio, D. W. Spaite, and H. G. Garrison, "Emergency medical services outcomes research: evaluating the effectiveness of prehospital care," Prehospital Emergency Care, vol. 6, no. 2, supplement, pp. S52–S56, 2002.
  33. K. L. Koenig, "Quo vadis: "scoop and run," "stay and treat," or "treat and street"?" Academic Emergency Medicine, vol. 2, no. 6, pp. 477–479, 1995.
  34. H. G. Garrison, R. F. Maio, D. W. Spaite et al., "Emergency Medical Services Outcomes Project III (EMSOP III): the role of risk adjustment in out-of-hospital outcomes research," Annals of Emergency Medicine, vol. 40, no. 1, pp. 79–88, 2002.
  35. R. O. Cummins, D. A. Chamberlain, N. S. Abramson et al., "Recommended guidelines for uniform reporting of data from out-of-hospital cardiac arrest: the Utstein style: a statement for health professionals from a task force of the American Heart Association, the European Resuscitation Council, the Heart and Stroke Foundation of Canada, and the Australian Resuscitation Council," Circulation, vol. 84, no. 2, pp. 960–975, 1991.
  36. National Fire Protection Association, Standard for the Organization and Deployment of Fire Suppression Operations, Emergency Medical Operations, and Special Operations to the Public by Career Fire Departments, Document No. 1710, http://www.nfpa.org.
  37. National Association of EMS Officials, EMS Performance Measures: Recommended Attributes and Indicators for System and Service Performance, EMS Performance Measures Project report, http://www.nasemso.org/Projects/PerformanceMeasures/.
  38. P. O'Meara, "A generic performance framework for ambulance services: an Australian health services perspective," Journal of Emergency Primary Health Care, vol. 3, no. 3, Article ID 990132, 2005.
  39. D. M. Kessner and C. E. Kalk, A Strategy for Evaluating Health Services, Institute of Medicine, Washington, DC, USA, 1973.
  40. D. M. Kessner, C. E. Kalk, and J. Singer, "Assessing health quality—the case for tracers," The New England Journal of Medicine, vol. 288, no. 4, pp. 189–194, 1973.
  41. I. G. Stiell, G. A. Wells, and B. J. Field, "Improved out-of-hospital cardiac arrest survival through the inexpensive optimization of an existing defibrillation program: OPALS study phase II. Ontario Prehospital Advanced Life Support," The New England Journal of Medicine, vol. 351, no. 7, pp. 647–656, 1999.
  42. E. A. McGlynn and S. M. Asch, "Developing a clinical performance measure," American Journal of Preventive Medicine, vol. 14, no. 3, pp. 14–21, 1998.
  43. A. N. Siriwardena, D. Shaw, R. Donohoe, S. Black, and J. Stephenson, "Development and pilot of clinical performance indicators for English ambulance services," Emergency Medicine Journal, vol. 27, no. 4, pp. 327–331, 2010.
  44. R. Grol, "Improving the quality of medical care: building bridges among professional pride, payer profit, and patient satisfaction," JAMA, vol. 286, no. 20, pp. 2578–2585, 2001.
  45. D. Blumenthal, "Part 1: quality of care—what is it?" The New England Journal of Medicine, vol. 335, no. 12, pp. 891–894, 1996.
  46. L. A. Martin, E. C. Nelson, R. C. Lloyd, and T. W. Nolan, Whole System Measures, IHI Innovation Series White Paper, Institute for Healthcare Improvement, Cambridge, Mass, USA, 2007.